Update 'byo' references based on playbook refactoring #7274

Merged: 1 commit merged on Feb 7, 2018

8 changes: 4 additions & 4 deletions admin_solutions/master_node_config.adoc
@@ -69,7 +69,7 @@ xref:../admin_solutions/master_node_config.adoc#master-node-config-manual[manual
For this section, familiarity with Ansible is assumed.

Only a portion of the available host configuration options are
https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example[exposed to Ansible].
https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example[exposed to Ansible].
After an {product-title} installation, Ansible creates an
inventory file with some substituted values. You customize your {product-title} cluster by modifying this inventory file and re-running the Ansible installer playbook.
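
For reference, a generated inventory typically has the following shape. This is only an illustrative sketch; the host names, user, and variable values shown here are placeholders, not values produced by any specific installation:
----
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
openshift_master_htpasswd_users={'jsmith': '<hashed_password>'}

[masters]
master.example.com

[nodes]
master.example.com
node1.example.com
----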

@@ -145,15 +145,15 @@ openshift_master_htpasswd_users={'jsmith': '$apr1$wIwXkFLI$bAygtKGmPOqaJftB', 'b
. Re-run the Ansible playbook for these modifications to take effect:
+
----
$ ansible-playbook -b -i ./hosts ~/src/openshift-ansible/playbooks/byo/config.yml
$ ansible-playbook -b -i ./hosts ~/src/openshift-ansible/playbooks/deploy_cluster.yml
----
+
The playbook updates the configuration and restarts the {product-title} master service to apply the changes.

You have now modified the master and node configuration files using Ansible, but this is just a simple use case. From here you can see which
xref:../admin_solutions/master_node_config.adoc#master-config-options[master] and
xref:../admin_solutions/master_node_config.adoc#node-config-options[node configuration] options are
https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example[exposed to Ansible] and customize your own Ansible inventory.
https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example[exposed to Ansible] and customize your own Ansible inventory.

[[htpasswd]]
==== Using the `htpasswd` command
@@ -398,7 +398,7 @@ a|Controls limits and behavior for importing images:
- `*ScheduledImageImportMinimumIntervalSeconds*` (integer): The minimum number of seconds that can elapse between when image streams scheduled for background import are checked against the upstream repository. The default value is `900` (15 minutes).
- `*MaxScheduledImageImportsPerMinute*` (integer): The maximum number of image streams that can be imported in the background, per minute. The default value is `60`. This can be set to `-1` for unlimited imports.

https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example[This can be controlled with the Ansible inventory].
https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example[This can be controlled with the Ansible inventory].

|`*kubernetesMasterConfig*`
|Contains information about how to connect to the kubelet. If present, the Kubernetes master is started with this process.
8 changes: 4 additions & 4 deletions install_config/cluster_metrics.adoc
@@ -862,7 +862,7 @@ openshift_prometheus_node_selector={"region":"infra"}

Run the playbook:
----
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/byo/openshift-cluster/openshift-prometheus.yml
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/openshift-prometheus/config.yml
----
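
In this command, `${INVENTORY_FILE}` is assumed to be a shell variable pointing at your Ansible inventory; the path below is only an illustrative example:
----
$ export INVENTORY_FILE=/path/to/inventory/hosts
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/openshift-prometheus/config.yml
----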

[[openshift-prometheus-additional-deploy]]
@@ -886,7 +886,7 @@ openshift_prometheus_node_selector={"${KEY}":"${VALUE}"}

Run the playbook:
----
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/byo/openshift-cluster/openshift-prometheus.yml
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/openshift-prometheus/config.yml
----
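
For example, assuming a hypothetical `zone=metrics` label on the target nodes, the corresponding inventory entry would read:
----
openshift_prometheus_node_selector={"zone":"metrics"}
----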

*Deploy Using a Non-default Namespace*
@@ -902,7 +902,7 @@ openshift_prometheus_namespace=${USER_PROJECT}

Run the playbook:
----
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/byo/openshift-cluster/openshift-prometheus.yml
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/openshift-prometheus/config.yml
----
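
For example, to deploy into a hypothetical project named `monitoring`, the inventory entry would read:
----
openshift_prometheus_namespace=monitoring
----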

[[openshift-prometheus-web]]
@@ -1037,5 +1037,5 @@ gathered from the `http://${POD_IP}:7575/metrics` endpoint.
To undeploy Prometheus, run:

----
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/byo/openshift-cluster/openshift-prometheus.yml -e openshift_prometheus_state=absent
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/openshift-prometheus/config.yml -e openshift_prometheus_state=absent
----
47 changes: 25 additions & 22 deletions install_config/install/advanced_install.adoc
@@ -2309,11 +2309,11 @@ other than *_/etc/ansible/hosts_*:
----
ifdef::openshift-enterprise[]
# ansible-playbook [-i /path/to/inventory] \
/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
endif::[]
ifdef::openshift-origin[]
# ansible-playbook [-i /path/to/inventory] \
~/openshift-ansible/playbooks/byo/config.yml
~/openshift-ansible/playbooks/deploy_cluster.yml
endif::[]
----

@@ -2379,7 +2379,7 @@ or workarounds.

You can use the `PLAYBOOK_FILE` environment variable to specify other playbooks
to run with the containerized installer. The default value of `PLAYBOOK_FILE` is
*_/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml_*, which is the
*_/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml_*, which is the
main cluster installation playbook, but you can set it to the path of another
playbook inside the container.

@@ -2391,7 +2391,7 @@ installation, use the following command:
# atomic install --system \
--storage=ostree \
--set INVENTORY_FILE=/path/to/inventory \
--set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \ <1>
--set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/openshift-checks/pre-install.yml \ <1>
--set OPTS="-v" \ <2>
ifdef::openshift-enterprise[]
registry.access.redhat.com/openshift3/ose-ansible:v3.7
@@ -2436,7 +2436,7 @@ $ docker run -t -u `id -u` \ <1>
-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ <2>
-v $HOME/ansible/hosts:/tmp/inventory:Z \ <3>
-e INVENTORY_FILE=/tmp/inventory \ <3>
-e PLAYBOOK_FILE=playbooks/byo/config.yml \ <4>
-e PLAYBOOK_FILE=playbooks/deploy_cluster.yml \ <4>
-e OPTS="-v" \ <5>
ifdef::openshift-enterprise[]
registry.access.redhat.com/openshift3/ose-ansible:v3.7
@@ -2481,8 +2481,8 @@ The inventory file can also be downloaded from a web server if you specify
the `INVENTORY_URL` environment variable, or generated dynamically using
`DYNAMIC_SCRIPT_URL` to specify an executable script that provides a
dynamic inventory.
<4> `-e PLAYBOOK_FILE=playbooks/byo/config.yml` specifies the playbook
to run (in this example, the BYO installer) as a relative path from the
<4> `-e PLAYBOOK_FILE=playbooks/deploy_cluster.yml` specifies the playbook
to run (in this example, the default installer) as a relative path from the
top level directory of *openshift-ansible* content. The full path from the
RPM can also be used, as well as the path to any other playbook file in
the container.
@@ -2493,7 +2493,7 @@ inside the container.
[[running-the-advanced-installation-individual-components]]
=== Running Individual Component Playbooks

The main installation playbook *_{pb-prefix}playbooks/byo/config.yml_* runs a
The main installation playbook *_{pb-prefix}playbooks/deploy_cluster.yml_* runs a
set of individual component playbooks in a specific order. At the end, the installer
reports which phases have completed. If the installation
fails during a phase, you are notified on the screen along with the errors from
@@ -2516,46 +2516,49 @@ playbook is run:
|Playbook Name |File Location

|Health Check
|*_{pb-prefix}playbooks/byo/openshift-checks/pre-install.yml_*
|*_{pb-prefix}playbooks/openshift-checks/pre-install.yml_*

|etcd Install
|*_{pb-prefix}playbooks/byo/openshift-etcd/config.yml_*
|*_{pb-prefix}playbooks/openshift-etcd/config.yml_*

|NFS Install
|*_{pb-prefix}playbooks/byo/openshift-nfs/config.yml_*
|*_{pb-prefix}playbooks/openshift-nfs/config.yml_*

|Load Balancer Install
|*_{pb-prefix}playbooks/byo/openshift-loadbalancer/config.yml_*
|*_{pb-prefix}playbooks/openshift-loadbalancer/config.yml_*

|Master Install
|*_{pb-prefix}playbooks/byo/openshift-master/config.yml_*
|*_{pb-prefix}playbooks/openshift-master/config.yml_*

|Master Additional Install
|*_{pb-prefix}playbooks/byo/openshift-master/additional_config.yml_*
|*_{pb-prefix}playbooks/openshift-master/additional_config.yml_*

|Node Install
|*_{pb-prefix}playbooks/byo/openshift-node/config.yml_*
|*_{pb-prefix}playbooks/openshift-node/config.yml_*

|GlusterFS Install
|*_{pb-prefix}playbooks/byo/openshift-glusterfs/config.yml_*
|*_{pb-prefix}playbooks/openshift-glusterfs/config.yml_*

|Hosted Install
|*_{pb-prefix}playbooks/byo/openshift-cluster/openshift-hosted.yml_*
|*_{pb-prefix}playbooks/openshift-hosted/config.yml_*

|Web Console Install
|*_{pb-prefix}playbooks/openshift-web-console/config.yml_*

|Metrics Install
|*_{pb-prefix}playbooks/byo/openshift-cluster/openshift-metrics.yml_*
|*_{pb-prefix}playbooks/openshift-metrics/config.yml_*

|Logging Install
|*_{pb-prefix}playbooks/byo/openshift-cluster/openshift-logging.yml_*
|*_{pb-prefix}playbooks/openshift-logging/config.yml_*

|Prometheus Install
|*_{pb-prefix}playbooks/byo/openshift-cluster/openshift-prometheus.yml_*
|*_{pb-prefix}playbooks/openshift-prometheus/config.yml_*

|Service Catalog Install
|*_{pb-prefix}playbooks/byo/openshift-cluster/service-catalog.yml_*
|*_{pb-prefix}playbooks/openshift-service-catalog/config.yml_*

|Management Install
|*_{pb-prefix}playbooks/byo/openshift-management/config.yml_*
|*_{pb-prefix}playbooks/openshift-management/config.yml_*
|===
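
For example, to run only the metrics phase against an existing cluster, invoke that component playbook directly. This sketch assumes the RPM installation location and uses a placeholder inventory path:
----
# ansible-playbook -i </path/to/inventory/file> \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml
----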

[[advanced-verifying-the-installation]]
2 changes: 1 addition & 1 deletion install_config/install/stand_alone_registry.adoc
@@ -275,7 +275,7 @@ After you have configured Ansible by defining an inventory file in
following playbook:

----
# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
----

[NOTE]
16 changes: 8 additions & 8 deletions install_config/redeploying_certificates.adoc
@@ -289,7 +289,7 @@ To redeploy master, etcd, and node certificates using the current

----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-certificates.yml
/usr/share/ansible/openshift-ansible/playbooks/redeploy-certificates.yml
----

[[redeploying-new-custom-ca]]
@@ -336,7 +336,7 @@ step.
+
----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-openshift-ca.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-master/redeploy-openshift-ca.yml
----

With the new {product-title} CA in place, you can then use the
@@ -366,7 +366,7 @@ To redeploy a newly generated etcd CA:
+
----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-etcd-ca.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/redeploy-ca.yml
----

With the new etcd CA in place, you can then use the
@@ -385,7 +385,7 @@ file:

----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-master-certificates.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-master/redeploy-certificates.yml
----

[[redeploying-etcd-certificates]]
@@ -404,7 +404,7 @@ file:

----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-etcd-certificates.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/redeploy-certificates.yml
----

[[redeploying-node-certificates]]
@@ -418,7 +418,7 @@ file:

----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-node-certificates.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-node/redeploy-certificates.yml
----

[[redeploying-registry-router-certificates]]
@@ -439,7 +439,7 @@ inventory file:

----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-registry-certificates.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-hosted/redeploy-registry-certificates.yml
----

[[redeploying-router-certificates]]
@@ -450,7 +450,7 @@ inventory file:

----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-router-certificates.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-hosted/redeploy-router-certificates.yml
----

[[redeploying-custom-registry-or-router-certificates]]
@@ -224,7 +224,13 @@ $ gluster volume info
== Dynamically Provision a Volume
[NOTE]
====
If you installed {product-title} by using the link:https://github.com/openshift/openshift-ansible/tree/master/inventory/byo[BYO (Bring your own) OpenShift Ansible inventory configuration files] for either link:https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.byo.glusterfs.native.example[native] or link:https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.byo.glusterfs.external.example[external] GlusterFS instance, the GlusterFS StorageClass automatically get created during the installation. For such cases you can skip the following storage class creation steps and directly proceed with creating persistent volume claim instruction.
If you installed {product-title} by using the
link:https://github.com/openshift/openshift-ansible/tree/master/inventory/[OpenShift Ansible example inventory configuration files] for either
link:https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.glusterfs.native.example[native] or
link:https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.glusterfs.external.example[external]
GlusterFS instance, the GlusterFS StorageClass is created automatically during
the installation. In that case, you can skip the following StorageClass creation
steps and proceed directly to creating a persistent volume claim.
====
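
Before creating a StorageClass manually, you can check whether the installer already created one; the exact class name depends on your installation:
----
$ oc get storageclass
----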

. Create a `StorageClass` object definition. The following definition is based on the
2 changes: 1 addition & 1 deletion install_config/upgrading/automated_upgrades.adoc
@@ -559,7 +559,7 @@ xref:../../install_config/install/advanced_install.adoc#install-config-install-a
+
----
# ansible-playbook -i </path/to/inventory/file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/service-catalog.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-service-catalog/config.yml
----
// end::automated-service-catalog-upgrade-steps[]
