Commit d85436f (1 parent: 7c24c89)

advanced_install: tweaks per peer review

1 file changed: 35 additions, 42 deletions


install_config/install/advanced_install.adoc

@@ -2328,7 +2328,7 @@ concern for the install restarting *docker* on the host.
 ----
 # atomic install --system \
     --storage=ostree \
-    --set INVENTORY_FILE=/path/to/inventory \//<1>
+    --set INVENTORY_FILE=/path/to/inventory \ <1>
 ifdef::openshift-enterprise[]
     registry.access.redhat.com/openshift3/ose-ansible:v3.7
 endif::[]
@@ -2338,11 +2338,12 @@ endif::[]
 ----
 <1> Specify the location on the local host for your inventory file.
 +
-This command initiates the cluster installation using the inventory file specified and the `root` user's
-SSH configuration, logging to the terminal as well as *_/var/log/ansible.log_*. The first time this is executed, the image is imported into
+This command initiates the cluster installation by using the inventory file specified and the `root` user's
+SSH configuration. It logs the output on the terminal and also saves it in the *_/var/log/ansible.log_* file.
+The first time this command is run, the image is imported into
 link:https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/content_management_guide/managing_ostree_content[OSTree]
-storage (system containers use this instead of *docker* daemon storage).
-Later executions re-use the stored image.
+storage (system containers use this rather than *docker* daemon storage).
+On subsequent runs, it reuses the stored image.
 +
 If for any reason the installation fails, before re-running the installer, see
 xref:installer-known-issues[Known Issues] to check for any specific instructions
@@ -2351,22 +2352,22 @@ or workarounds.
 [[running-the-advanced-installation-system-container-other-playbooks]]
 ==== Running Other Playbooks
 
-To run any other playbooks (for example, to run the
-xref:configuring-cluster-pre-install-checks[pre-install checks]
-before proceeding to install) using the containerized installer, you can
-specify a playbook with the `PLAYBOOK_FILE` environment variable. The
-default value is *_/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml_*, which is the main cluster
-installation playbook, but you can set it to the path of another playbook
-inside the container.
+You can use the `PLAYBOOK_FILE` environment variable to specify other playbooks
+you want to run by using the containerized installer. The default value of the `PLAYBOOK_FILE` is
+*_/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml_*, which is the
+main cluster installation playbook, but you can set it to the path of another
+playbook inside the container.
 
-For example:
+For example, to run the
+xref:configuring-cluster-pre-install-checks[pre-install checks] playbook before
+installation, use the following command:
 
 ----
 # atomic install --system \
     --storage=ostree \
     --set INVENTORY_FILE=/path/to/inventory \
-    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \//<1>
-    --set OPTS="-v" \//<2>
+    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \ <1>
+    --set OPTS="-v" \ <2>
 ifdef::openshift-enterprise[]
     registry.access.redhat.com/openshift3/ose-ansible:v3.7
 endif::[]
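Both commands above pass `--set INVENTORY_FILE=/path/to/inventory` to point the installer at an Ansible inventory on the host. For context only (this sketch is not part of the commit), a minimal inventory of the shape openshift-ansible BYO playbooks expect might look as follows; the host names and the `/tmp/demo` path are placeholders:

```shell
# Write a minimal illustrative inventory (hypothetical hosts; a real
# inventory needs the variables your cluster actually requires).
mkdir -p /tmp/demo
cat > /tmp/demo/inventory <<'EOF'
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root

[masters]
master.example.com

[nodes]
master.example.com
node1.example.com
EOF
# Count the INI-style section headers that were written:
grep -c '^\[' /tmp/demo/inventory   # → 4
```

The file written here is what `INVENTORY_FILE` would be set to (`--set INVENTORY_FILE=/tmp/demo/inventory` in this sketch).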
@@ -2386,9 +2387,8 @@ The installer image can also run as a *docker* container anywhere that *docker*
 
 [WARNING]
 ====
-This method should not be used to run the installer on one of the hosts being configured,
-as the install typically restarts *docker* on the host, which would disrupt the installer
-container's own execution.
+This method must not be used to run the installer on one of the hosts being configured,
+as the install may restart *docker* on the host, disrupting the installer container execution.
 ====
 
 [NOTE]
@@ -2399,60 +2399,55 @@ run with different entry points and contexts, so runtime parameters are not the
 
 At a minimum, when running the installer as a *docker* container you must provide:
 
-* SSH key(s) so that Ansible can reach your hosts.
+* SSH key(s), so that Ansible can reach your hosts.
 * An Ansible inventory file.
-* The location of an Ansible playbook to run against that inventory.
+* The location of the Ansible playbook to run against that inventory.
 
 Here is an example of how to run an install via *docker*.
 Note that this must be run by a non-`root` user with access to *docker*.
 
 ----
-$ docker run -t -u `id -u` \
-    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
-    -v $HOME/ansible/hosts:/tmp/inventory:Z \
-    -e INVENTORY_FILE=/tmp/inventory \
-    -e PLAYBOOK_FILE=playbooks/byo/config.yml \
-    -e OPTS="-v" \
+$ docker run -t -u `id -u` \ <1>
+    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ <2>
+    -v $HOME/ansible/hosts:/tmp/inventory:Z \ <3>
+    -e INVENTORY_FILE=/tmp/inventory \ <3>
+    -e PLAYBOOK_FILE=playbooks/byo/config.yml \ <4>
+    -e OPTS="-v" \ <5>
 ifdef::openshift-enterprise[]
     registry.access.redhat.com/openshift3/ose-ansible:v3.7
 endif::[]
 ifdef::openshift-origin[]
     docker.io/openshift/origin-ansible:v3.7
 endif::[]
 ----
-
-The following is a detailed explanation of the options used in the command above.
-
-* `-u `id -u`` makes the container run with the same UID as the current
+<1> `-u `id -u`` makes the container run with the same UID as the current
 user, which allows that user to use the SSH key inside the container (SSH
 private keys are expected to be readable only by their owner).
-
-* `-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z` mounts your
+<2> `-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z` mounts your
 SSH key (`$HOME/.ssh/id_rsa`) under the container user's `$HOME/.ssh`
 (*_/opt/app-root/src_* is the `$HOME` of the user in the container). If
 you mount the SSH key into a non-standard location you can add an
 environment variable with `-e ANSIBLE_PRIVATE_KEY_FILE=/the/mount/point`
 or set `ansible_ssh_private_key_file=/the/mount/point` as a variable in
 the inventory to point Ansible at it.
 +
-Note that the SSH key is mounted with the `:Z` flag: this is
+Note that the SSH key is mounted with the `:Z` flag. This is
 required so that the container can read the SSH key under
-its restricted SELinux context; this also means that your
+its restricted SELinux context. This also means that your
 original SSH key file will be re-labeled to something like
 `system_u:object_r:container_file_t:s0:c113,c247`. For more details
 about `:Z` please check the `docker-run(1)` man page. Keep this in mind
-when providing these volume mount specifications because this could
+when providing these volume mount specifications because this might
 have unexpected consequences: for example, if you mount (and therefore
 re-label) your whole `$HOME/.ssh` directory it will block the host's
 *sshd* from accessing your public keys to login. For this reason you
 may want to use a separate copy of the SSH key (or directory), so that
 the original file labels remain untouched.
-
-* `-v $HOME/ansible/hosts:/tmp/inventory:Z` and `-e INVENTORY_FILE=/tmp/inventory`
+<3> `-v $HOME/ansible/hosts:/tmp/inventory:Z` and `-e INVENTORY_FILE=/tmp/inventory`
 mount a static Ansible inventory file into the container as
 *_/tmp/inventory_* and set the corresponding environment variable to
 point at it. As with the SSH key, the inventory file SELinux labels may
-need to be relabeled via the `:Z` flag to allow reading in the container,
+need to be relabeled by using the `:Z` flag to allow reading in the container,
 depending on the existing label (for files in a user `$HOME` directory
 this is likely to be needed). So again you may prefer to copy the
 inventory to a dedicated location before mounting it.
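The hunk above recommends mounting a separate copy of the SSH key so that `:Z` relabels only the copy and the original labels under `~/.ssh` stay untouched. A sketch of that preparation step (the `installer-keys` directory name is illustrative, not from the docs):

```shell
# Keep a dedicated copy of the private key; the :Z volume mount will
# then relabel only this copy, not the original under ~/.ssh.
mkdir -p "$HOME/installer-keys"
if [ -f "$HOME/.ssh/id_rsa" ]; then
    cp "$HOME/.ssh/id_rsa" "$HOME/installer-keys/id_rsa"
else
    touch "$HOME/installer-keys/id_rsa"   # placeholder so the demo runs without a real key
fi
chmod 600 "$HOME/installer-keys/id_rsa"   # SSH requires owner-only permissions on private keys

# Then mount the copy instead of the original, for example:
#   -v $HOME/installer-keys/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z
```

The same copy-before-mount approach applies to the inventory file, as the last lines of the hunk note.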
@@ -2461,14 +2456,12 @@ The inventory file can also be downloaded from a web server if you specify
 the `INVENTORY_URL` environment variable, or generated dynamically using
 `DYNAMIC_SCRIPT_URL` to specify an executable script that provides a
 dynamic inventory.
-
-* `-e PLAYBOOK_FILE=playbooks/byo/config.yml` specifies the playbook
+<4> `-e PLAYBOOK_FILE=playbooks/byo/config.yml` specifies the playbook
 to run (in this example, the BYO installer) as a relative path from the
 top level directory of *openshift-ansible* content. The full path from the
 RPM can also be used, as well as the path to any other playbook file in
 the container.
-
-* `-e OPTS="-v"` supplies arbitrary command line options (in this case,
+<5> `-e OPTS="-v"` supplies arbitrary command line options (in this case,
 `-v` to increase verbosity) to the `ansible-playbook` command that runs
 inside the container.
