@@ -2328,7 +2328,7 @@ concern for the install restarting *docker* on the host.
----
# atomic install --system \
--storage=ostree \
- --set INVENTORY_FILE=/path/to/inventory \// <1>
+ --set INVENTORY_FILE=/path/to/inventory \ <1>
ifdef::openshift-enterprise[]
registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
@@ -2338,11 +2338,12 @@ endif::[]
----
<1> Specify the location on the local host for your inventory file.
+
- This command initiates the cluster installation using the inventory file specified and the `root` user's
- SSH configuration, logging to the terminal as well as *_/var/log/ansible.log_*. The first time this is executed, the image is imported into
+ This command initiates the cluster installation by using the inventory file specified and the `root` user's
+ SSH configuration. It logs the output on the terminal and also saves it in the *_/var/log/ansible.log_* file.
+ The first time this command is run, the image is imported into
link:https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/content_management_guide/managing_ostree_content[OSTree]
- storage (system containers use this instead of *docker* daemon storage).
- Later executions re-use the stored image.
+ storage (system containers use this rather than *docker* daemon storage).
+ On subsequent runs, it reuses the stored image.
+
If for any reason the installation fails, before re-running the installer, see
xref:installer-known-issues[Known Issues] to check for any specific instructions
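For orientation, the workflow this hunk describes could look like the following on an enterprise host. This is only a hedged sketch: *_/root/inventory/hosts_* stands in for your real inventory path, the image name follows the enterprise variant shown above, and the follow-up commands assume the host's `atomic` CLI provides `atomic images list` to confirm that the image landed in OSTree storage; `tail -f` simply follows the same output that is written to *_/var/log/ansible.log_*.

----
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/root/inventory/hosts \
    registry.access.redhat.com/openshift3/ose-ansible:v3.7

# atomic images list
# tail -f /var/log/ansible.log
----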
@@ -2351,22 +2352,22 @@ or workarounds.
[[running-the-advanced-installation-system-container-other-playbooks]]
==== Running Other Playbooks

- To run any other playbooks (for example, to run the
- xref:configuring-cluster-pre-install-checks[pre-install checks]
- before proceeding to install) using the containerized installer, you can
- specify a playbook with the `PLAYBOOK_FILE` environment variable. The
- default value is *_/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml_*, which is the main cluster
- installation playbook, but you can set it to the path of another playbook
- inside the container.
+ You can use the `PLAYBOOK_FILE` environment variable to specify other playbooks
+ you want to run by using the containerized installer. The default value of the `PLAYBOOK_FILE` is
+ *_/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml_*, which is the
+ main cluster installation playbook, but you can set it to the path of another
+ playbook inside the container.

- For example:
+ For example, to run the
+ xref:configuring-cluster-pre-install-checks[pre-install checks] playbook before
+ installation, use the following command:

----
# atomic install --system \
--storage=ostree \
--set INVENTORY_FILE=/path/to/inventory \
- --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \// <1>
- --set OPTS="-v" \// <2>
+ --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \ <1>
+ --set OPTS="-v" \ <2>
ifdef::openshift-enterprise[]
registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
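As a further illustration of the `PLAYBOOK_FILE` variable this hunk documents, the sketch below points it at a different playbook. The *_/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/health.yml_* path is an assumption about what ships inside the installer image; substitute the path of whichever playbook you actually want to run.

----
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/health.yml \
    --set OPTS="-v" \
    registry.access.redhat.com/openshift3/ose-ansible:v3.7
----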
@@ -2386,9 +2387,8 @@ The installer image can also run as a *docker* container anywhere that *docker*

[WARNING]
====
- This method should not be used to run the installer on one of the hosts being configured,
- as the install typically restarts *docker* on the host, which would disrupt the installer
- container's own execution.
+ This method must not be used to run the installer on one of the hosts being configured,
+ as the install may restart *docker* on the host, disrupting the installer container execution.
====

[NOTE]
@@ -2399,60 +2399,55 @@ run with different entry points and contexts, so runtime parameters are not the

At a minimum, when running the installer as a *docker* container you must provide:

- * SSH key(s) so that Ansible can reach your hosts.
+ * SSH key(s), so that Ansible can reach your hosts.
* An Ansible inventory file.
- * The location of an Ansible playbook to run against that inventory.
+ * The location of the Ansible playbook to run against that inventory.

Here is an example of how to run an install via *docker*.
Note that this must be run by a non-`root` user with access to *docker*.

----
- $ docker run -t -u `id -u` \
- -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
- -v $HOME/ansible/hosts:/tmp/inventory:Z \
- -e INVENTORY_FILE=/tmp/inventory \
- -e PLAYBOOK_FILE=playbooks/byo/config.yml \
- -e OPTS="-v" \
+ $ docker run -t -u `id -u` \ <1>
+ -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ <2>
+ -v $HOME/ansible/hosts:/tmp/inventory:Z \ <3>
+ -e INVENTORY_FILE=/tmp/inventory \ <3>
+ -e PLAYBOOK_FILE=playbooks/byo/config.yml \ <4>
+ -e OPTS="-v" \ <5>
ifdef::openshift-enterprise[]
registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
ifdef::openshift-origin[]
docker.io/openshift/origin-ansible:v3.7
endif::[]
----
-
- The following is a detailed explanation of the options used in the command above.
-
- * `-u `id -u`` makes the container run with the same UID as the current
+ <1> `-u `id -u`` makes the container run with the same UID as the current
user, which allows that user to use the SSH key inside the container (SSH
private keys are expected to be readable only by their owner).
-
- * `-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z` mounts your
+ <2> `-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z` mounts your
SSH key (`$HOME/.ssh/id_rsa`) under the container user's `$HOME/.ssh`
(*_/opt/app-root/src_* is the `$HOME` of the user in the container). If
you mount the SSH key into a non-standard location you can add an
environment variable with `-e ANSIBLE_PRIVATE_KEY_FILE=/the/mount/point`
or set `ansible_ssh_private_key_file=/the/mount/point` as a variable in
the inventory to point Ansible at it.
+
- Note that the SSH key is mounted with the `:Z` flag: this is
+ Note that the SSH key is mounted with the `:Z` flag. This is
required so that the container can read the SSH key under
- its restricted SELinux context; this also means that your
+ its restricted SELinux context. This also means that your
original SSH key file will be re-labeled to something like
`system_u:object_r:container_file_t:s0:c113,c247`. For more details
about `:Z` please check the `docker-run(1)` man page. Keep this in mind
- when providing these volume mount specifications because this could
+ when providing these volume mount specifications because this might
have unexpected consequences: for example, if you mount (and therefore
re-label) your whole `$HOME/.ssh` directory it will block the host's
*sshd* from accessing your public keys to login. For this reason you
may want to use a separate copy of the SSH key (or directory), so that
the original file labels remain untouched.
-
- * `-v $HOME/ansible/hosts:/tmp/inventory:Z` and `-e INVENTORY_FILE=/tmp/inventory`
+ <3> `-v $HOME/ansible/hosts:/tmp/inventory:Z` and `-e INVENTORY_FILE=/tmp/inventory`
mount a static Ansible inventory file into the container as
*_/tmp/inventory_* and set the corresponding environment variable to
point at it. As with the SSH key, the inventory file SELinux labels may
- need to be relabeled via the `:Z` flag to allow reading in the container,
+ need to be relabeled by using the `:Z` flag to allow reading in the container,
depending on the existing label (for files in a user `$HOME` directory
this is likely to be needed). So again you may prefer to copy the
inventory to a dedicated location before mounting it.
@@ -2461,14 +2456,12 @@ The inventory file can also be downloaded from a web server if you specify
the `INVENTORY_URL` environment variable, or generated dynamically using
`DYNAMIC_SCRIPT_URL` to specify an executable script that provides a
dynamic inventory.
-
- * `-e PLAYBOOK_FILE=playbooks/byo/config.yml` specifies the playbook
+ <4> `-e PLAYBOOK_FILE=playbooks/byo/config.yml` specifies the playbook
to run (in this example, the BYO installer) as a relative path from the
top level directory of *openshift-ansible* content. The full path from the
RPM can also be used, as well as the path to any other playbook file in
the container.
-
- * `-e OPTS="-v"` supplies arbitrary command line options (in this case,
+ <5> `-e OPTS="-v"` supplies arbitrary command line options (in this case,
`-v` to increase verbosity) to the `ansible-playbook` command that runs
inside the container.
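Tying the callouts above together, the following is a hedged sketch of a variant run that uses a dedicated copy of the SSH key in a non-standard location (so the original file labels stay untouched, per callout <2>) and fetches the inventory from a web server through `INVENTORY_URL` instead of mounting a file (per the note under callout <3>). The *_/tmp/install-key_* directory and the example URL are illustrative placeholders only, and the image name follows the enterprise variant shown above.

----
$ mkdir -p /tmp/install-key
$ install -m 600 $HOME/.ssh/id_rsa /tmp/install-key/id_rsa
$ docker run -t -u `id -u` \
    -v /tmp/install-key/id_rsa:/tmp/install-key/id_rsa:Z \
    -e ANSIBLE_PRIVATE_KEY_FILE=/tmp/install-key/id_rsa \
    -e INVENTORY_URL=https://example.com/ansible/hosts \
    -e PLAYBOOK_FILE=playbooks/byo/config.yml \
    -e OPTS="-v" \
    registry.access.redhat.com/openshift3/ose-ansible:v3.7
----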