@@ -2305,115 +2305,174 @@ If for any reason the installation fails, before re-running the installer, see
xref:installer-known-issues[Known Issues] to check for any specific
instructions or workarounds.
- [[running-the-advanced-installation-system-container]]
+ [[running-the-advanced-installation-containerized]]
=== Running the Containerized Installer
- include::install_config/install/advanced_install.adoc[tag=syscontainers_techpreview]
-
The
ifdef::openshift-enterprise[]
*openshift3/ose-ansible*
endif::[]
ifdef::openshift-origin[]
*openshift/origin-ansible*
endif::[]
- image is a containerized version of the {product-title} installer that runs as a
- link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_system_containers[system container]. System containers are stored and run outside of the traditional
- *docker* service. Functionally, using the containerized installer is the same as
- using the traditional RPM-based installer, except it is running in a
- containerized environment instead of directly on the host.
+ image is a containerized version of the {product-title} installer.
+ This installer image provides the same functionality as the RPM-based
+ installer, but it runs in a containerized environment that provides all
+ of its dependencies rather than being installed directly on the host.
+ The only requirement to use it is the ability to run a container.
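+
+ For example, on a host where *docker* is available, you can optionally
+ verify access to the installer image ahead of time by pulling it manually
+ (a quick sanity check; the commands in the following sections pull the
+ image automatically if it is not already present):
+
+ ----
+ ifdef::openshift-enterprise[]
+ $ docker pull registry.access.redhat.com/openshift3/ose-ansible:v3.7
+ endif::[]
+ ifdef::openshift-origin[]
+ $ docker pull docker.io/openshift/origin-ansible:v3.7
+ endif::[]
+ ----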
+
+ [[running-the-advanced-installation-system-container]]
+ ==== Running the Installer as a System Container
+
+ include::install_config/install/advanced_install.adoc[tag=syscontainers_techpreview]
- . Use the Docker CLI to pull the image locally:
+ The installer image can be used as a
+ link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_system_containers[system container].
+ System containers are stored and run outside of the traditional *docker* service.
+ This enables running the installer image from one of the target hosts without
+ concern for the install restarting *docker* on the host.
+
+ . As the `root` user, use the Atomic CLI to run the installer as a run-once system container:
+
----
+ # atomic install --system \
+ --storage=ostree \
+ --set INVENTORY_FILE=/path/to/inventory \ <1>
ifdef::openshift-enterprise[]
- $ docker pull registry.access.redhat.com/openshift3/ose-ansible:v3.7
+ registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
ifdef::openshift-origin[]
- $ docker pull docker.io/openshift/origin-ansible:v3.7
+ docker.io/openshift/origin-ansible:v3.7
endif::[]
----
-
- . The installer system container must be stored in
+ <1> Specify the location on the local host for your inventory file.
+ +
+ This command initiates the cluster installation by using the inventory file specified and the `root` user's
+ SSH configuration. It logs the output on the terminal and also saves it in the *_/var/log/ansible.log_* file.
+ The first time this command is run, the image is imported into
link:https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/content_management_guide/managing_ostree_content[OSTree]
- instead of defaulting to *docker* daemon storage. Use the Atomic CLI to import
- the installer image from the local *docker* engine to OSTree storage:
+ storage (system containers use this rather than *docker* daemon storage).
+ On subsequent runs, it reuses the stored image.
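+ +
+ If you want to confirm that the image was imported, one way is to list the
+ locally stored images with the Atomic CLI after the first run (a quick
+ check, not required for the installation):
+ +
+ ----
+ # atomic images list
+ ----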
+
- ----
- $ atomic pull --storage ostree \
- ifdef::openshift-enterprise[]
- docker:registry.access.redhat.com/openshift3/ose-ansible:v3.7
- endif::[]
- ifdef::openshift-origin[]
- docker:docker.io/openshift/origin-ansible:v3.7
- endif::[]
- ----
+ If for any reason the installation fails, before re-running the installer, see
+ xref:installer-known-issues[Known Issues] to check for any specific instructions
+ or workarounds.
+
+ [[running-the-advanced-installation-system-container-other-playbooks]]
+ ==== Running Other Playbooks
+
+ You can use the `PLAYBOOK_FILE` environment variable to specify other playbooks
+ to run with the containerized installer. The default value of `PLAYBOOK_FILE` is
+ *_/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml_*, which is the
+ main cluster installation playbook, but you can set it to the path of another
+ playbook inside the container.
+
+ For example, to run the
+ xref:configuring-cluster-pre-install-checks[pre-install checks] playbook before
+ installation, use the following command:
- . Install the system container so it is set up as a systemd service:
- +
----
- $ atomic install --system \
+ # atomic install --system \
--storage=ostree \
- --name=openshift-installer \ <1>
- --set INVENTORY_FILE=/path/to/inventory \ <2>
+ --set INVENTORY_FILE=/path/to/inventory \
+ --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \ <1>
+ --set OPTS="-v" \ <2>
ifdef::openshift-enterprise[]
- docker:registry.access.redhat.com/openshift3/ose-ansible:v3.7
+ registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
ifdef::openshift-origin[]
- docker:docker.io/openshift/origin-ansible:v3.7
+ docker.io/openshift/origin-ansible:v3.7
endif::[]
----
- <1> Sets the name for the systemd service.
- <2> Specify the location for your inventory file on your local workstation.
+ <1> Set `PLAYBOOK_FILE` to the full path of the playbook starting at the
+ *_playbooks/_* directory. Playbooks are located in the same locations as with
+ the RPM-based installer.
+ <2> Set `OPTS` to add command line options to `ansible-playbook`.
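+
+ The same pattern works for any other playbook in the container. For example,
+ a later cluster upgrade could be started by pointing `PLAYBOOK_FILE` at the
+ relevant upgrade playbook (shown here as a sketch; the exact playbook path
+ depends on your upgrade target):
+
+ ----
+ # atomic install --system \
+ --storage=ostree \
+ --set INVENTORY_FILE=/path/to/inventory \
+ --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_7/upgrade.yml \
+ ifdef::openshift-enterprise[]
+ registry.access.redhat.com/openshift3/ose-ansible:v3.7
+ endif::[]
+ ifdef::openshift-origin[]
+ docker.io/openshift/origin-ansible:v3.7
+ endif::[]
+ ----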
- . Use the `systemctl` command to start the installer service as you would any
- other systemd service. This command initiates the cluster installation:
- +
- ----
- $ systemctl start openshift-installer
- ----
- +
- If for any reason the installation fails, before re-running the installer, see
- xref:installer-known-issues[Known Issues] to check for any specific instructions
- or workarounds.
+ [[running-the-advanced-installation-docker]]
+ ==== Running the Installer as a Docker Container
- . After the installation completes, you can uninstall the system container if you want. However, if you need to run the installer again to run any other playbooks later, you would have to follow this procedure again.
- +
- To uninstall the system container:
- +
- ----
- $ atomic uninstall openshift-installer
- ----
+ The installer image can also run as a *docker* container anywhere that *docker* can run.
- [[running-the-advanced-installation-system-container-other-playbooks]]
- ==== Running Other Playbooks
+ [WARNING]
+ ====
+ This method must not be used to run the installer on one of the hosts being configured,
+ as the install may restart *docker* on the host, disrupting the installer container execution.
+ ====
+
+ [NOTE]
+ ====
+ Although this method and the system container method above use the same image, they
+ run with different entry points and contexts, so runtime parameters are not the same.
+ ====
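+
+ For instance, once the image has been pulled, you can check its default
+ *docker* entry point with `docker inspect` (a quick way to see what the
+ *docker* method runs; the system container run command is defined separately
+ in the image's system container metadata):
+
+ ----
+ ifdef::openshift-enterprise[]
+ $ docker inspect --format '{{.Config.Entrypoint}}' \
+ registry.access.redhat.com/openshift3/ose-ansible:v3.7
+ endif::[]
+ ifdef::openshift-origin[]
+ $ docker inspect --format '{{.Config.Entrypoint}}' \
+ docker.io/openshift/origin-ansible:v3.7
+ endif::[]
+ ----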
- After you have completed the cluster installation, if you want to later run any
- other playbooks using the containerized installer (for example, cluster upgrade
- playbooks), you can use the `PLAYBOOK_FILE` environment variable. The default
- value is `playbooks/byo/config.yml`, which is the main cluster installation
- playbook, but you can set it to the path of another playbook inside the
- container.
+ At a minimum, when running the installer as a *docker* container you must provide:
- For example:
+ * SSH key(s), so that Ansible can reach your hosts.
+ * An Ansible inventory file.
+ * The location of the Ansible playbook to run against that inventory.
+
+ Here is an example of how to run an install via *docker*.
+ Note that this must be run by a non-`root` user with access to *docker*.
----
- $ atomic install --system \
- --storage=ostree \
- --name=openshift-installer \
- --set INVENTORY_FILE=/etc/ansible/hosts \
- --set PLAYBOOK_FILE=playbooks/byo/openshift-cluster/upgrades/v3_7/upgrade.yml \ <1>
+ $ docker run -t -u `id -u` \ <1>
+ -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ <2>
+ -v $HOME/ansible/hosts:/tmp/inventory:Z \ <3>
+ -e INVENTORY_FILE=/tmp/inventory \ <3>
+ -e PLAYBOOK_FILE=playbooks/byo/config.yml \ <4>
+ -e OPTS="-v" \ <5>
ifdef::openshift-enterprise[]
- docker:registry.access.redhat.com/openshift3/ose-ansible:v3.7
+ registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
ifdef::openshift-origin[]
- docker:docker.io/openshift/origin-ansible:v3.7
+ docker.io/openshift/origin-ansible:v3.7
endif::[]
----
- <1> Set `PLAYBOOK_FILE` to the relative path of the playbook starting at the
- *_playbooks/_* directory. Playbooks mentioned elsewhere in {product-title}
- documentation assume use of the RPM-based installer, so use this relative path
- instead when using the containerized installer.
+ <1> `-u `id -u`` makes the container run with the same UID as the current
+ user, which allows that user to use the SSH key inside the container (SSH
+ private keys are expected to be readable only by their owner).
+ <2> `-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z` mounts your
+ SSH key (`$HOME/.ssh/id_rsa`) under the container user's `$HOME/.ssh`
+ (*_/opt/app-root/src_* is the `$HOME` of the user in the container). If
+ you mount the SSH key into a non-standard location you can add an
+ environment variable with `-e ANSIBLE_PRIVATE_KEY_FILE=/the/mount/point`
+ or set `ansible_ssh_private_key_file=/the/mount/point` as a variable in
+ the inventory to point Ansible at it.
+ +
+ Note that the SSH key is mounted with the `:Z` flag. This is
+ required so that the container can read the SSH key under
+ its restricted SELinux context. This also means that your
+ original SSH key file will be re-labeled to something like
+ `system_u:object_r:container_file_t:s0:c113,c247`. For more details
+ about `:Z` please check the `docker-run(1)` man page. Keep this in mind
+ when providing these volume mount specifications because this might
+ have unexpected consequences: for example, if you mount (and therefore
+ re-label) your whole `$HOME/.ssh` directory it will block the host's
+ *sshd* from accessing your public keys to login. For this reason you
+ may want to use a separate copy of the SSH key (or directory), so that
+ the original file labels remain untouched, as shown in the sketch after
+ these callouts.
+ <3> `-v $HOME/ansible/hosts:/tmp/inventory:Z` and `-e INVENTORY_FILE=/tmp/inventory`
+ mount a static Ansible inventory file into the container as
+ *_/tmp/inventory_* and set the corresponding environment variable to
+ point at it. As with the SSH key, the inventory file SELinux labels may
+ need to be relabeled by using the `:Z` flag to allow reading in the container,
+ depending on the existing label (for files in a user `$HOME` directory
+ this is likely to be needed). So again you may prefer to copy the
+ inventory to a dedicated location before mounting it.
+ +
+ The inventory file can also be downloaded from a web server if you specify
+ the `INVENTORY_URL` environment variable, or generated dynamically using
+ `DYNAMIC_SCRIPT_URL` to specify an executable script that provides a
+ dynamic inventory.
+ <4> `-e PLAYBOOK_FILE=playbooks/byo/config.yml` specifies the playbook
+ to run (in this example, the BYO installer) as a relative path from the
+ top level directory of *openshift-ansible* content. The full path from the
+ RPM can also be used, as well as the path to any other playbook file in
+ the container.
+ <5> `-e OPTS="-v"` supplies arbitrary command line options (in this case,
+ `-v` to increase verbosity) to the `ansible-playbook` command that runs
+ inside the container.
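+
+ As noted in callout <2>, here is a minimal sketch of using a separate copy
+ of the SSH key, so that `:Z` relabeling never touches the original file
+ (the *_$HOME/.ssh-containerized_* location is only an example; any
+ dedicated directory works):
+
+ ----
+ $ mkdir -p $HOME/.ssh-containerized
+ $ cp $HOME/.ssh/id_rsa $HOME/.ssh-containerized/id_rsa
+ $ chmod 600 $HOME/.ssh-containerized/id_rsa
+ ----
+
+ You can then mount *_$HOME/.ssh-containerized/id_rsa_* instead of
+ *_$HOME/.ssh/id_rsa_* in the `docker run` command above.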
[[running-the-advanced-installation-individual-components]]
=== Running Individual Component Playbooks