Commit 7c24c89

advanced_install: installer image
Update, simplify, and also note some things that were not apparent about running as a system container. Rearrange containerized install material to enable discussion of running it as a Docker container.
1 parent 98e8c28

1 file changed: +136, -70 lines
install_config/install/advanced_install.adoc

If for any reason the installation fails, before re-running the installer, see
xref:installer-known-issues[Known Issues] to check for any specific
instructions or workarounds.

[[running-the-advanced-installation-containerized]]
=== Running the Containerized Installer

The
ifdef::openshift-enterprise[]
*openshift3/ose-ansible*
endif::[]
ifdef::openshift-origin[]
*openshift/origin-ansible*
endif::[]
image is a containerized version of the {product-title} installer.
This installer image provides the same functionality as the RPM-based
installer, but it runs in a containerized environment that provides all
of its dependencies rather than being installed directly on the host.
The only requirement to use it is the ability to run a container.

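For example, if you want to fetch the image in advance for use with the
*docker*-based method described below (the system container method pulls the
image itself on first run), you can pull it with the Docker CLI:

----
ifdef::openshift-enterprise[]
$ docker pull registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
ifdef::openshift-origin[]
$ docker pull docker.io/openshift/origin-ansible:v3.7
endif::[]
----
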
[[running-the-advanced-installation-system-container]]
==== Running the Installer as a System Container

include::install_config/install/advanced_install.adoc[tag=syscontainers_techpreview]

The installer image can be used as a
link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_system_containers[system container].
System containers are stored and run outside of the traditional *docker* service.
This enables running the installer image from one of the target hosts without
concern that the installation will restart *docker* on the host.

. As the `root` user, use the Atomic CLI to run the installer as a run-once system container:
+
----
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \//<1>
ifdef::openshift-enterprise[]
    registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
ifdef::openshift-origin[]
    docker.io/openshift/origin-ansible:v3.7
endif::[]
----
<1> Specify the location on the local host for your inventory file.
+
This command initiates the cluster installation using the specified inventory
file and the `root` user's SSH configuration, logging to the terminal as well
as *_/var/log/ansible.log_*. The first time this command is executed, the
image is imported into
link:https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/content_management_guide/managing_ostree_content[OSTree]
storage (system containers use this instead of *docker* daemon storage).
Later executions re-use the stored image.
+
If for any reason the installation fails, before re-running the installer, see
xref:installer-known-issues[Known Issues] to check for any specific instructions
or workarounds.
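
To follow installation progress from another session on the same host, you can
tail the log file mentioned above, for example:

----
# tail -f /var/log/ansible.log
----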

[[running-the-advanced-installation-system-container-other-playbooks]]
==== Running Other Playbooks

To run any other playbooks (for example, to run the
xref:configuring-cluster-pre-install-checks[pre-install checks]
before proceeding to install) using the containerized installer, you can
specify a playbook with the `PLAYBOOK_FILE` environment variable. The
default value is *_/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml_*,
which is the main cluster installation playbook, but you can set it to the
path of another playbook inside the container.

For example:

----
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \//<1>
    --set OPTS="-v" \//<2>
ifdef::openshift-enterprise[]
    registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
ifdef::openshift-origin[]
    docker.io/openshift/origin-ansible:v3.7
endif::[]
----
<1> Set `PLAYBOOK_FILE` to the full path of the playbook inside the container.
Playbooks are located in the same locations as with the RPM-based installer.
<2> Set `OPTS` to add command line options to `ansible-playbook`.
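
Other playbooks shipped with *openshift-ansible* can be run the same way. For
example, a sketch of running the v3_7 upgrade playbook, whose path follows the
same RPM-based layout as above:

----
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_7/upgrade.yml \
ifdef::openshift-enterprise[]
    registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
ifdef::openshift-origin[]
    docker.io/openshift/origin-ansible:v3.7
endif::[]
----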

[[running-the-advanced-installation-docker]]
==== Running the Installer as a Docker Container

The installer image can also run as a *docker* container anywhere that *docker* can run.

[WARNING]
====
This method should not be used to run the installer on one of the hosts being configured,
as the install typically restarts *docker* on the host, which would disrupt the installer
container's own execution.
====

[NOTE]
====
Although this method and the system container method above use the same image, they
run with different entry points and contexts, so runtime parameters are not the same.
====

At a minimum, when running the installer as a *docker* container you must provide:

* SSH key(s) so that Ansible can reach your hosts.
* An Ansible inventory file.
* The location of an Ansible playbook to run against that inventory.

Here is an example of how to run an install via *docker*.
Note that this must be run by a non-`root` user with access to *docker*:
----
$ docker run -t -u `id -u` \
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
    -v $HOME/ansible/hosts:/tmp/inventory:Z \
    -e INVENTORY_FILE=/tmp/inventory \
    -e PLAYBOOK_FILE=playbooks/byo/config.yml \
    -e OPTS="-v" \
ifdef::openshift-enterprise[]
    registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
ifdef::openshift-origin[]
    docker.io/openshift/origin-ansible:v3.7
endif::[]
----

The following is a detailed explanation of the options used in the command above.

* `-u `id -u`` makes the container run with the same UID as the current
user, which allows that user to use the SSH key inside the container (SSH
private keys are expected to be readable only by their owner).

* `-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z` mounts your
SSH key (`$HOME/.ssh/id_rsa`) under the container user's `$HOME/.ssh`
(*_/opt/app-root/src_* is the `$HOME` of the user in the container). If
you mount the SSH key into a non-standard location, you can add an
environment variable with `-e ANSIBLE_PRIVATE_KEY_FILE=/the/mount/point`
or set `ansible_ssh_private_key_file=/the/mount/point` as a variable in
the inventory to point Ansible at it.
+
Note that the SSH key is mounted with the `:Z` flag: this is
required so that the container can read the SSH key under
its restricted SELinux context. This also means that your
original SSH key file will be re-labeled to something like
`system_u:object_r:container_file_t:s0:c113,c247`. For more details
about `:Z`, check the `docker-run(1)` man page. Keep this in mind
when providing these volume mount specifications, because it could
have unexpected consequences: for example, if you mount (and therefore
re-label) your whole `$HOME/.ssh` directory, it will block the host's
*sshd* from accessing your public keys for login. For this reason, you
may want to use a separate copy of the SSH key (or directory), so that
the original file labels remain untouched (see the sketch after this list).

* `-v $HOME/ansible/hosts:/tmp/inventory:Z` and `-e INVENTORY_FILE=/tmp/inventory`
mount a static Ansible inventory file into the container as
*_/tmp/inventory_* and set the corresponding environment variable to
point at it. As with the SSH key, the inventory file's SELinux labels may
need to be relabeled via the `:Z` flag to allow reading in the container,
depending on the existing label (for files in a user `$HOME` directory,
this is likely to be needed). So again, you may prefer to copy the
inventory to a dedicated location before mounting it.
+
The inventory file can also be downloaded from a web server if you specify
the `INVENTORY_URL` environment variable, or generated dynamically using
`DYNAMIC_SCRIPT_URL` to specify an executable script that provides a
dynamic inventory.

* `-e PLAYBOOK_FILE=playbooks/byo/config.yml` specifies the playbook
to run (in this example, the BYO installer) as a relative path from the
top level directory of *openshift-ansible* content. The full path from the
RPM can also be used, as well as the path to any other playbook file in
the container.

* `-e OPTS="-v"` supplies arbitrary command line options (in this case,
`-v` to increase verbosity) to the `ansible-playbook` command that runs
inside the container.
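
As a variation, here is a sketch that combines two of the suggestions above:
mounting a dedicated copy of the SSH key, so that the original file's SELinux
label stays untouched, and downloading the inventory from a web server with
`INVENTORY_URL` (the key copy path and the URL are illustrative):

----
$ cp $HOME/.ssh/id_rsa $HOME/ansible-key
$ docker run -t -u `id -u` \
    -v $HOME/ansible-key:/opt/app-root/src/.ssh/id_rsa:Z \
    -e INVENTORY_URL=https://example.com/inventory \
    -e PLAYBOOK_FILE=playbooks/byo/config.yml \
ifdef::openshift-enterprise[]
    registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
ifdef::openshift-origin[]
    docker.io/openshift/origin-ansible:v3.7
endif::[]
----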

[[running-the-advanced-installation-individual-components]]
=== Running Individual Component Playbooks
