
Commit 4abcf3d

Author: Gaurav Nelson
Merge pull request #7223 from gaurav-nelson/enterprise-3.7-stage
[enterprise-3.7] advanced_install: installer as system container
2 parents 3fd81ac + 4da7805 commit 4abcf3d

File tree: 1 file changed

install_config/install/advanced_install.adoc

Lines changed: 129 additions & 70 deletions
@@ -2225,115 +2225,174 @@ If for any reason the installation fails, before re-running the installer, see
 xref:installer-known-issues[Known Issues] to check for any specific
 instructions or workarounds.
 
-[[running-the-advanced-installation-system-container]]
+[[running-the-advanced-installation-containerized]]
 === Running the Containerized Installer
 
-include::install_config/install/advanced_install.adoc[tag=syscontainers_techpreview]
-
 The
 ifdef::openshift-enterprise[]
 *openshift3/ose-ansible*
 endif::[]
 ifdef::openshift-origin[]
 *openshift/origin-ansible*
 endif::[]
-image is a containerized version of the {product-title} installer that runs as a
-link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_system_containers[system container]. System containers are stored and run outside of the traditional
-*docker* service. Functionally, using the containerized installer is the same as
-using the traditional RPM-based installer, except it is running in a
-containerized environment instead of directly on the host.
+image is a containerized version of the {product-title} installer.
+This installer image provides the same functionality as the RPM-based
+installer, but it runs in a containerized environment that provides all
+of its dependencies rather than being installed directly on the host.
+The only requirement to use it is the ability to run a container.
+
+[[running-the-advanced-installation-system-container]]
+==== Running the Installer as a System Container
+
+include::install_config/install/advanced_install.adoc[tag=syscontainers_techpreview]
 
-. Use the Docker CLI to pull the image locally:
+The installer image can be used as a
+link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_system_containers[system container].
+System containers are stored and run outside of the traditional *docker* service.
+This enables running the installer image from one of the target hosts without
+concern for the install restarting *docker* on the host.
+
+. As the `root` user, use the Atomic CLI to run the installer as a run-once system container:
 +
 ----
+# atomic install --system \
+--storage=ostree \
+--set INVENTORY_FILE=/path/to/inventory \ <1>
 ifdef::openshift-enterprise[]
-$ docker pull registry.access.redhat.com/openshift3/ose-ansible:v3.7
+registry.access.redhat.com/openshift3/ose-ansible:v3.7
 endif::[]
 ifdef::openshift-origin[]
-$ docker pull docker.io/openshift/origin-ansible:v3.7
+docker.io/openshift/origin-ansible:v3.7
 endif::[]
 ----
-
-. The installer system container must be stored in
+<1> Specify the location of your inventory file on the local host.
++
+This command initiates the cluster installation by using the specified inventory file and the `root` user's
+SSH configuration. It logs the output on the terminal and also saves it in the *_/var/log/ansible.log_* file.
+The first time this command is run, the image is imported into
 link:https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/content_management_guide/managing_ostree_content[OSTree]
-instead of defaulting to *docker* daemon storage. Use the Atomic CLI to import
-the installer image from the local *docker* engine to OSTree storage:
+storage (system containers use this rather than *docker* daemon storage).
+On subsequent runs, it reuses the stored image.
 +
-----
-$ atomic pull --storage ostree \
-ifdef::openshift-enterprise[]
-docker:registry.access.redhat.com/openshift3/ose-ansible:v3.7
-endif::[]
-ifdef::openshift-origin[]
-docker:docker.io/openshift/origin-ansible:v3.7
-endif::[]
-----
+If for any reason the installation fails, before re-running the installer, see
+xref:installer-known-issues[Known Issues] to check for any specific instructions
+or workarounds.
+
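For illustration, the run-once invocation added above resolves as follows when the `ifdef` blocks are evaluated for the enterprise image; the inventory path *_/root/inventory_* is a hypothetical example, not part of the committed text:

----
# atomic install --system \
--storage=ostree \
--set INVENTORY_FILE=/root/inventory \
registry.access.redhat.com/openshift3/ose-ansible:v3.7
----

For origin builds, substitute `docker.io/openshift/origin-ansible:v3.7` as the image.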
+[[running-the-advanced-installation-system-container-other-playbooks]]
+==== Running Other Playbooks
+
+You can use the `PLAYBOOK_FILE` environment variable to specify other playbooks
+you want to run by using the containerized installer. The default value of `PLAYBOOK_FILE` is
+*_/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml_*, which is the
+main cluster installation playbook, but you can set it to the path of another
+playbook inside the container.
+
+For example, to run the
+xref:configuring-cluster-pre-install-checks[pre-install checks] playbook before
+installation, use the following command:
 
-. Install the system container so it is set up as a systemd service:
-+
 ----
-$ atomic install --system \
+# atomic install --system \
 --storage=ostree \
---name=openshift-installer \ <1>
---set INVENTORY_FILE=/path/to/inventory \ <2>
+--set INVENTORY_FILE=/path/to/inventory \
+--set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \ <1>
+--set OPTS="-v" \ <2>
 ifdef::openshift-enterprise[]
-docker:registry.access.redhat.com/openshift3/ose-ansible:v3.7
+registry.access.redhat.com/openshift3/ose-ansible:v3.7
 endif::[]
 ifdef::openshift-origin[]
-docker:docker.io/openshift/origin-ansible:v3.7
+docker.io/openshift/origin-ansible:v3.7
 endif::[]
 ----
-<1> Sets the name for the systemd service.
-<2> Specify the location for your inventory file on your local workstation.
+<1> Set `PLAYBOOK_FILE` to the full path of the playbook starting at the
+*_playbooks/_* directory. Playbooks are located in the same locations as with
+the RPM-based installer.
+<2> Set `OPTS` to add command line options to `ansible-playbook`.
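By the same pattern, another playbook can be substituted through `PLAYBOOK_FILE`. A sketch of running the v3_7 cluster upgrade playbook, shown with the enterprise image; the full path joins the default *_/usr/share/ansible/openshift-ansible/_* location above with the upgrade playbook path referenced later in this diff, and the inventory path is illustrative:

----
# atomic install --system \
--storage=ostree \
--set INVENTORY_FILE=/path/to/inventory \
--set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_7/upgrade.yml \
registry.access.redhat.com/openshift3/ose-ansible:v3.7
----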
 
-. Use the `systemctl` command to start the installer service as you would any
-other systemd service. This command initiates the cluster installation:
-+
-----
-$ systemctl start openshift-installer
-----
-+
-If for any reason the installation fails, before re-running the installer, see
-xref:installer-known-issues[Known Issues] to check for any specific instructions
-or workarounds.
+[[running-the-advanced-installation-docker]]
+==== Running the Installer as a Docker Container
 
-. After the installation completes, you can uninstall the system container if you want. However, if you need to run the installer again to run any other playbooks later, you would have to follow this procedure again.
-+
-To uninstall the system container:
-+
-----
-$ atomic uninstall openshift-installer
-----
+The installer image can also run as a *docker* container anywhere that *docker* can run.
 
-[[running-the-advanced-installation-system-container-other-playbooks]]
-==== Running Other Playbooks
+[WARNING]
+====
+This method must not be used to run the installer on one of the hosts being configured,
+because the installation may restart *docker* on the host, disrupting the installer
+container's execution.
+====
+
+[NOTE]
+====
+Although this method and the system container method above use the same image, they
+run with different entry points and contexts, so runtime parameters are not the same.
+====
 
-After you have completed the cluster installation, if you want to later run any
-other playbooks using the containerized installer (for example, cluster upgrade
-playbooks), you can use the `PLAYBOOK_FILE` environment variable. The default
-value is `playbooks/byo/config.yml`, which is the main cluster installation
-playbook, but you can set it to the path of another playbook inside the
-container.
+At a minimum, when running the installer as a *docker* container you must provide:
 
-For example:
+* SSH key(s), so that Ansible can reach your hosts.
+* An Ansible inventory file.
+* The location of the Ansible playbook to run against that inventory.
+
+Here is an example of how to run an install via *docker*.
+Note that this must be run by a non-`root` user with access to *docker*.
 
 ----
-$ atomic install --system \
---storage=ostree \
---name=openshift-installer \
---set INVENTORY_FILE=/etc/ansible/hosts \
---set PLAYBOOK_FILE=playbooks/byo/openshift-cluster/upgrades/v3_7/upgrade.yml \ <1>
+$ docker run -t -u `id -u` \ <1>
+-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ <2>
+-v $HOME/ansible/hosts:/tmp/inventory:Z \ <3>
+-e INVENTORY_FILE=/tmp/inventory \ <3>
+-e PLAYBOOK_FILE=playbooks/byo/config.yml \ <4>
+-e OPTS="-v" \ <5>
 ifdef::openshift-enterprise[]
-docker:registry.access.redhat.com/openshift3/ose-ansible:v3.7
+registry.access.redhat.com/openshift3/ose-ansible:v3.7
 endif::[]
 ifdef::openshift-origin[]
-docker:docker.io/openshift/origin-ansible:v3.7
+docker.io/openshift/origin-ansible:v3.7
 endif::[]
 ----
-<1> Set `PLAYBOOK_FILE` to the relative path of the playbook starting at the
-*_playbooks/_* directory. Playbooks mentioned elsewhere in {product-title}
-documentation assume use of the RPM-based installer, so use this relative path
-instead when using the containerized installer.
+<1> `-u `id -u`` makes the container run with the same UID as the current
+user, which allows that user to use the SSH key inside the container (SSH
+private keys are expected to be readable only by their owner).
+<2> `-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z` mounts your
+SSH key (`$HOME/.ssh/id_rsa`) under the container user's `$HOME/.ssh`
+(*_/opt/app-root/src_* is the `$HOME` of the user in the container). If
+you mount the SSH key into a non-standard location, you can add an
+environment variable with `-e ANSIBLE_PRIVATE_KEY_FILE=/the/mount/point`
+or set `ansible_ssh_private_key_file=/the/mount/point` as a variable in
+the inventory to point Ansible at it.
++
+Note that the SSH key is mounted with the `:Z` flag. This is
+required so that the container can read the SSH key under
+its restricted SELinux context. This also means that your
+original SSH key file will be re-labeled to something like
+`system_u:object_r:container_file_t:s0:c113,c247`. For more details
+about `:Z`, see the `docker-run(1)` man page. Keep this in mind
+when providing these volume mount specifications, because this might
+have unexpected consequences: for example, if you mount (and therefore
+re-label) your whole `$HOME/.ssh` directory, it will block the host's
+*sshd* from accessing your public keys to log in. For this reason, you
+may want to use a separate copy of the SSH key (or directory) so that
+the original file labels remain untouched.
+<3> `-v $HOME/ansible/hosts:/tmp/inventory:Z` and `-e INVENTORY_FILE=/tmp/inventory`
+mount a static Ansible inventory file into the container as
+*_/tmp/inventory_* and set the corresponding environment variable to
+point at it. As with the SSH key, the inventory file may need to be
+relabeled with the `:Z` flag to allow reading in the container, depending
+on its existing label (for files in a user `$HOME` directory, this is
+likely to be needed). So again, you may prefer to copy the
+inventory to a dedicated location before mounting it.
++
+The inventory file can also be downloaded from a web server if you specify
+the `INVENTORY_URL` environment variable, or generated dynamically by using
+`DYNAMIC_SCRIPT_URL` to specify an executable script that provides a
+dynamic inventory.
+<4> `-e PLAYBOOK_FILE=playbooks/byo/config.yml` specifies the playbook
+to run (in this example, the BYO installer) as a relative path from the
+top-level directory of *openshift-ansible* content. The full path from the
+RPM can also be used, as well as the path to any other playbook file in
+the container.
+<5> `-e OPTS="-v"` supplies arbitrary command line options (in this case,
+`-v` to increase verbosity) to the `ansible-playbook` command that runs
+inside the container.
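Following the advice in callout <2>, a dedicated copy of the SSH key can be mounted so that only the copy is re-labeled by `:Z`. A minimal sketch, shown with the enterprise image; the copy path *_$HOME/installer.key_* is chosen for illustration:

----
# the copied key, not the original, receives the container SELinux label
$ cp $HOME/.ssh/id_rsa $HOME/installer.key
$ chmod 600 $HOME/installer.key
$ docker run -t -u `id -u` \
-v $HOME/installer.key:/opt/app-root/src/.ssh/id_rsa:Z \
-v $HOME/ansible/hosts:/tmp/inventory:Z \
-e INVENTORY_FILE=/tmp/inventory \
-e PLAYBOOK_FILE=playbooks/byo/config.yml \
registry.access.redhat.com/openshift3/ose-ansible:v3.7
----

And per callout <3>, the static inventory mount can be replaced by fetching the inventory over HTTP. A sketch of that variant, with a hypothetical URL:

----
# assumes a reachable web server hosting the inventory (URL is illustrative)
$ docker run -t -u `id -u` \
-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
-e INVENTORY_URL=https://example.com/inventory \
-e PLAYBOOK_FILE=playbooks/byo/config.yml \
registry.access.redhat.com/openshift3/ose-ansible:v3.7
----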
 
 [[running-the-advanced-installation-individual-components]]
 === Running Individual Component Playbooks
