advanced_install: installer as system container #5258

Merged
install_config/install/advanced_install.adoc: 199 changes (129 additions, 70 deletions)
If for any reason the installation fails, before re-running the installer, see
xref:installer-known-issues[Known Issues] to check for any specific
instructions or workarounds.

[[running-the-advanced-installation-containerized]]
=== Running the Containerized Installer

The
ifdef::openshift-enterprise[]
*openshift3/ose-ansible*
endif::[]
ifdef::openshift-origin[]
*openshift/origin-ansible*
endif::[]
image is a containerized version of the {product-title} installer.
This installer image provides the same functionality as the RPM-based
installer, but it runs in a containerized environment that provides all
of its dependencies rather than being installed directly on the host.
The only requirement to use it is the ability to run a container.

[[running-the-advanced-installation-system-container]]
==== Running the Installer as a System Container

include::install_config/install/advanced_install.adoc[tag=syscontainers_techpreview]

The installer image can be used as a
link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_system_containers[system container].
System containers are stored and run outside of the traditional *docker* service.
This enables running the installer image from one of the target hosts without
concern for the install restarting *docker* on the host.

. As the `root` user, use the Atomic CLI to run the installer as a run-once system container:
+
----
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \ <1>
ifdef::openshift-enterprise[]
    registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
ifdef::openshift-origin[]
    docker.io/openshift/origin-ansible:v3.7
endif::[]
----
<1> Specify the location on the local host for your inventory file.
+
This command initiates the cluster installation by using the inventory file specified and the `root` user's
SSH configuration. It logs the output on the terminal and also saves it in the *_/var/log/ansible.log_* file.
The first time this command is run, the image is imported into
link:https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/content_management_guide/managing_ostree_content[OSTree]
storage (system containers use this rather than *docker* daemon storage).
On subsequent runs, it reuses the stored image.

If for any reason the installation fails, before re-running the installer, see
xref:installer-known-issues[Known Issues] to check for any specific instructions
or workarounds.
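
As a minimal sketch for verification (assuming the log location shown above and an
`atomic` CLI version that provides `atomic images list`), you can follow the
installation output and confirm that the image was imported into OSTree storage
from a second terminal:

----
# tail -f /var/log/ansible.log
# atomic images list
----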

[[running-the-advanced-installation-system-container-other-playbooks]]
==== Running Other Playbooks

You can use the `PLAYBOOK_FILE` environment variable to specify other playbooks
to run with the containerized installer. The default value of `PLAYBOOK_FILE` is
*_/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml_*, which is the
main cluster installation playbook, but you can set it to the path of another
playbook inside the container.

For example, to run the
xref:configuring-cluster-pre-install-checks[pre-install checks] playbook before
installation, use the following command:

----
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \ <1>
    --set OPTS="-v" \ <2>
ifdef::openshift-enterprise[]
    registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
ifdef::openshift-origin[]
    docker.io/openshift/origin-ansible:v3.7
endif::[]
----
<1> Set `PLAYBOOK_FILE` to the full path of the playbook starting at the
*_playbooks/_* directory. Playbooks are located in the same locations as with
the RPM-based installer.
<2> Set `OPTS` to add command line options to `ansible-playbook`.

[[running-the-advanced-installation-docker]]
==== Running the Installer as a Docker Container

The installer image can also run as a *docker* container anywhere that *docker* can run.

[WARNING]
====
This method must not be used to run the installer on one of the hosts being configured,
as the install may restart *docker* on the host, disrupting the installer container execution.
====

[NOTE]
====
Although this method and the system container method above use the same image, they
run with different entry points and contexts, so runtime parameters are not the same.
====

At a minimum, when running the installer as a *docker* container you must provide:

* SSH key(s), so that Ansible can reach your hosts.
* An Ansible inventory file.
* The location of the Ansible playbook to run against that inventory.

Here is an example of how to run an install via *docker*.
Note that this must be run by a non-`root` user with access to *docker*.

----
$ docker run -t -u `id -u` \ <1>
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ <2>
    -v $HOME/ansible/hosts:/tmp/inventory:Z \ <3>
    -e INVENTORY_FILE=/tmp/inventory \ <3>
    -e PLAYBOOK_FILE=playbooks/byo/config.yml \ <4>
    -e OPTS="-v" \ <5>
ifdef::openshift-enterprise[]
    registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
ifdef::openshift-origin[]
    docker.io/openshift/origin-ansible:v3.7
endif::[]
----
<1> `-u `id -u`` makes the container run with the same UID as the current
user, which allows that user to use the SSH key inside the container (SSH
private keys are expected to be readable only by their owner).
<2> `-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z` mounts your
SSH key (`$HOME/.ssh/id_rsa`) under the container user's `$HOME/.ssh`
(*_/opt/app-root/src_* is the `$HOME` of the user in the container). If
you mount the SSH key into a non-standard location you can add an
environment variable with `-e ANSIBLE_PRIVATE_KEY_FILE=/the/mount/point`
or set `ansible_ssh_private_key_file=/the/mount/point` as a variable in
the inventory to point Ansible at it.
+
Note that the SSH key is mounted with the `:Z` flag. This is
required so that the container can read the SSH key under
its restricted SELinux context. This also means that your
original SSH key file will be re-labeled to something like
`system_u:object_r:container_file_t:s0:c113,c247`. For more details
about `:Z` please check the `docker-run(1)` man page. Keep this in mind
when providing these volume mount specifications because this might
have unexpected consequences: for example, if you mount (and therefore
re-label) your whole `$HOME/.ssh` directory it will block the host's
*sshd* from accessing your public keys for login. For this reason, you
may want to use a separate copy of the SSH key (or directory), so that
the original file labels remain untouched.
<3> `-v $HOME/ansible/hosts:/tmp/inventory:Z` and `-e INVENTORY_FILE=/tmp/inventory`
mount a static Ansible inventory file into the container as
*_/tmp/inventory_* and set the corresponding environment variable to
point at it. As with the SSH key, the inventory file may need to be
relabeled with the `:Z` flag so that it can be read in the container,
depending on its existing label (for files in a user's `$HOME` directory
this is likely to be needed). So again, you may prefer to copy the
inventory to a dedicated location before mounting it.
+
The inventory file can also be downloaded from a web server if you specify
the `INVENTORY_URL` environment variable, or generated dynamically using
`DYNAMIC_SCRIPT_URL` to specify an executable script that provides a
dynamic inventory.
<4> `-e PLAYBOOK_FILE=playbooks/byo/config.yml` specifies the playbook
to run (in this example, the BYO installer) as a relative path from the
top level directory of *openshift-ansible* content. The full path from the
RPM can also be used, as well as the path to any other playbook file in
the container.
<5> `-e OPTS="-v"` supplies arbitrary command line options (in this case,
`-v` to increase verbosity) to the `ansible-playbook` command that runs
inside the container.
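
As a sketch of the `INVENTORY_URL` alternative mentioned in callout 3 (the URL below
is a placeholder for your own web server), the inventory volume mount can be replaced
with an environment variable that points at a hosted inventory file:

----
$ docker run -t -u `id -u` \
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
    -e INVENTORY_URL=https://example.com/inventory/hosts \
    -e PLAYBOOK_FILE=playbooks/byo/config.yml \
ifdef::openshift-enterprise[]
    registry.access.redhat.com/openshift3/ose-ansible:v3.7
endif::[]
ifdef::openshift-origin[]
    docker.io/openshift/origin-ansible:v3.7
endif::[]
----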

[[running-the-advanced-installation-individual-components]]
=== Running Individual Component Playbooks