advanced_install: installer as system container #5258
Conversation
Force-pushed from dfae5e5 to 9bbb592
*docker* service. The installer image provides the same functionality
as the traditional RPM-based installer, but it runs in a containerized
environment that provides all of its dependencies rather than being
installed directly on the host.
This is the only real advantage -- don't have to install a bunch of RPMs on your host. So I think it's worth mentioning.
+
----
$ atomic pull --storage ostree \
It turns out this whole step is unnecessary as `atomic install` does it for you.
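For reference, a minimal sketch of the manual import this removed step performed, assembled from the surrounding hunks (the enterprise image tag is the one shown later in this diff; adjust for origin):
----
$ atomic pull --storage ostree \
    registry.access.redhat.com/openshift3/ose-ansible:v3.6
----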
@@ -2216,22 +2206,35 @@ ifdef::openshift-origin[]
endif::[]
----
<1> Sets the name for the systemd service.
-<2> Specify the location for your inventory file on your local workstation.
+<2> Specify the location for your local inventory file.
This is the only place we referred to local workstation... this can run anywhere that docker and systemd run, not that likely to be a workstation actually.
. Use the `systemctl` command to start the installer service as you would any
-other systemd service. This command initiates the cluster installation:
+other systemd service. This command initiates the cluster installation using
+the inventory file specified above and the root user's SSH configuration:
This is one thing that was not apparent before... ssh config comes from root for this use case (in the docker use case you have to mount it in).
link:https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/content_management_guide/managing_ostree_content[OSTree]
-instead of defaulting to *docker* daemon storage. Use the Atomic CLI to import
-the installer image from the local *docker* engine to OSTree storage:
+instead of *docker* daemon storage. As the root user, use the Atomic
`atomic install`, `systemctl`, and `journalctl` are generally things you have to run as root. Is it customary to use the root prompt in docs?
+
----
$ systemctl start openshift-installer
----
+
There is no output from this command while the installer service is
One potentially confusing thing the first time you run this.
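Since there is no terminal output in this (pre-run-once) mode, a hedged example of watching the run via journald instead, using the service name from the snippet above:
----
# journalctl -f -u openshift-installer
----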
playbook, but you can set it to the path of another playbook inside the
container.
To run any other playbooks (for example, to run the
xref:configuring-cluster-pre-install-checks[pre-install checks]
The pre-install checks seemed a more relevant subject for this topic than the upgrade playbooks. Plus I'd really like to let people know they can run checks from this container.
---set INVENTORY_FILE=/etc/ansible/hosts \
---set PLAYBOOK_FILE=playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade.yml \//<1>
+--set INVENTORY_FILE=/path/to/inventory \
+--set PLAYBOOK_FILE=playbooks/byo/openshift-checks/pre-install.yml \//<1>
actually I have to update this - currently it won't work with a relative path.
Force-pushed from 9bbb592 to 495c0e4
---set INVENTORY_FILE=/etc/ansible/hosts \
---set PLAYBOOK_FILE=playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade.yml \//<1>
+--set INVENTORY_FILE=/path/to/inventory \
+--set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \//<1>
As currently implemented, you actually need the full path for the playbook. This will change "soon" with updates to the image and we'll be able to go back to relative paths (the full path should always work though).
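To illustrate, the two forms side by side (paths taken from the hunks above; the relative form depends on that future image update, while the full path should always work):
----
# Full path (works with the current image):
--set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml
# Relative path (expected to work again after image updates):
--set PLAYBOOK_FILE=playbooks/byo/openshift-checks/pre-install.yml
----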
@openshift/team-documentation these are changes for 3.6 and origin ... hopefully by the time 3.7 rolls around we will have an updated atomic with a better workflow and no longer tech preview. EDIT: actually the better workflow is here already :) Updated...
Force-pushed from 495c0e4 to 7f9d6db
--storage=ostree \
--name=openshift-installer \//<1>
--set INVENTORY_FILE=/path/to/inventory \//<2>
ifdef::openshift-enterprise[]
-docker:registry.access.redhat.com/openshift3/ose-ansible:v3.6
+registry.access.redhat.com/openshift3/ose-ansible:v3.6
@ashcrow @giuseppe check me on this -- it looks like the latest version of atomic on RHEL now has more of the functionality you wanted here.
Previously, it needed to pull through docker, it would import into ostree every time, and it created a regular systemd service (not one-time, no output, just journald, and the service definition was retained after running).
That is still the behavior AFAICS if I specify the image with the `docker:` prefix; however, if I leave that off, then it directly imports the image to ostree (no docker even running), creates a one-time service, runs it, and uninstalls it, which I think is the behavior you were wanting to get to.
Just wanted you to check if I'm missing anything here.
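For context, the full invocation under discussion, assembled from the hunks in this review (the inventory path is a placeholder, and the enterprise image tag is the one shown above):
----
# atomic install --system \
    --storage=ostree \
    --name=openshift-installer \
    --set INVENTORY_FILE=/path/to/inventory \
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \
    registry.access.redhat.com/openshift3/ose-ansible:v3.6
----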
[...]if I leave that off, then it directly imports the image to ostree (no docker even running), creates a one-time service, runs it, and uninstalls it, which I think is the behavior you were wanting to get to.
That is the `atomic.run once` feature that is much cleaner than the original one. Looks like RHEL 7.4 has atomic 1.18.1-3, which has the run-once ability. I'm not sure if the docker import is still needed or not with that version. @giuseppe?
It looks like with that feature, it doesn't even use the service name provided. I don't think it ever creates a systemd service per se, and it doesn't seem to send anything to journald. So I'll take out all the discussion of the systemd service, which should make this even simpler :)
Updated per things I discovered about the `atomic install` command just now.
*docker* service. The installer image provides the same functionality
as the traditional RPM-based installer, but it runs in a containerized
environment that provides all of its dependencies rather than being
installed directly on the host.
This is the main advantage -- don't have to install a bunch of RPMs on your host. So I think it's worth mentioning.
Agreed
+
----
-$ atomic install --system \
+# atomic install --system \
`atomic install` I think you have to run as root. Is it customary to use the root prompt in docs?
Correct. It must be run as root.
Yes, it must run as root. `atomic install --system` would fail anyway if not running as root.
$ systemctl start openshift-installer
----
This command initiates the cluster installation using the inventory file specified and the root user's
SSH configuration, logging to the terminal as well as *_/var/log/ansible.log_*. The first time this is executed, the image will be imported into
Things that were not apparent before... ssh config comes from root for this use case, and where to find logs.
Looks great!
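A quick way to watch that log while the installer runs, assuming the default path from the snippet above:
----
# tail -f /var/log/ansible.log
----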
@sosiouxme thank you for working on this!
Force-pushed from 7f9d6db to ace08df
I added a commit to also discuss running the installer image via Docker. This can be worked on separately if preferred, though they are closely related.
@openshift/team-documentation can this PR get some love?
@sosiouxme is this for 3.7?
@vikram-redhat it's not specific to 3.7; it's been relevant since it was written two months ago. 3.7 does not block on getting this in, if that's what you're asking.
LGTM
This installer image provides the same functionality as the RPM-based
installer, but it runs in a containerized environment that provides all
of its dependencies rather than being installed directly on the host.
So, the only requirement to use it is the ability to run a container.
I'd remove the 'So,'
@sosiouxme could you please rebase this PR? It looks like this change didn't get in, and the documentation still reports the old way of using the system container installer.
👍 we've had some confusion over usage and I think this PR clears things up.
Force-pushed from 03aea35 to c46d96d
Update, simplify, and also note some things that were not apparent about running as a system container. Rearrange containerized install material to enable discussion of running it as a Docker container.
Force-pushed from c46d96d to 7c24c89
@giuseppe @ashcrow sure thing guys.
* SSH key(s) so that Ansible can reach your hosts.
* An Ansible inventory file.
* The location of an Ansible playbook to run against that inventory.
s/an/the
Note that this must be run by a non-`root` user with access to *docker*.
+
----
$ docker run -t -u `id -u` \
Please use callouts for explaining this code instead of the list. For example:
$ docker run -t -u `id -u` \ <1>
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ <2>
...and so on.
and after the code block use the callouts as:
<1> `-u `id -u`` makes the container run with the same UID as the current
user, which allows that user to use the SSH key inside the container (SSH
private keys are expected to be readable only by their owner).
<2> `-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z` mounts your
SSH key (`$HOME/.ssh/id_rsa`) under the container user's `$HOME/.ssh`
(*_/opt/app-root/src_* is the `$HOME` of the user in the container). If
you mount the SSH key into a non-standard location, you can add an
environment variable with `-e ANSIBLE_PRIVATE_KEY_FILE=/the/mount/point`
or set `ansible_ssh_private_key_file=/the/mount/point` as a variable in
the inventory to point Ansible at it.
...and so on.
The text portions used above are examples only; please do not use them as-is.
This results in some really huge callouts, but I guess that's fine. I think originally there wasn't as clear a correspondence between the lines of the command and the list explanation. Now the only real mismatch is that the third callout is about two lines, but I guess that's fine too.
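Pulling the thread's pieces together, a sketch of the full non-root invocation under discussion: the SSH key mount, an inventory mounted as *_/tmp/inventory_* with the corresponding environment variable, and the pre-install checks playbook mentioned earlier. The inventory host path and the playbook choice are placeholders for illustration; exact flags may differ in the final docs.
----
$ docker run -t -u `id -u` \
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
    -v /path/to/inventory:/tmp/inventory:Z \
    -e INVENTORY_FILE=/tmp/inventory \
    -e PLAYBOOK_FILE=playbooks/byo/openshift-checks/pre-install.yml \
    registry.access.redhat.com/openshift3/ose-ansible:v3.6
----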
or set `ansible_ssh_private_key_file=/the/mount/point` as a variable in
the inventory to point Ansible at it.
+
Note that the SSH key is mounted with the `:Z` flag: this is
s/flag:/flag**.**
+
Note that the SSH key is mounted with the `:Z` flag: this is
required so that the container can read the SSH key under
its restricted SELinux context; this also means that your
s/SELinux context; this also/SELinux context**.** This also
original SSH key file will be re-labeled to something like
`system_u:object_r:container_file_t:s0:c113,c247`. For more details
about `:Z` please check the `docker-run(1)` man page. Keep this in mind
when providing these volume mount specifications because this could
s/this could/this might
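To see the relabeling described above after a run, a hedged check (the exact MCS categories, like `c113,c247` in the example label, will differ per container):
----
$ ls -Z $HOME/.ssh/id_rsa
----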
mount a static Ansible inventory file into the container as
*_/tmp/inventory_* and set the corresponding environment variable to
point at it. As with the SSH key, the inventory file SELinux labels may
need to be relabeled via the `:Z` flag to allow reading in the container,
s/via/by using
@sosiouxme Thank you for the PR. I have suggested a few updates, not big ones but multiple small ones. Then I can merge this one after QE checks.
cc @wkshi
@gaurav-nelson @sosiouxme @wkshi thanks guys for working on this!
Thanks, I'll fix those up today.
@gaurav-nelson thanks for the review; I think I've addressed everything except #5258 (review) where I have a lingering question.
Thank you @sosiouxme for the updates, I am waiting for QE review before merging this. After that's done, I will do a follow-up PR and then we can work on a better explanation for your question. Thanks again!
Looks good to me. (QE side)
@gaurav-nelson looks like we got QE approval 🎆 🎆
👏
@gaurav-nelson I'm getting a lot of conflicts trying to pick something else from master to 3.9-stage, and I think it's because this PR should have gotten picked to 3.9 when it was picked to 3.6 and 3.7. I'm gonna try doing that.
Simplify slightly and also note some things that were not apparent.
I would like to go on to describe using the container from docker, but I thought it would be good to fix up what's there first.