
advanced_install: installer as system container #5258


Merged

Conversation

sosiouxme
Member

Simplify slightly and also note some things that were not apparent.

I would like to go on to describe using the container from docker, but I thought it would be good to fix up what's there first.

@sosiouxme sosiouxme force-pushed the 20170913-containerized-installer branch from dfae5e5 to 9bbb592 Compare September 13, 2017 21:40
*docker* service. The installer image provides the same functionality
as the traditional RPM-based installer, but it runs in a containerized
environment that provides all of its dependencies rather than being
installed directly on the host.
Member Author

This is the only real advantage -- don't have to install a bunch of RPMs on your host. So I think it's worth mentioning.

+
----
$ atomic pull --storage ostree \
Member Author

It turns out this whole step is unnecessary as atomic install does it for you.
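For context, the step being dropped looked roughly like this; a sketch only, with the enterprise image name taken from later in the diff (the origin image name differs), importing the image from the local *docker* engine into ostree storage:

----
$ atomic pull --storage ostree \
    docker:registry.access.redhat.com/openshift3/ose-ansible:v3.6
----

With a new enough atomic, `atomic install --system` performs this import itself, so the separate pull is redundant.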

@@ -2216,22 +2206,35 @@ ifdef::openshift-origin[]
endif::[]
----
<1> Sets the name for the systemd service.
<2> Specify the location for your inventory file on your local workstation.
<2> Specify the location for your local inventory file.
Member Author

This is the only place we referred to a local workstation... this can run anywhere that docker and systemd run, and it's actually not that likely to be a workstation.


. Use the `systemctl` command to start the installer service as you would any
other systemd service. This command initiates the cluster installation:
other systemd service. This command initiates the cluster installation using
the inventory file specified above and the root user's SSH configuration:
Member Author

This is one thing that was not apparent before... the SSH config comes from root for this use case (in the docker use case you have to mount it in).
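(A minimal sketch of what that implies, not taken from the doc itself, and the hostname is hypothetical: since the system-container installer uses the root user's SSH configuration, root's key has to be able to reach the cluster hosts before starting the service.)

----
# ssh-copy-id root@node1.example.com
# ssh root@node1.example.com 'echo ok'
----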

link:https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/content_management_guide/managing_ostree_content[OSTree]
instead of defaulting to *docker* daemon storage. Use the Atomic CLI to import
the installer image from the local *docker* engine to OSTree storage:
instead of *docker* daemon storage. As the root user, use the Atomic
Member Author

`atomic install`, `systemctl`, and `journalctl` are generally things you have to run as root. Is it customary to use the root prompt in docs?

+
----
$ systemctl start openshift-installer
----
+
There is no output from this command while the installer service is
Member Author

One potentially confusing thing the first time you run this.
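(A small sketch of the workaround, assuming the unit name from the example above; at this stage of the doc the service logs only to journald, so you can follow progress there.)

----
# systemctl start openshift-installer
# journalctl -f -u openshift-installer
----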

playbook, but you can set it to the path of another playbook inside the
container.
To run any other playbooks (for example, to run the
xref:configuring-cluster-pre-install-checks[pre-install checks]
Member Author

The pre-install checks seemed a more relevant subject for this topic than the upgrade playbooks. Plus I'd really like to let people know they can run checks from this container.

--set INVENTORY_FILE=/etc/ansible/hosts \
--set PLAYBOOK_FILE=playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade.yml \//<1>
--set INVENTORY_FILE=/path/to/inventory \
--set PLAYBOOK_FILE=playbooks/byo/openshift-checks/pre-install.yml \//<1>
Member Author

Actually, I have to update this -- currently it won't work with a relative path.

@sosiouxme sosiouxme force-pushed the 20170913-containerized-installer branch from 9bbb592 to 495c0e4 Compare September 14, 2017 13:12
--set INVENTORY_FILE=/etc/ansible/hosts \
--set PLAYBOOK_FILE=playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade.yml \//<1>
--set INVENTORY_FILE=/path/to/inventory \
--set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \//<1>
Member Author

As currently implemented, you actually need the full path for the playbook. This will change "soon" with updates to the image and we'll be able to go back to relative paths (the full path should always work though).
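For reference, a sketch of the full command as it would look with the absolute playbook path; everything here is assembled from the surrounding diff (enterprise image shown, inventory path is a placeholder):

----
# atomic install --system \
    --storage=ostree \
    --name=openshift-installer \
    --set INVENTORY_FILE=/path/to/inventory \
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \
    registry.access.redhat.com/openshift3/ose-ansible:v3.6
----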

@sosiouxme
Member Author

sosiouxme commented Sep 14, 2017

@openshift/team-documentation these are changes for 3.6 and origin ... hopefully by the time 3.7 rolls around we will have an updated atomic with a better workflow and no longer tech preview.

EDIT: actually the better workflow is here already :) Updated...

@sosiouxme sosiouxme force-pushed the 20170913-containerized-installer branch from 495c0e4 to 7f9d6db Compare September 14, 2017 15:10
--storage=ostree \
--name=openshift-installer \//<1>
--set INVENTORY_FILE=/path/to/inventory \//<2>
ifdef::openshift-enterprise[]
docker:registry.access.redhat.com/openshift3/ose-ansible:v3.6
registry.access.redhat.com/openshift3/ose-ansible:v3.6
Member Author

@ashcrow @giuseppe check me on this -- it looks like the latest version of atomic on RHEL now has more of the functionality you wanted here.

Previously, it needed to pull through docker, it imported into ostree every time, and it created a regular systemd service (not one-time, with no output except to journald, and the service definition was retained after running).

That is still the behavior AFAICS if I specify the image with the `docker:` prefix. However, if I leave that off, it directly imports the image to ostree (no docker even running), creates a one-time service, runs it, and uninstalls it, which I think is the behavior you were wanting to get to.

Just wanted you to check if I'm missing anything here.
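To make the comparison concrete, a rough sketch of the two invocations being contrasted (only the image reference differs; the enterprise image name and the other options are taken from the diff):

----
# atomic install --system --storage=ostree --name=openshift-installer \
    --set INVENTORY_FILE=/path/to/inventory \
    docker:registry.access.redhat.com/openshift3/ose-ansible:v3.6

# atomic install --system --storage=ostree --name=openshift-installer \
    --set INVENTORY_FILE=/path/to/inventory \
    registry.access.redhat.com/openshift3/ose-ansible:v3.6
----

The first form pulls through docker and leaves a regular systemd service behind; the second imports straight into ostree, runs as a one-time service, and is removed afterward.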

Member

[...]if I leave that off, then it directly imports the image to ostree (no docker even running), creates a one-time service, runs it, and uninstalls it, which I think is the behavior you were wanting to get to.

That is the atomic run-once feature, which is much cleaner than the original one. It looks like RHEL 7.4 has atomic 1.18.1-3, which has the run-once ability. I'm not sure whether the docker import is still needed with that version. @giuseppe?

Member Author

It looks like with that feature, it doesn't even use the service name provided. I don't think it ever creates a systemd service per se, and it doesn't seem to send anything to journald. So I'll take out all the discussion of the systemd service, which should make this even simpler :)

Member Author

@sosiouxme sosiouxme left a comment

Updated per things I discovered about the atomic install command just now.

*docker* service. The installer image provides the same functionality
as the traditional RPM-based installer, but it runs in a containerized
environment that provides all of its dependencies rather than being
installed directly on the host.
Member Author

This is the main advantage -- don't have to install a bunch of RPMs on your host. So I think it's worth mentioning.

Member

Agreed

+
----
$ atomic install --system \
# atomic install --system \
Member Author

I think you have to run `atomic install` as root. Is it customary to use the root prompt in docs?

Member

Correct. It must be run as root.

Member

Yes, it must run as root. `atomic install --system` would fail anyway if not running as root.
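(So, in sketch form, the documented command needs a root shell or equivalent; the `...` below stands for the full argument list shown elsewhere in this PR.)

----
# atomic install --system ...
$ sudo atomic install --system ...
----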

$ systemctl start openshift-installer
----
This command initiates the cluster installation using the inventory file specified and the root user's
SSH configuration, logging to the terminal as well as *_/var/log/ansible.log_*. The first time this is executed, the image will be imported into
Member Author

Things that were not apparent before... ssh config comes from root for this use case, and where to find logs.
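(A tiny sketch of the second point, using the log path from the doc text above; the same output also streams to the terminal that started the install.)

----
# tail -f /var/log/ansible.log
----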

Member

@ashcrow ashcrow left a comment

Looks great!


@ashcrow
Member

ashcrow commented Sep 14, 2017

@sosiouxme thank you for working on this!

@sosiouxme sosiouxme force-pushed the 20170913-containerized-installer branch from 7f9d6db to ace08df Compare September 14, 2017 17:28
@sosiouxme
Member Author

I added a commit to also discuss running the installer image via Docker. This can be worked on separately if preferred, though they are closely related.

@sosiouxme
Member Author

@openshift/team-documentation can this PR get some love?

@ncbaratta ncbaratta added the peer-review-needed Signifies that the peer review team needs to review this PR label Nov 9, 2017
@vikram-redhat
Contributor

@sosiouxme is this for 3.7?

@sosiouxme
Member Author

sosiouxme commented Nov 10, 2017

@vikram-redhat it's not specific to 3.7, it's been relevant since it was written two months ago. 3.7 does not block on getting this in, if that's what you're asking.

@ncbaratta ncbaratta left a comment

LGTM

This installer image provides the same functionality as the RPM-based
installer, but it runs in a containerized environment that provides all
of its dependencies rather than being installed directly on the host.
So, the only requirement to use it is the ability to run a container.


I'd remove the 'So,'

@ncbaratta ncbaratta added peer-review-done Signifies that the peer review team has reviewed this PR and removed peer-review-needed Signifies that the peer review team needs to review this PR labels Dec 18, 2017
@giuseppe
Member

@sosiouxme could you please rebase this PR? It looks like this change didn't get in, and the documentation still reports the old way of using the system container installer.

@ashcrow
Member

ashcrow commented Jan 11, 2018

👍 we've had some confusion over usage and I think this PR clears things up.

@sosiouxme sosiouxme force-pushed the 20170913-containerized-installer branch from 03aea35 to c46d96d Compare January 11, 2018 20:26
@openshift-ci-robot openshift-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Jan 11, 2018
Update, simplify, and also note some things that were not apparent about
running as a system container.
Rearrange containerized install material to enable discussion of running
it as a Docker container.
@sosiouxme sosiouxme force-pushed the 20170913-containerized-installer branch from c46d96d to 7c24c89 Compare January 11, 2018 20:29
@sosiouxme
Member Author

@giuseppe @ashcrow sure thing guys.
@vikram-redhat can this get moving now?


* SSH key(s) so that Ansible can reach your hosts.
* An Ansible inventory file.
* The location of an Ansible playbook to run against that inventory.
Contributor

s/an/the

Note that this must be run by a non-`root` user with access to *docker*.

----
$ docker run -t -u `id -u` \
Contributor

@gaurav-nelson gaurav-nelson Jan 12, 2018

Please use callouts for explaining this code instead of the list. For example:

$ docker run -t -u `id -u` \ <1>
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ <2> 

...and so on.

and after the code block use the callouts as:

<1> `-u `id -u`` makes the container run with the same UID as the current
user, which allows that user to use the SSH key inside the container (SSH
private keys are expected to be readable only by their owner).

<2> `-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z` mounts your
SSH key (`$HOME/.ssh/id_rsa`) under the container user's `$HOME/.ssh`
(*_/opt/app-root/src_* is the `$HOME` of the user in the container). If
you mount the SSH key into a non-standard location you can add an
environment variable with `-e ANSIBLE_PRIVATE_KEY_FILE=/the/mount/point`
or set `ansible_ssh_private_key_file=/the/mount/point` as a variable in
the inventory to point Ansible at it.

... and so on.

The text portions used above are examples only; please do not use them as-is.

Member Author

This results in some really huge callouts, but I guess that's fine. I think originally there wasn't as clear a correspondence between the lines of the command and the list explanation. Now the only real mismatch is that the third callout covers two lines, but I guess that's fine too.
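For what it's worth, a sketch of how the assembled command might end up looking once the pieces discussed in this thread are combined; this is illustrative only, and it assumes the INVENTORY_FILE/PLAYBOOK_FILE variables and paths used elsewhere in the diff (enterprise image shown, inventory path is a placeholder):

----
$ docker run -t -u `id -u` \
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
    -v /path/to/inventory:/tmp/inventory:Z \
    -e INVENTORY_FILE=/tmp/inventory \
    -e PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \
    registry.access.redhat.com/openshift3/ose-ansible:v3.6
----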

or set `ansible_ssh_private_key_file=/the/mount/point` as a variable in
the inventory to point Ansible at it.
+
Note that the SSH key is mounted with the `:Z` flag: this is
Contributor

s/flag:/flag**.**

+
Note that the SSH key is mounted with the `:Z` flag: this is
required so that the container can read the SSH key under
its restricted SELinux context; this also means that your
Contributor

@gaurav-nelson gaurav-nelson Jan 12, 2018

s/SELinux context; this also/SELinux context**.** This also

original SSH key file will be re-labeled to something like
`system_u:object_r:container_file_t:s0:c113,c247`. For more details
about `:Z` please check the `docker-run(1)` man page. Keep this in mind
when providing these volume mount specifications because this could
Contributor

s/this could/this might

mount a static Ansible inventory file into the container as
*_/tmp/inventory_* and set the corresponding environment variable to
point at it. As with the SSH key, the inventory file SELinux labels may
need to be relabeled via the `:Z` flag to allow reading in the container,
Contributor

s/via/by using
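(As an aside on the relabeling concern in this passage and in the SSH key passage above, a small sketch, not from the PR, of how you might inspect the label afterward and restore the default SELinux context if the change matters to you:)

----
$ ls -Z $HOME/.ssh/id_rsa
$ restorecon -v $HOME/.ssh/id_rsa
----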

@gaurav-nelson
Contributor

@sosiouxme Thank you for the PR. I have suggested a few updates -- not big ones, but multiple small ones.

Then I can merge this one after QE checks.

@ganhuang

cc @wkshi

@ashcrow
Member

ashcrow commented Jan 12, 2018

@gaurav-nelson @sosiouxme @wkshi thanks guys for working on this!

@sosiouxme
Member Author

Thanks, I'll fix those up today.

@sosiouxme
Member Author

@gaurav-nelson thanks for the review; I think I've addressed everything except #5258 (review) where I have a lingering question.

@gaurav-nelson
Contributor

Thank you @sosiouxme for the updates. I am waiting for QE review before merging this. After that's done, I will do a follow-up PR and then we can work on a better explanation for your question. Thanks again!

@wkshi wkshi left a comment

Looks good to me. (QE side)

@ashcrow
Member

ashcrow commented Jan 16, 2018

@gaurav-nelson looks like we got QE approval 🎆 🎆

@gaurav-nelson gaurav-nelson merged commit ebae7ec into openshift:master Jan 17, 2018
@gaurav-nelson gaurav-nelson added this to the Next Release milestone Jan 17, 2018
@sosiouxme sosiouxme deleted the 20170913-containerized-installer branch January 17, 2018 13:37
@sosiouxme
Member Author

👏

@adellape
Contributor

adellape commented Feb 9, 2018

@gaurav-nelson I'm getting a lot of conflicts trying to pick something else from master to 3.9-stage, and I think it's because this PR should have gotten picked to 3.9 when it was picked to 3.6 and 3.7. I'm gonna try doing that.
