- Overview
- Before You Begin
- Configuring Ansible Inventory Files
- Configuring Cluster Variables
- Configuring Deployment Type
- Configuring Host Variables
- Configuring Master API Port
- Configuring Cluster Pre-install Checks
- Configuring System Containers
- Configuring a Registry Location
- Configuring a Registry Route
- Configuring the Registry Console
- Configuring {gluster} Persistent Storage
- Configuring an OpenShift Container Registry
- Configuring Global Proxy Options
- Configuring Schedulability on Masters
- Configuring Node Host Labels
- Configuring Session Options
- Configuring Custom Certificates
- Configuring Certificate Validity
- Configuring Cluster Metrics
- Configuring Cluster Logging
- Configuring the Service Catalog
- Configuring the OpenShift Ansible Broker
- Configuring the Template Service Broker
- Configuring Web Console Customization
- Example Inventory Files
- Running the Advanced Installation
- Verifying the Installation
- Uninstalling {product-title}
- Known Issues
- What’s Next?
A reference configuration implemented using Ansible playbooks is available as the advanced installation method for installing a {product-title} cluster. Familiarity with Ansible is assumed, however you can use this configuration as a reference to create your own implementation using the configuration management tool of your choosing.
Important
|
While RHEL Atomic Host is supported for running containerized {product-title} services, the advanced installation method utilizes Ansible, which is not available in RHEL Atomic Host. The RPM-based installer must therefore be run from a RHEL 7 system. The host initiating the installation does not need to be intended for inclusion in the {product-title} cluster, but it can be. Alternatively, a containerized version of the installer is available as a system container, which can be run from a RHEL Atomic Host system. |
Note
|
To install {product-title} as a stand-alone registry, see Installing a Stand-alone Registry. |
Before installing {product-title}, you must first see the Prerequisites and Host Preparation topics to prepare your hosts. This includes verifying system and environment requirements per component type and properly installing and configuring Docker. It also includes installing Ansible version 2.4 or later, as the advanced installation method is based on Ansible playbooks and as such requires directly invoking Ansible.
If you are interested in installing {product-title} using the containerized method (optional for RHEL but required for RHEL Atomic Host), see Installing on Containerized Hosts to ensure that you understand the differences between these methods, then return to this topic to continue.
For large-scale installs, including suggestions for optimizing install time, see the Scaling and Performance Guide.
After following the instructions in the Prerequisites topic and deciding between the RPM and containerized methods, you can continue in this topic to Configuring Ansible Inventory Files.
The /etc/ansible/hosts file is Ansible’s inventory file for the playbook used to install {product-title}. The inventory file describes the configuration for your {product-title} cluster. You must replace the default contents of the file with your desired configuration.
The following sections describe commonly-used variables to set in your inventory file during an advanced installation, followed by example inventory files you can use as a starting point for your installation.
Many of the Ansible variables described are optional. Accepting the default values should suffice for development environments, but for production environments, it is recommended you read through and become familiar with the various options available.
The example inventories describe various environment topographies, including using multiple masters for high availability. You can choose an example that matches your requirements, modify it to match your own environment, and use it as your inventory file when running the advanced installation.
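For orientation, the following minimal sketch (assuming a single combined master/etcd host and one application node, both placeholder names) shows the overall shape of an inventory file; the complete examples later in this topic expand on it:

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"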
Images require a version number policy in order to maintain updates. See the Image Version Tag Policy section in the Architecture Guide for more information.
To assign environment variables during the Ansible install that apply more globally to your {product-title} cluster overall, indicate the desired variables in the /etc/ansible/hosts file on separate, single lines within the [OSEv3:vars] section. For example:
[OSEv3:vars]
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_default_subdomain=apps.test.example.com
The following tables describe variables for use with the Ansible installer that can be assigned cluster-wide:
Variable | Purpose | ||
---|---|---|---|
|
This variable sets the SSH user for the installer to use and defaults to
|
||
|
If |
||
|
This variable sets which INFO messages are logged to the
For more information on debug log levels, see Configuring Logging Levels. |
||
|
If set to |
||
|
Whether to enable Network Time Protocol (NTP) on cluster nodes.
|
||
|
This variable sets the parameter and arbitrary JSON values as per the requirement in your inventory hosts file. |
||
|
This variable enables API service auditing. See Audit Configuration for more information. |
||
|
This variable overrides the host name for the cluster, which defaults to the host name of the master. |
||
|
This variable overrides the public host name for the cluster, which defaults to the host name of the master. |
||
|
Optional. This variable defines the HA method when deploying multiple masters.
Supports the native method. |
||
|
This variable enables rolling restarts of HA masters (i.e., masters are taken
down one at a time) when
running
the upgrade playbook directly. It defaults to |
||
|
This variable sets the identity provider. The default value is Deny All. If you use a supported identity provider, configure {product-title} to use it. |
||
|
These variables are used to configure custom certificates which are deployed as part of the installation. See Configuring Custom Certificates for more information. |
||
|
|||
|
Validity of the auto-generated registry certificate in days. Defaults to |
||
|
Validity of the auto-generated CA certificate in days. Defaults to |
||
|
Validity of the auto-generated node certificate in days. Defaults to |
||
|
Validity of the auto-generated master certificate in days. Defaults to |
||
|
Validity of the auto-generated external etcd certificates in days. Controls
validity for etcd CA, peer, server and client certificates. Defaults to |
||
|
Set to |
||
|
These variables override defaults for session options in the OAuth configuration. See Configuring Session Options for more information. |
||
|
|||
|
|||
|
|||
|
This variable configures |
||
|
Sets |
||
|
Default node selector for automatically deploying router pods. See Configuring Node Host Labels for details. |
||
|
Default node selector for automatically deploying registry pods. See Configuring Node Host Labels for details. |
||
|
This variable enables the template service broker by specifying one or more namespaces whose templates will be served by the broker. |
||
|
This variable overrides the node selector that projects will use by default when
placing pods, which is defined by the |
||
|
{product-title} adds the specified additional registry or registries to the
docker configuration. These are the registries to search.
If the registry requires a port other than 443, specify it in the form <address>:<port>. For example: openshift_docker_additional_registries=example.com:443 |
||
|
{product-title} adds the specified additional insecure registry or registries to
the docker configuration. For any of these registries, secure sockets layer
(SSL) is not verified. Also, add these registries to |
||
|
{product-title} adds the specified blocked registry or registries to the
docker configuration. Block the listed registries. Setting this to |
||
|
This variable sets the host name for integration with the metrics console by
overriding |
||
|
This variable is a cluster identifier unique to the AWS Availability Zone. Using this avoids potential issues in Amazon Web Service (AWS) with multiple zones or multiple clusters. See Labeling Clusters for AWS for details. |
||
|
Use this variable to specify a container image tag to install or configure. |
||
|
Use this variable to specify an RPM version to install or configure. |
Warning
|
If you modify the
|
Variable | Purpose |
---|---|
|
This variable overrides the default subdomain to use for exposed routes. |
|
This variable configures which
OpenShift SDN plug-in to
use for the pod network, which defaults to |
|
This variable overrides the SDN cluster network CIDR block. This is the network
from which pod IPs are assigned. This network block should be a private block
and must not conflict with existing network blocks in your infrastructure to
which pods, nodes, or the master may require access. Defaults to |
|
This variable configures the subnet in which
services
will be created within the
{product-title}
SDN. This network block should be private and must not conflict with any
existing network blocks in your infrastructure to which pods, nodes, or the
master may require access, or the installation will fail. Defaults to
|
|
This variable specifies the size of the per host subnet allocated for pod IPs
by
{product-title}
SDN. Defaults to |
|
This variable specifies the
service
proxy mode to use: either |
|
This variable enables flannel as an alternative networking layer instead of
the default SDN. If enabling flannel, disable the default SDN with the
|
|
Set to |
Various defaults used throughout the playbooks and roles used by the installer are based on the deployment type configuration (usually defined in an Ansible inventory file).
To assign environment variables to hosts during the Ansible installation, indicate the desired variables in the /etc/ansible/hosts file after the host entry in the [masters] or [nodes] sections. For example:
[masters]
ec2-52-6-179-239.compute-1.amazonaws.com openshift_public_hostname=ose3-master.public.example.com
The following table describes variables for use with the Ansible installer that can be assigned to individual host entries:
Variable | Purpose |
---|---|
|
This variable overrides the internal cluster host name for the system. Use this when the system’s default IP address does not resolve to the system host name. |
|
This variable overrides the system’s public host name. Use this for cloud installations, or for hosts on networks using a network address translation (NAT). |
|
This variable overrides the cluster internal IP address for the system. Use this
when using an interface that is not configured with the default
route. |
|
This variable overrides the system’s public IP address. Use this for cloud installations, or for hosts on networks using a network address translation (NAT). |
|
If set to true, containerized {product-title} services are run on the target master and node hosts instead of installed using RPM packages. If set to false or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and is automatically selected for you based on the detection of the /run/ostree-booted file. See Installing on Containerized Hosts for more details. |
|
This variable adds labels to nodes during installation. See Configuring Node Host Labels for more details. |
|
This variable is used to configure |
|
This variable configures additional docker options. The following example configures Docker to use the json-file log driver with log rotation: "--log-driver json-file --log-opt max-size=1M --log-opt max-file=3". Do not use this variable when running docker as a system container. |
|
This variable configures whether the host is marked as a schedulable node, meaning that it is available for placement of new pods. See Configuring Schedulability on Masters. |
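As an illustration only, several of the host variables described above can be combined on a single host entry in the inventory; the host names and label values here are placeholders:

[nodes]
node1.example.com openshift_public_hostname=node1.public.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}" openshift_schedulable=true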
To configure the default ports used by the master API, configure the following variables in the /etc/ansible/hosts file:
Variable | Purpose |
---|---|
|
This variable sets the port number to access the {product-title} API. |
For example:
openshift_master_api_port=3443
The web console port setting (openshift_master_console_port) must match the API server port (openshift_master_api_port).
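For example, a sketch that states both ports explicitly while keeping them on the default value of 8443:

[OSEv3:vars]
openshift_master_api_port=8443
openshift_master_console_port=8443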
Pre-install checks are a set of diagnostic tasks that run as part of the openshift_health_checker Ansible role. They run prior to an Ansible installation of {product-title}, ensure that required inventory values are set, and identify potential issues on a host that can prevent or interfere with a successful installation.
The following table describes available pre-install checks that will run before every Ansible installation of {product-title}:
Check Name | Purpose |
---|---|
|
This check ensures that a host has the recommended amount of memory for the
specific deployment of {product-title}. Default values have been derived from
the
latest
installation documentation. A user-defined value for minimum memory
requirements may be set by setting the |
|
This check only runs on etcd, master, and node hosts. It ensures that the mount
path for an {product-title} installation has sufficient disk space remaining.
Recommended disk values are taken from the
latest
installation documentation. A user-defined value for minimum disk space
requirements may be set by setting |
|
Only runs on hosts that depend on the docker daemon (nodes and containerized
installations). Checks that docker's total usage does not exceed a
user-defined limit. If no user-defined limit is set, docker's maximum usage
threshold defaults to 90% of the total size available. The threshold limit for
total percent usage can be set with a variable in your inventory file:
|
|
Ensures that the docker daemon is using a storage driver supported by
{product-title}. If the |
|
Attempts to ensure that images required by an {product-title} installation are available either locally or in at least one of the configured container image registries on the host machine. |
|
Runs on |
|
Runs prior to non-containerized installations of {product-title}. Ensures that RPM packages required for the current installation are available. |
|
Checks whether a |
To disable specific pre-install checks, include the variable openshift_disable_check with a comma-delimited list of check names in your inventory file. For example:
openshift_disable_check=memory_availability,disk_availability
Note
|
A similar set of health checks meant to run for diagnostics on existing clusters can be found in Ansible-based Health Checks. Another set of checks for checking certificate expiration can be found in Redeploying Certificates. |
System containers provide a way to containerize services that need to run before
the docker daemon is running. They are Docker-formatted containers that are stored
and run outside of the traditional docker service. For more details on system
container technology, see
Running System Containers in the Red Hat Enterprise Linux Atomic Host: Managing Containers documentation.
You can configure your {product-title} installation to run certain components as
system containers instead of their RPM or standard containerized methods.
Currently, the docker
and etcd components can be run as system containers in
{product-title}.
Warning
|
System containers are currently OS-specific because they require specific
versions of |
The traditional method for using docker
in an {product-title} cluster is an
RPM package installation. For Red Hat Enterprise Linux (RHEL) systems, it must be
specifically installed; for RHEL Atomic Host systems, it is provided by default.
However, you can configure your {product-title} installation to alternatively
run docker
on node hosts as a system container. When using the system
container method, the container-engine
container image and systemd service is
used on the host instead of the docker
package and service.
To run docker
as a system container:
-
Because the default storage back end for Docker on RHEL 7 is a thin pool on loopback devices, for any RHEL systems you must still configure a thin pool logical volume for
docker
to use before running the {product-title} installation. You can skip these steps for any RHEL Atomic Host systems.For any RHEL systems, perform the steps described in the following sections:
After completing the storage configuration steps, you can leave the RPM installed.
-
Set the following cluster variable to
True
in your inventory file in the[OSEv3:vars]
section:openshift_docker_use_system_container=True
When using the system container method, the following inventory variables for
docker
are ignored:
-
docker_version
-
docker_upgrade
Further, the following inventory variable must not be used:
-
openshift_docker_options
You can also force docker
in the system container to use a specific container
registry and repository when pulling the container-engine
image instead of
from the default registry.access.redhat.com/openshift3/
. To do so, set the
following cluster variable in your inventory file in the [OSEv3:vars]
section:
openshift_docker_systemcontainer_image_override="<registry>/<user>/<image>:<tag>"
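Putting the two settings together, a minimal sketch of the relevant [OSEv3:vars] entries (the registry path shown is a placeholder, not a recommendation):

[OSEv3:vars]
openshift_docker_use_system_container=True
openshift_docker_systemcontainer_image_override="registry.example.com/openshift3/container-engine:latest"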
When using the RPM-based installation method for {product-title}, etcd is
installed using RPM packages on any RHEL systems. When using the containerized
installation method, the rhel7/etcd
image is used instead for RHEL or RHEL
Atomic Hosts.
However, you can configure your {product-title} installation to alternatively
run etcd as a system container. Whereas the standard containerized method uses
a systemd service named etcd_container
, the system container method uses the
service name etcd, the same as the RPM-based method. The data directory for etcd
using this method is /var/lib/etcd.
To run etcd as a system container, set the following cluster variable in your
inventory file in the [OSEv3:vars]
section:
openshift_use_etcd_system_container=True
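If you want both docker and etcd to run as system containers, the two variables described in this section can be combined; a sketch:

[OSEv3:vars]
openshift_docker_use_system_container=True
openshift_use_etcd_system_container=True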
If you are using an image registry other than the default at
registry.access.redhat.com
, specify the desired registry within the
/etc/ansible/hosts file.
oreg_url={registry}/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
Variable | Purpose |
---|---|
|
Set to the alternate image location. Necessary if you are not using the default registry at |
|
Set to |
|
Specify the additional registry or registries.
If the registry required to access the registry is other than |
For example:
oreg_url=example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
openshift_docker_additional_registries=example.com:443
To allow users to push and pull images to the internal Docker registry from outside of the {product-title} cluster, configure the registry route in the /etc/ansible/hosts file. By default, the registry route is docker-registry-default.router.default.svc.cluster.local.
Variable | Purpose |
---|---|
|
Set to the value of the desired registry route. The route contains either
a name that resolves to an infrastructure node where a router manages
communication or the subdomain that you set as the default application subdomain
wildcard value. For example, if you set the |
|
Set the paths to the registry certificates. If you do not provide values for the certificate locations, certificates are generated. You can define locations for the following certificates:
|
|
Set to one of the following values:
|
For example:
openshift_hosted_registry_routehost=<path>
openshift_hosted_registry_routetermination=reencrypt
openshift_hosted_registry_routecertificates="{'certfile': '<path>/org-cert.pem', 'keyfile': '<path>/org-privkey.pem', 'cafile': '<path>/org-chain.pem'}"
If you are using a Cockpit registry console image other than the default or require a specific version of the console, specify the desired registry within the /etc/ansible/hosts file:
openshift_cockpit_deployer_prefix=<registry_name>/<namespace>/
openshift_cockpit_deployer_version=<cockpit_image_tag>
Variable | Purpose |
---|---|
|
Specify the URL and path to the directory where the image is located. |
|
Specify the Cockpit image version. |
For example, if your image is at registry.example.com/openshift3/registry-console and you require version 3.9.3, enter:

openshift_cockpit_deployer_prefix='registry.example.com/openshift3/'
openshift_cockpit_deployer_version='3.9.3'
Additional information and examples, including the ones below, can be found at Persistent Storage Using {gluster}.
Important
|
See {gluster-native} Considerations for specific host preparations and prerequisites. |
An integrated OpenShift Container Registry can be deployed using the advanced installer.
If no registry storage options are used, the default OpenShift Container Registry is ephemeral and all data will be lost when the pod no longer exists. There are several options for enabling registry storage when using the advanced installer:
Note
|
The use of NFS for registry storage is not recommended in {product-title}. |
When the following variables are set, an NFS volume is created during an
advanced install with the path <nfs_directory>/<volume_name> on the host
within the [nfs]
host group. For example, the volume path using these options
would be /exports/registry:
[OSEv3:vars]
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
Note
|
The use of NFS for registry storage is not recommended in {product-title}. |
To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host. The remote volume path using the following options would be nfs.example.com:/exports/registry.
[OSEv3:vars]
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_host=nfs.example.com
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
The use of NFS for the core {product-title} components is not recommended, as NFS (and the NFS Protocol) does not provide the proper consistency needed for the applications that make up the {product-title} infrastructure.
As a result, the installer and update playbooks require an option to enable the use of NFS with core infrastructure components.
# Enable unsupported configurations, things that will yield a partially
# functioning cluster but would not be supported for production use
#openshift_enable_unsupported_configurations=false
If you see the following messages when upgrading or installing an {product-title} 3.9.z cluster, then an additional step is required.
TASK [Run variable sanity checks] **********************************************
fatal: [host.example.com]: FAILED! => {"failed": true, "msg": "last_checked_host: host.example.com, last_checked_var: openshift_hosted_registry_storage_kind;nfs is an unsupported type for openshift_hosted_registry_storage_kind. openshift_enable_unsupported_configurations=True mustbe specified to continue with this configuration."}
In your Ansible inventory file, specify the following parameter:
[OSEv3:vars]
openshift_enable_unsupported_configurations=True
An OpenStack storage configuration must already exist.
[OSEv3:vars]
openshift_hosted_registry_storage_kind=openstack
openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
openshift_hosted_registry_storage_openstack_filesystem=ext4
openshift_hosted_registry_storage_openstack_volumeID=3a650b4f-c8c5-4e0a-8ca5-eaee11f16c57
openshift_hosted_registry_storage_volume_size=10Gi
The simple storage solution (S3) bucket must already exist.
[OSEv3:vars]
#openshift_hosted_registry_storage_kind=object
#openshift_hosted_registry_storage_provider=s3
#openshift_hosted_registry_storage_s3_accesskey=access_key_id
#openshift_hosted_registry_storage_s3_secretkey=secret_access_key
#openshift_hosted_registry_storage_s3_bucket=bucket_name
#openshift_hosted_registry_storage_s3_region=bucket_region
#openshift_hosted_registry_storage_s3_chunksize=26214400
#openshift_hosted_registry_storage_s3_rootdirectory=/registry
#openshift_hosted_registry_pullthrough=true
#openshift_hosted_registry_acceptschema2=true
#openshift_hosted_registry_enforcequota=true
If you are using a different S3 service, such as Minio or ExoScale, also add the region endpoint parameter:
openshift_hosted_registry_storage_s3_regionendpoint=https://myendpoint.example.com/
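For reference, an uncommented sketch of the S3 configuration, reusing the placeholder values from the commented example above together with the region endpoint for an S3-compatible service:

[OSEv3:vars]
openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_provider=s3
openshift_hosted_registry_storage_s3_accesskey=access_key_id
openshift_hosted_registry_storage_s3_secretkey=secret_access_key
openshift_hosted_registry_storage_s3_bucket=bucket_name
openshift_hosted_registry_storage_s3_region=bucket_region
openshift_hosted_registry_storage_s3_regionendpoint=https://myendpoint.example.com/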
Similar to configuring {gluster-native}, {gluster} can be configured to provide storage for an OpenShift Container Registry during the initial installation of the cluster to offer redundant and reliable storage for the registry.
Important
|
See {gluster-native} Considerations for specific host preparations and prerequisites. |
A GCS bucket must already exist.
[OSEv3:vars]
openshift_hosted_registry_storage_provider=gcs
openshift_hosted_registry_storage_gcs_bucket=bucket01
openshift_hosted_registry_storage_gcs_keyfile=test.key
openshift_hosted_registry_storage_gcs_rootdirectory=/registry
If your hosts require use of an HTTP or HTTPS proxy in order to connect to external hosts, there are many components that must be configured to use the proxy, including masters, Docker, and builds. Node services only connect to the master API, which requires no external access, and therefore do not need to be configured to use a proxy.
In order to simplify this configuration, the following Ansible variables can be specified at a cluster or host level to apply these settings uniformly across your environment.
Note
|
See Configuring Global Build Defaults and Overrides for more information on how the proxy environment is defined for builds. |
Variable | Purpose |
---|---|
|
This variable specifies the |
|
This variable specifies the |
|
This variable is used to set the |
|
This boolean variable specifies whether or not the names of all defined
OpenShift hosts and |
|
This variable defines the |
|
This variable defines the |
|
This variable defines the |
|
This variable defines the HTTP proxy used by |
|
This variable defines the HTTPS proxy used by |
If any of:
-
openshift_no_proxy
-
openshift_https_proxy
-
openshift_http_proxy
are set, then all cluster hosts will have an automatically generated NO_PROXY
environment variable injected into several service configuration scripts. The
default .svc
domain and your cluster’s dns_domain
(typically
.cluster.local
) will also be added.
Note
|
Setting |
Any hosts you designate as masters during the installation process should also
be configured as nodes so that the masters are configured as part of the
OpenShift SDN. You must do so by adding entries for these hosts to the [nodes]
section:
[nodes]
master.example.com
In previous versions of {product-title}, master hosts were marked as unschedulable nodes by default by the installer, meaning that new pods could not be placed on the hosts. Starting with {product-title} 3.9, however, masters are marked schedulable automatically during installation. This change is mainly so that the web console, which used to run as part of the master itself, can instead be run as a pod deployed to the master.
If you want to change the schedulability of a host post-installation, see Marking Nodes as Unschedulable or Schedulable.
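If you nevertheless need a master to be unschedulable at installation time, the openshift_schedulable host variable described in Configuring Host Variables can be set on the master’s [nodes] entry; a sketch:

[nodes]
master.example.com openshift_schedulable=false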
You can assign
labels to
node hosts during the Ansible install by configuring the /etc/ansible/hosts
file. Labels are useful for determining the placement of pods onto nodes using
the
scheduler.
Other than region=infra
(referred to as dedicated infrastructure nodes and
discussed further in Configuring
Dedicated Infrastructure Nodes), the actual label names and values are
arbitrary and can be assigned however you see fit per your cluster’s
requirements.
To assign labels to a node host during an Ansible install, use the
openshift_node_labels
variable with the desired labels added to the desired
node host entry in the [nodes]
section. In the following example, labels are
set for a region called primary
and a zone called east
:
[nodes]
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
Starting in {product-title} 3.9, masters are now marked as schedulable nodes by
default. As a result, the default node selector (defined in the master
configuration file’s projectConfig.defaultNodeSelector
field to determine
which nodes projects will use by default when placing pods, and previously
left blank by default) is now set by default during cluster installations. It is
set to node-role.kubernetes.io/compute=true
unless overridden using the
osm_default_node_selector
Ansible variable.
In addition, whether osm_default_node_selector
is set or not, the following
automatic labeling occurs for hosts defined in your inventory file during
installation:
-
non-master, non-dedicated infrastructure node hosts (for example, the
node1.example.com
host shown above) are labeled withnode-role.kubernetes.io/compute=true
-
master nodes are labeled
node-role.kubernetes.io/master=true
This ensures that the default node selector has available nodes to choose from when determining pod placement.
Important
|
If you accept the default node selector of
|
See Setting the Cluster-wide Default Node Selector for steps on adjusting this setting post-installation if needed.
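For example, a sketch that overrides the cluster-wide default node selector (the label shown is illustrative; use a label that is actually applied to your nodes):

[OSEv3:vars]
osm_default_node_selector='region=primary'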
It is recommended for production environments that you maintain dedicated infrastructure nodes where the registry and router pods can run separately from pods used for user applications.
The openshift_router_selector
and openshift_registry_selector
Ansible
settings determine the label selectors used when placing registry and router
pods. They are set to region=infra
by default:
# default selectors for router and registry services
# openshift_router_selector='region=infra'
# openshift_registry_selector='region=infra'
The registry and router are only able to run on node hosts with the
region=infra
label, which are then considered dedicated infrastructure nodes.
Ensure that at least one node host in your {product-title} environment has the
region=infra
label. For example:
[nodes]
infra-node1.example.com openshift_node_labels="{'region': 'infra','zone': 'default'}"
Important
|
If there is not a node in the |
If you do not intend to use {product-title} to manage the registry and router, configure the following Ansible settings:
openshift_hosted_manage_registry=false
openshift_hosted_manage_router=false
If you are using an image registry other than the default registry.access.redhat.com
,
you need to specify the desired registry
in the /etc/ansible/hosts file.
As described in Configuring
Schedulability on Masters, master hosts are marked schedulable by default. If
you label a master host with region=infra
and have no other dedicated
infrastructure nodes, the master hosts must also be marked as schedulable.
Otherwise, the registry and router pods cannot be placed anywhere:
[nodes]
master.example.com openshift_node_labels="{'region': 'infra','zone': 'default'}" openshift_schedulable=true
Session
options in the OAuth configuration are configurable in the inventory file. By
default, Ansible populates a sessionSecretsFile
with generated
authentication and encryption secrets so that sessions generated by one master
can be decoded by the others. The default location is
/etc/origin/master/session-secrets.yaml, and this file will only be
re-created if deleted on all masters.
You can set the session name and maximum number of seconds with
openshift_master_session_name
and openshift_master_session_max_seconds
:
openshift_master_session_name=ssn
openshift_master_session_max_seconds=3600
If provided, openshift_master_session_auth_secrets
and
openshift_master_encryption_secrets
must be equal length.
For openshift_master_session_auth_secrets
, used to authenticate sessions
using HMAC, it is recommended to use secrets with 32 or 64 bytes:
openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
For openshift_master_encryption_secrets
, used to encrypt sessions, secrets
must be 16, 24, or 32 characters long, to select AES-128, AES-192, or AES-256:
openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
Custom serving certificates for the public host names of the {product-title} API and web console can be deployed during an advanced installation and are configurable in the inventory file.
Note
|
Custom certificates should only be configured for the host name associated with
the |
Certificate and key file paths can be configured using the
openshift_master_named_certificates
cluster variable:
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "cafile": "/path/to/custom-ca1.crt"}]
File paths must be local to the system where Ansible will be run. Certificates are copied to master hosts and are deployed within the /etc/origin/master/named_certificates/ directory.
Ansible detects a certificate’s Common Name
and Subject Alternative Names
.
Detected names can be overridden by providing the "names"
key when setting
openshift_master_named_certificates
:
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"], "cafile": "/path/to/custom-ca1.crt"}]
Certificates configured using openshift_master_named_certificates
are cached
on masters, meaning that each additional Ansible run with a different set of
certificates results in all previously deployed certificates remaining in place
on master hosts and within the master configuration file.
If you would like openshift_master_named_certificates
to be overwritten with
the provided value (or no value), specify the
openshift_master_overwrite_named_certificates
cluster variable:
openshift_master_overwrite_named_certificates=true
For a more complete example, consider the following cluster variables in an inventory file:
openshift_master_cluster_method=native
openshift_master_cluster_hostname=lb-internal.openshift.com
openshift_master_cluster_public_hostname=custom.openshift.com
To overwrite the certificates on a subsequent Ansible run, you could set the following:
openshift_master_named_certificates=[{"certfile": "/root/STAR.openshift.com.crt", "keyfile": "/root/STAR.openshift.com.key", "names": ["custom.openshift.com"]}] openshift_master_overwrite_named_certificates=true
By default, the certificates used to govern the etcd, master, and kubelet expire after two to five years. The validity (length in days until they expire) for the auto-generated registry, CA, node, and master certificates can be configured during installation using the following variables (default values shown):
[OSEv3:vars]
openshift_hosted_registry_cert_expire_days=730
openshift_ca_cert_expire_days=1825
openshift_node_cert_expire_days=730
openshift_master_cert_expire_days=730
etcd_ca_default_days=1825
These values are also used when redeploying certificates via Ansible post-installation.
Cluster metrics are not set to automatically deploy. Set the following to enable cluster metrics when using the advanced installation method:
[OSEv3:vars]
openshift_metrics_install_metrics=true
The metrics public URL can be set during cluster
installation using the openshift_metrics_hawkular_hostname
Ansible variable,
which defaults to:
https://hawkular-metrics.{{openshift_master_default_subdomain}}/hawkular/metrics
If you alter this variable, ensure the host name is accessible via your router.
openshift_metrics_hawkular_hostname=hawkular-metrics.{{openshift_master_default_subdomain}}
The openshift_metrics_cassandra_storage_type
variable must be set in order to
use persistent storage for metrics. If
openshift_metrics_cassandra_storage_type
is not set, then cluster metrics data
is stored in an emptyDir
volume, which will be deleted when the Cassandra pod
terminates.
There are three options for enabling cluster metrics storage when using the advanced install:
Use the following variable if your {product-title} environment supports dynamic volume provisioning for your cloud provider:
[OSEv3:vars]
openshift_metrics_cassandra_storage_type=dynamic
Important
|
The use of NFS for metrics storage is not recommended in {product-title}. |
When the following variables are set, an NFS volume is created during an
advanced install with path <nfs_directory>/<volume_name> on the host within
the [nfs]
host group. For example, the volume path using these options would
be /exports/metrics:
[OSEv3:vars]
openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_nfs_options='*(rw,root_squash)'
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi
Important
|
The use of NFS for metrics storage is not recommended in {product-title}. |
To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.
[OSEv3:vars]
openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_host=nfs.example.com
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi
The remote volume path using the following options would be nfs.example.com:/exports/metrics.
The use of NFS for the core {product-title} components is not recommended, as NFS (and the NFS Protocol) does not provide the proper consistency needed for the applications that make up the {product-title} infrastructure.
As a result, the installer and update playbooks require an option to enable the use of NFS with core infrastructure components.
# Enable unsupported configurations, things that will yield a partially
# functioning cluster but would not be supported for production use
#openshift_enable_unsupported_configurations=false
If you see the following messages when upgrading or installing an {product-title} 3.9.z cluster, then an additional step is required.
TASK [Run variable sanity checks] **********************************************
fatal: [host.example.com]: FAILED! => {"failed": true, "msg": "last_checked_host: host.example.com, last_checked_var: openshift_hosted_registry_storage_kind;nfs is an unsupported type for openshift_hosted_registry_storage_kind. openshift_enable_unsupported_configurations=True mustbe specified to continue with this configuration."}
In your Ansible inventory file, specify the following parameter:
[OSEv3:vars]
openshift_enable_unsupported_configurations=True
Cluster logging is not set to automatically deploy by default. Set the following to enable cluster logging when using the advanced installation method:
[OSEv3:vars]
openshift_logging_install_logging=true
The openshift_logging_es_pvc_dynamic
variable must be set in order to use
persistent storage for logging. If openshift_logging_es_pvc_dynamic
is
not set, then cluster logging data is stored in an emptyDir
volume, which will
be deleted when the Elasticsearch pod terminates.
There are three options for enabling cluster logging storage when using the advanced install:
Use the following variable if your {product-title} environment supports dynamic volume provisioning for your cloud provider:
[OSEv3:vars]
openshift_logging_es_pvc_dynamic=true
Important
|
The use of NFS for logging storage is not recommended in {product-title}. |
When the following variables are set, an NFS volume is created during an
advanced install with path <nfs_directory>/<volume_name> on the host within
the [nfs]
host group. For example, the volume path using these options would be
/exports/logging:
[OSEv3:vars]
openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_nfs_directory=/exports
openshift_logging_storage_nfs_options='*(rw,root_squash)'
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=10Gi
Important
|
The use of NFS for logging storage is not recommended in {product-title}. |
To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.
[OSEv3:vars]
openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_host=nfs.example.com
openshift_logging_storage_nfs_directory=/exports
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=10Gi
The remote volume path using the following options would be nfs.example.com:/exports/logging.
The use of NFS for the core {product-title} components is not recommended, as NFS (and the NFS Protocol) does not provide the proper consistency needed for the applications that make up the {product-title} infrastructure.
As a result, the installer and update playbooks require an option to enable the use of NFS with core infrastructure components.
# Enable unsupported configurations, things that will yield a partially
# functioning cluster but would not be supported for production use
#openshift_enable_unsupported_configurations=false
If you see the following messages when upgrading or installing an {product-title} 3.9.z cluster, then an additional step is required.
TASK [Run variable sanity checks] **********************************************
fatal: [host.example.com]: FAILED! => {"failed": true, "msg": "last_checked_host: host.example.com, last_checked_var: openshift_hosted_registry_storage_kind;nfs is an unsupported type for openshift_hosted_registry_storage_kind. openshift_enable_unsupported_configurations=True mustbe specified to continue with this configuration."}
In your Ansible inventory file, specify the following parameter:
[OSEv3:vars]
openshift_enable_unsupported_configurations=True
The service catalog is enabled by default during installation. Enabling the service catalog allows service brokers to be registered with the catalog.
Note
|
To disable automatic deployment, set the following cluster variables in your inventory file: openshift_enable_service_catalog=false |
When the service catalog is enabled, the OpenShift Ansible broker and template service broker are both enabled as well; see Configuring the OpenShift Ansible Broker and Configuring the Template Service Broker for more information.
The OpenShift Ansible broker (OAB) is enabled by default during installation. However, further configuration may be required for use.
The OAB deploys its own etcd instance separate from the etcd used by the rest of
the {product-title} cluster. The OAB’s etcd instance requires separate storage
using persistent volumes (PVs) to function. If no PV is available, etcd will
wait until the PV can be satisfied. The OAB application will enter a CrashLoop
state until its etcd instance is available.
Some Ansible playbook bundles (APBs) also require a PV for their own usage in order to deploy. For example, each of the database APBs has two plans: the Development plan uses ephemeral storage and does not require a PV, while the Production plan is persisted and does require a PV.
APB | PV Required? |
---|---|
postgresql-apb |
Yes, but only for the Production plan |
mysql-apb |
Yes, but only for the Production plan |
mariadb-apb |
Yes, but only for the Production plan |
mediawiki-apb |
Yes |
To configure persistent storage for the OAB:
Note
|
The following example shows usage of an NFS host to provide the required PVs, but other persistent storage providers can be used instead. |
-
In your inventory file, add
nfs
to the [OSEv3:children]
section to enable the [nfs]
group:
[OSEv3:children]
masters
nodes
nfs
-
Add a
[nfs]
group section and add the host name for the system that will be the NFS host:
[nfs]
master1.example.com
-
Add the following in the
[OSEv3:vars]
section:
openshift_hosted_etcd_storage_kind=nfs
openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"
openshift_hosted_etcd_storage_nfs_directory=/opt/osev3-etcd (1)
openshift_hosted_etcd_storage_volume_name=etcd-vol2 (1)
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=1G
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}
-
An NFS volume will be created with path
<nfs_directory>/<volume_name>
on the host within the[nfs]
group. For example, the volume path using these options would be /opt/osev3-etcd/etcd-vol2.These settings create a persistent volume that is attached to the OAB’s etcd instance during cluster installation.
-
In order to do APB development with the OpenShift Container Registry in conjunction with the OAB, a whitelist of images the OAB can access must be defined. If a whitelist is not defined, the broker will ignore APBs and users will not see any APBs available.
By default, the whitelist is empty so that a user cannot add APB images to the
broker without a cluster administrator configuring the broker. To whitelist all
images that end in -apb
:
-
In your inventory file, add the following to the
[OSEv3:vars]
section:
ansible_service_broker_local_registry_whitelist=['.*-apb$']
The template service broker (TSB) is enabled by default during installation.
To configure the TSB, one or more projects must be defined as the broker’s
source namespace(s) for loading templates and image streams into the service
catalog. Set the desired projects by modifying the following in your inventory
file’s [OSEv3:vars]
section:
openshift_template_service_broker_namespaces=['openshift','myproject']
The following Ansible variables set master configuration options for customizing the web console. See Customizing the Web Console for more details on these customization options.
Variable | Purpose |
---|---|
|
Determines whether to install the web console. Can be set to |
|
The prefix for the component images. For example, with |
|
The version for the component images. For example, with |
|
Sets |
|
Sets |
|
Sets |
|
Sets the OAuth template in the master configuration. See Customizing the Login Page for details. Example value: |
|
Sets |
|
Sets |
|
Configures the web console to log the user out automatically after a period of inactivity. Must be a whole number greater than or equal to 5, or 0 to disable the feature. Defaults to 0 (disabled). |
|
Boolean value indicating if the cluster is configured for overcommit. When |
You can configure an environment with a single master and multiple nodes, and either a single or multiple number of external etcd hosts.
Note
|
Moving from a single master cluster to multiple masters after installation is not supported. |
The following table describes an example environment for a single
master
(with a single etcd on the same host), two
nodes
for hosting user applications, and two nodes with the region=infra
label for hosting
dedicated infrastructure:
Host Name | Infrastructure Component to Install |
---|---|
master.example.com |
Master, etcd, and node |
node1.example.com |
Node |
node2.example.com |
|
infra-node1.example.com |
Node (with |
infra-node2.example.com |
You can see these example hosts present in the [masters], [etcd], and [nodes] sections of the following example inventory file:
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
master.example.com

# host group for nodes, includes region info
[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
Important
|
See Configuring Node Host Labels to ensure you understand the default node selector requirements and node label considerations beginning in {product-title} 3.9. |
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
The following table describes an example environment for a single
master,
three
etcd
hosts, two
nodes
for hosting user applications, and two nodes with the region=infra
label for hosting
dedicated infrastructure:
Host Name | Infrastructure Component to Install |
---|---|
master.example.com |
Master and node |
etcd1.example.com |
etcd |
etcd2.example.com |
|
etcd3.example.com |
|
node1.example.com |
Node |
node2.example.com |
|
infra-node1.example.com |
Node (with |
infra-node2.example.com |
You can see these example hosts present in the [masters], [nodes], and [etcd] sections of the following example inventory file:
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# host group for nodes, includes region info
[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
You can configure an environment with multiple masters, multiple etcd hosts, and multiple nodes. Configuring multiple masters for high availability (HA) ensures that the cluster has no single point of failure.
Note
|
Moving from a single master cluster to multiple masters after installation is not supported. |
When configuring multiple masters, the advanced installation supports the
native
high availability (HA) method. This method leverages the native HA
master capabilities built into {product-title} and can be combined with any load
balancing solution.
If a host is defined in the [lb] section of the inventory file, Ansible installs and configures HAProxy automatically as the load balancing solution. If no host is defined, it is assumed you have pre-configured an external load balancing solution of your choice to balance the master API (port 8443) on all master hosts.
Note
|
This HAProxy load balancer is intended to demonstrate the API server’s HA mode and is not recommended for production environments. If you are deploying to a cloud provider, Red Hat recommends deploying a cloud-native TCP-based load balancer or taking other steps to provide a highly available load balancer. |
For an external load balancing solution, you must have the following (a sketch of the related cluster variables appears after this list):
-
A pre-created load balancer virtual IP (VIP) configured for SSL passthrough.
-
A VIP listening on the port specified by the
openshift_master_api_port
value (8443 by default) and proxying back to all master hosts on that port. -
A domain name for VIP registered in DNS.
-
The domain name will become the value of both
openshift_master_cluster_public_hostname
andopenshift_master_cluster_hostname
in the {product-title} installer.
-
See the External Load Balancer Integrations example in Github for more information. For more on the high availability master architecture, see Kubernetes Infrastructure.
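A sketch of the cluster variables that would accompany such an external load balancer; the host name is a placeholder, and per the list above, the VIP’s DNS name is used for both cluster host name variables:

[OSEv3:vars]
openshift_master_cluster_method=native
openshift_master_api_port=8443
openshift_master_cluster_hostname=openshift-cluster.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com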
Note
|
The advanced installation method does not currently support multiple HAProxy load balancers in an active-passive setup. See the Load Balancer Administration documentation for post-installation amendments. |
To configure multiple masters, refer to Multiple Masters with Multiple etcd.
The following describes an example environment for three
masters
using the native
HA method, one HAProxy load balancer, three
etcd
hosts, two
nodes
for hosting user applications, and two nodes with the region=infra
label for hosting
dedicated infrastructure:
Host Name | Infrastructure Component to Install |
---|---|
master1.example.com |
Master (clustered using native HA) and node |
master2.example.com |
|
master3.example.com |
|
lb.example.com |
HAProxy to load balance API master endpoints |
etcd1.example.com |
etcd |
etcd2.example.com |
|
etcd3.example.com |
|
node1.example.com |
Node |
node2.example.com |
|
infra-node1.example.com |
Node (with |
infra-node2.example.com |
You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# apply updated node defaults
openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}

# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
The following describes an example environment for three
masters
using the native
HA method (with
etcd
on each host), one HAProxy load balancer, two
nodes
for hosting user applications, and two nodes with the region=infra
label for hosting
dedicated infrastructure:
Host Name | Infrastructure Component to Install |
---|---|
master1.example.com |
Master (clustered using native HA) and node with etcd on each host |
master2.example.com |
|
master3.example.com |
|
lb.example.com |
HAProxy to load balance API master endpoints |
node1.example.com |
Node |
node2.example.com |
|
infra-node1.example.com |
Node (with |
infra-node2.example.com |
You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
master1.example.com
master2.example.com
master3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
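Optionally, before running any installation playbooks, you can confirm that Ansible can reach every host in the inventory. This quick check is not part of the documented procedure, but it can surface SSH or DNS problems early (OSEv3 is the top-level group defined in the example inventory):
$ ansible OSEv3 -i /etc/ansible/hosts -m ping
Each host should respond with "pong"; resolve any failures before proceeding.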
After you have configured Ansible by defining an inventory file in /etc/ansible/hosts, you run the advanced installation playbook via Ansible.
The installer uses modularized playbooks, so administrators can install specific components as needed. Breaking up the roles and playbooks allows better targeting of ad hoc administration tasks, which gives you more control during installations and saves time.
Note
|
The playbooks and their ordering are detailed below in Running Individual Component Playbooks. |
The RPM-based installer uses Ansible installed via RPM packages to run playbooks and configuration files available on the local host.
Important
|
Do not run OpenShift Ansible playbooks under |
To run the RPM-based installer:
-
Run the prerequisites.yml playbook. This must be run only once before deploying a new cluster. Use the first command in the example after this list, specifying -i if your inventory file is located somewhere other than /etc/ansible/hosts.
-
Run the deploy_cluster.yml playbook to initiate the cluster installation (the second command in the example after this list).
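The exact commands depend on where the openshift-ansible content is installed. Assuming the standard RPM location used elsewhere in this topic (/usr/share/ansible/openshift-ansible), the two runs look like the following; the bracketed -i option is needed only for a non-default inventory path:
# ansible-playbook [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml

# ansible-playbook [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml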
If for any reason the installation fails, before re-running the installer, see Known Issues to check for any specific instructions or workarounds.
The image is a containerized version of the {product-title} installer. This installer image provides the same functionality as the RPM-based installer, but it runs in a containerized environment that provides all of its dependencies rather than being installed directly on the host. The only requirement to use it is the ability to run a container.
The installer image can be used as a system container. System containers are stored and run outside of the traditional docker service. This enables running the installer image from one of the target hosts without concern for the install restarting docker on the host.
To use the Atomic CLI to run the installer as a run-once system container, perform the following steps as the root user:
-
Run the prerequisites.yml playbook:
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \ (1)
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml \
    --set OPTS="-v" \
-
Specify the location on the local host for your inventory file.
This command runs a set of prerequisite tasks by using the specified inventory file and the root user's SSH configuration.
-
-
Run the deploy_cluster.yml playbook:
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \ (1)
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml \
    --set OPTS="-v" \
-
Specify the location on the local host for your inventory file.
This command initiates the cluster installation by using the specified inventory file and the root user's SSH configuration. It logs the output on the terminal and also saves it in the /var/log/ansible.log file. The first time this command is run, the image is imported into OSTree storage (system containers use this rather than docker daemon storage). On subsequent runs, it reuses the stored image.
If for any reason the installation fails, before re-running the installer, see Known Issues to check for any specific instructions or workarounds.
-
You can use the PLAYBOOK_FILE environment variable to specify other playbooks you want to run by using the containerized installer. The default value of PLAYBOOK_FILE is /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml, which is the main cluster installation playbook, but you can set it to the path of another playbook inside the container.
For example, to run the pre-install checks playbook before installation, use the following command:
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/openshift-checks/pre-install.yml \ (1)
    --set OPTS="-v" \ (2)
-
Set PLAYBOOK_FILE to the full path of the playbook starting at the playbooks/ directory. Playbooks are located in the same locations as with the RPM-based installer.
-
Set OPTS to add command line options to ansible-playbook.
The installer image can also run as a docker container anywhere that docker can run.
Warning
|
This method must not be used to run the installer on one of the hosts being configured, as the install may restart docker on the host, disrupting the installer container execution. |
Note
|
Although this method and the system container method above use the same image, they run with different entry points and contexts, so runtime parameters are not the same. |
At a minimum, when running the installer as a docker container you must provide:
-
SSH key(s), so that Ansible can reach your hosts.
-
An Ansible inventory file.
-
The location of the Ansible playbook to run against that inventory.
Here is an example of how to run an install via docker, which must be run by a non-root user with access to docker:
-
First, run the prerequisites.yml playbook:
$ docker run -t -u `id -u` \ (1)
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ (2)
    -v $HOME/ansible/hosts:/tmp/inventory:Z \ (3)
    -e INVENTORY_FILE=/tmp/inventory \ (3)
    -e PLAYBOOK_FILE=playbooks/prerequisites.yml \ (4)
    -e OPTS="-v" \ (5)
-
-u `id -u` makes the container run with the same UID as the current user, which allows that user to use the SSH key inside the container (SSH private keys are expected to be readable only by their owner).
-
-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z mounts your SSH key ($HOME/.ssh/id_rsa) under the container user's $HOME/.ssh (/opt/app-root/src is the $HOME of the user in the container). If you mount the SSH key into a non-standard location, you can add an environment variable with -e ANSIBLE_PRIVATE_KEY_FILE=/the/mount/point or set ansible_ssh_private_key_file=/the/mount/point as a variable in the inventory to point Ansible at it. Note that the SSH key is mounted with the :Z flag. This is required so that the container can read the SSH key under its restricted SELinux context. This also means that your original SSH key file will be re-labeled to something like system_u:object_r:container_file_t:s0:c113,c247. For more details about :Z, check the docker-run(1) man page. Keep this in mind when providing these volume mount specifications because this might have unexpected consequences: for example, if you mount (and therefore re-label) your whole $HOME/.ssh directory, it will block the host's sshd from accessing your public keys to log in. For this reason, you may want to use a separate copy of the SSH key (or directory) so that the original file labels remain untouched.
-
-v $HOME/ansible/hosts:/tmp/inventory:Z and -e INVENTORY_FILE=/tmp/inventory mount a static Ansible inventory file into the container as /tmp/inventory and set the corresponding environment variable to point at it. As with the SSH key, the inventory file's SELinux labels may need to be relabeled by using the :Z flag to allow reading in the container, depending on the existing label (for files in a user $HOME directory this is likely to be needed). So again you may prefer to copy the inventory to a dedicated location before mounting it. The inventory file can also be downloaded from a web server if you specify the INVENTORY_URL environment variable, or generated dynamically by using DYNAMIC_SCRIPT_URL to specify an executable script that provides a dynamic inventory.
-
-e PLAYBOOK_FILE=playbooks/prerequisites.yml specifies the playbook to run (in this example, the prerequisites playbook) as a relative path from the top level directory of openshift-ansible content. The full path from the RPM can also be used, as well as the path to any other playbook file in the container.
-
-e OPTS="-v" supplies arbitrary command line options (in this case, -v to increase verbosity) to the ansible-playbook command that runs inside the container.
-
-
Next, run the deploy_cluster.yml playbook to initiate the cluster installation:
$ docker run -t -u `id -u` \
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
    -v $HOME/ansible/hosts:/tmp/inventory:Z \
    -e INVENTORY_FILE=/tmp/inventory \
    -e PLAYBOOK_FILE=playbooks/deploy_cluster.yml \
    -e OPTS="-v" \
The main installation playbook {pb-prefix}playbooks/deploy_cluster.yml runs a set of individual component playbooks in a specific order, and the installer reports back at the end what phases you have gone through. If the installation fails during a phase, you are notified on the screen along with the errors from the Ansible run.
After you resolve the issue, rather than running the entire installation over again, you can pick up from the failed phase. You must then run each of the remaining playbooks in order (a worked example follows the table below):
# ansible-playbook [-i /path/to/inventory] <playbook_file_location>
The following table lists the individual component playbooks in the order in which they are run:
Playbook Name | File Location
---|---
Health Check | {pb-prefix}playbooks/openshift-checks/pre-install.yml
etcd Install | {pb-prefix}playbooks/openshift-etcd/config.yml
NFS Install | {pb-prefix}playbooks/openshift-nfs/config.yml
Load Balancer Install | {pb-prefix}playbooks/openshift-loadbalancer/config.yml
Master Install | {pb-prefix}playbooks/openshift-master/config.yml
Master Additional Install | {pb-prefix}playbooks/openshift-master/additional_config.yml
Node Install | {pb-prefix}playbooks/openshift-node/config.yml
GlusterFS Install | {pb-prefix}playbooks/openshift-glusterfs/config.yml
Hosted Install | {pb-prefix}playbooks/openshift-hosted/config.yml
Web Console Install | {pb-prefix}playbooks/openshift-web-console/config.yml
Metrics Install | {pb-prefix}playbooks/openshift-metrics/config.yml
Logging Install | {pb-prefix}playbooks/openshift-logging/config.yml
Prometheus Install | {pb-prefix}playbooks/openshift-prometheus/config.yml
Service Catalog Install | {pb-prefix}playbooks/openshift-service-catalog/config.yml
Management Install | {pb-prefix}playbooks/openshift-management/config.yml
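For example, if the run reported a failure during the Metrics Install phase (a hypothetical case chosen for illustration), you would resume from that playbook:
# ansible-playbook [-i /path/to/inventory] \
    {pb-prefix}playbooks/openshift-metrics/config.yml
and then run the Logging, Prometheus, Service Catalog, and Management playbooks from the table, in that order.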
After the installation completes:
-
Verify that the master is started and nodes are registered and reporting in Ready status. On the master host, run the following as root:
# oc get nodes

NAME                 STATUS    ROLES     AGE       VERSION
master.example.com   Ready     master    7h        v1.9.1+a0ce1bc657
node1.example.com    Ready     compute   7h        v1.9.1+a0ce1bc657
node2.example.com    Ready     compute   7h        v1.9.1+a0ce1bc657
-
To verify that the web console is installed correctly, use the master host name and the web console port number to access the web console with a web browser.
For example, for a master host with a host name of master.openshift.com and using the default port of 8443, the web console would be found at https://master.openshift.com:8443/console. A command-line reachability check is sketched after this list as an optional alternative.
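If you prefer the command line, a rough reachability check with curl looks like the following (master.openshift.com is the hypothetical host name from the example above; -k skips certificate verification because the default certificates are self-signed):
$ curl -k -s -o /dev/null -w "%{http_code}\n" https://master.openshift.com:8443/console
A 200 response, or a 3xx redirect to /console/, indicates that the web console is being served.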
If you installed multiple etcd hosts:
-
First, verify that the etcd package, which provides the etcdctl command, is installed:
# yum install etcd
-
On a master host, verify the etcd cluster health, substituting for the FQDNs of your etcd hosts in the following:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key cluster-health
-
Also verify the member list is correct:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key member list
If you installed multiple masters using HAProxy as a load balancer, browse to the following URL according to your [lb] section definition and check HAProxy’s status:
http://<lb_hostname>:9000
You can verify your installation by consulting the HAProxy Configuration documentation.
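If no browser is available, a quick check from any machine that can reach the load balancer might look like the following (lb.example.com is the example [lb] host from the inventory above; substitute your own host name):
$ curl -s -o /dev/null -w "%{http_code}\n" http://lb.example.com:9000
A 200 response indicates that the HAProxy status page is being served.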
You can uninstall {product-title} hosts in your cluster by running the uninstall.yml playbook. This playbook deletes {product-title} content installed by Ansible, including:
-
Configuration
-
Containers
-
Default templates and image streams
-
Images
-
RPM packages
The playbook deletes content for any hosts defined in the inventory file that you specify when running the playbook. If you want to uninstall {product-title} across all hosts in your cluster, run the playbook using the inventory file that you used when you initially installed {product-title}, or the one you ran most recently:
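For example (a sketch assuming the RPM-based installer, where uninstall.yml ships under the adhoc/ directory of the openshift-ansible content; adjust the path if your checkout differs):
# ansible-playbook [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml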
You can also uninstall node components from specific hosts using the uninstall.yml playbook while leaving the remaining hosts and cluster alone:
Warning
|
This method should only be used when attempting to uninstall specific node hosts and not for specific masters or etcd hosts, which would require further configuration changes within the cluster. |
-
First follow the steps in Deleting Nodes to remove the node object from the cluster, then continue with the remaining steps in this procedure.
-
Create a different inventory file that only references those hosts. For example, to only delete content from one node:
[OSEv3:children]
nodes (1)

[OSEv3:vars]
ansible_ssh_user=root

[nodes]
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" (2)
-
Only include the sections that pertain to the hosts you are interested in uninstalling.
-
Only include hosts that you want to uninstall.
-
-
Specify that new inventory file using the -i option when running the uninstall.yml playbook:
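A sketch of that invocation, again assuming the RPM-based playbook location (/path/to/new/inventory is a placeholder for the file created in the previous step):
# ansible-playbook -i /path/to/new/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml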
When the playbook completes, all {product-title} content should be removed from any specified hosts.
-
In multiple-master clusters, it is possible on failover for the controller manager to overcorrect, which causes the system to run more pods than intended. However, this is a transient event and the system corrects itself over time. See kubernetes/kubernetes#10030 for details.
-
On failure of the Ansible installer, you must start from a clean operating system installation. If you are using virtual machines, start from a fresh image. If you are using bare metal machines, see Uninstalling {product-title} for instructions.
Now that you have a working {product-title} instance, you can:
-
Deploy an integrated Docker registry.
-
Deploy a router.