Initial updates for OCP 3.9 install/upgrade #8107


Merged 1 commit on Mar 13, 2018
4 changes: 0 additions & 4 deletions _topic_map.yml
Original file line number Diff line number Diff line change
@@ -503,10 +503,6 @@ Topics:
File: blue_green_deployments
- Name: Updating Operating Systems
File: os_upgrades
- Name: Migrating Embedded etcd to External etcd
File: migrating_embedded_etcd
- Name: Migrating etcd Data (v2 to v3)
File: migrating_etcd
- Name: Downgrading
File: downgrade
Distros: openshift-enterprise
3 changes: 1 addition & 2 deletions admin_guide/backup_restore.adoc
@@ -109,8 +109,7 @@ following sections), which depends on how *etcd* is deployed.

[WARNING]
====
Embedded etcd is no longer supported starting with {product-title} 3.7. See
xref:../upgrading/migrating_embedded_etcd.adoc#install-config-upgrading-etcd-data-migration[Migrating Embedded etcd to External etcd] for details.
Embedded etcd is no longer supported starting with {product-title} 3.7.
====


2 changes: 1 addition & 1 deletion admin_guide/manage_nodes.adoc
@@ -30,7 +30,7 @@ To list all nodes that are known to the master:
----
$ oc get nodes
NAME STATUS AGE
master.example.com Ready,SchedulingDisabled 165d
master.example.com Ready 165d
node1.example.com Ready 165d
node2.example.com Ready 165d
----
5 changes: 3 additions & 2 deletions day_two_guide/topics/etcd_backup.adoc
@@ -28,8 +28,9 @@ etcdctl version: 3.2.5
API version: 3.2
----

See xref:../upgrading/migrating_etcd.adoc[Migrating etcd Data (v2
to v3) section] for information about how to migrate to v3.
See the
link:https://docs.openshift.com/container-platform/3.7/upgrading/migrating_etcd.html[Migrating etcd Data (v2 to v3)] section in the {product-title} 3.7 documentation
for information about how to migrate to v3.

=== Back up and restore etcd

16 changes: 8 additions & 8 deletions install_config/imagestreams_templates.adoc
@@ -169,18 +169,18 @@ endif::[]
+
ifdef::openshift-origin[]
----
$ IMAGESTREAMDIR=~/openshift-ansible/roles/openshift_examples/files/examples/v3.6/image-streams; \
DBTEMPLATES=~/openshift-ansible/roles/openshift_examples/files/examples/v3.6/db-templates; \
QSTEMPLATES=~/openshift-ansible/roles/openshift_examples/files/examples/v3.6/quickstart-templates
$ IMAGESTREAMDIR=~/openshift-ansible/roles/openshift_examples/files/examples/v3.9/image-streams; \
DBTEMPLATES=~/openshift-ansible/roles/openshift_examples/files/examples/v3.9/db-templates; \
QSTEMPLATES=~/openshift-ansible/roles/openshift_examples/files/examples/v3.9/quickstart-templates
----
endif::[]
ifdef::openshift-enterprise[]
----
$ IMAGESTREAMDIR="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.6/image-streams"; \
XPAASSTREAMDIR="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.6/xpaas-streams"; \
XPAASTEMPLATES="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.6/xpaas-templates"; \
DBTEMPLATES="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.6/db-templates"; \
QSTEMPLATES="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.6/quickstart-templates"
$ IMAGESTREAMDIR="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.9/image-streams"; \
XPAASSTREAMDIR="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.9/xpaas-streams"; \
XPAASTEMPLATES="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.9/xpaas-templates"; \
DBTEMPLATES="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.9/db-templates"; \
QSTEMPLATES="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.9/quickstart-templates"
----
endif::[]
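The variables above only point at directories on disk, so a quick existence check before importing can save a failed run. A minimal sketch, assuming the enterprise paths shown above (the check itself is hypothetical, not part of openshift-ansible):

```shell
# Hypothetical check: warn about any v3.9 example directory that is missing
# before importing its contents (paths assumed from the enterprise example).
EXAMPLES_BASE="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.9"
missing=0
for d in image-streams db-templates quickstart-templates; do
  if [ ! -d "$EXAMPLES_BASE/$d" ]; then
    echo "missing: $EXAMPLES_BASE/$d"
    missing=$((missing + 1))
  fi
done
echo "checked 3 directories, $missing missing"
```

On a host without the openshift-ansible RPMs installed, all three directories are reported as missing.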

106 changes: 60 additions & 46 deletions install_config/install/advanced_install.adoc
@@ -28,19 +28,18 @@ own implementation using the configuration management tool of your choosing.
====
While RHEL Atomic Host is supported for running containerized {product-title}
services, the advanced installation method utilizes Ansible, which is not
available in RHEL Atomic Host, and must therefore be run from
available in RHEL Atomic Host. The RPM-based installer must therefore be run
from
ifdef::openshift-enterprise[]
a RHEL 7 system.
endif::[]
ifdef::openshift-origin[]
a supported version of Fedora, CentOS, or RHEL.
endif::[]
The host initiating the installation does not need to be intended for inclusion
in the {product-title} cluster, but it can be.

Alternatively, a
xref:running-the-advanced-installation-system-container[containerized version of the installer] is available as a system container, which is currently a
Technology Preview feature.
in the {product-title} cluster, but it can be. Alternatively, a
xref:running-the-advanced-installation-system-container[containerized version of the installer] is available as a system container, which can be run from a RHEL
Atomic Host system.
====

[NOTE]
@@ -58,7 +57,7 @@ and
xref:../../install_config/install/host_preparation.adoc#install-config-install-host-preparation[Host
Preparation] topics to prepare your hosts. This includes verifying system and
environment requirements per component type and properly installing and
configuring Docker. It also includes installing Ansible version 2.3 or later,
configuring Docker. It also includes installing Ansible version 2.4 or later,
as the advanced installation method is based on Ansible playbooks and as such
requires directly invoking Ansible.
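The Ansible 2.4 minimum can be verified with a simple version comparison. This is a hedged sketch using GNU `sort -V` rather than any openshift-ansible tooling; the `version_ok` helper is hypothetical:

```shell
# Hypothetical helper: succeed if version $1 is at least minimum $2,
# using GNU sort -V for version-aware ordering.
version_ok() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

version_ok "2.4.2" "2.4" && echo "2.4.2 meets the 2.4 minimum"
version_ok "2.3.1" "2.4" || echo "2.3.1 is too old"
```

In practice you would feed it the version reported by `ansible --version` on the host initiating the installation.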

@@ -73,11 +72,10 @@ xref:../../scaling_performance/install_practices.adoc#scaling-performance-instal

After following the instructions in the
xref:../../install_config/install/prerequisites.adoc#install-config-install-prerequisites[Prerequisites]
topic, deciding between the RPM and containerized methods and
xref:advanced-cloud-providers[choosing from the on-premises or cloud scenarios],
you can continue in this topic to
xref:configuring-ansible[Configuring Ansible Inventory Files].
topic and deciding between the RPM and containerized methods, you can continue
in this topic to xref:configuring-ansible[Configuring Ansible Inventory Files].

ifdef::openshift-origin[]
[[advanced-cloud-providers]]
=== Cloud installation

@@ -114,7 +112,8 @@ is no longer supported. For the Red Hat OpenStack 13 release, this process is re
link:https://github.com/openshift/openshift-ansible/tree/master/playbooks/openstack[Ansible driven deployment solution].
For automated installations, please follow that guide.
====
endif::[]
endif::openshift-enterprise[]
endif::openshift-origin[]

[[configuring-ansible]]
== Configuring Ansible Inventory Files
@@ -1301,20 +1300,19 @@ xref:../../architecture/networking/network_plugins.adoc#openshift-sdn[OpenShift
master.example.com
----

In order to ensure that your masters are not burdened with running pods, they
are automatically marked unschedulable by default by the installer, meaning that
new pods cannot be placed on the hosts. This is the same as setting the
`openshift_schedulable=false` host variable.

You can manually set a master host to schedulable during installation using the
`openshift_schedulable=true` host variable, though this is not recommended in
production environments:
In previous versions of {product-title}, master hosts were marked unschedulable
by default by the installer, meaning that new pods could not be placed on the
hosts. Starting with {product-title} 3.9, however, masters must be marked
schedulable by setting the `openshift_schedulable=true` variable for the host:

----
[nodes]
master.example.com openshift_schedulable=true
----

This change is mainly so that the web console, which used to run as part of the
master itself, can instead be run as a pod deployed to the master.

If you want to change the schedulability of a host post-installation, see
xref:../../admin_guide/manage_nodes.adoc#marking-nodes-as-unschedulable-or-schedulable[Marking Nodes as Unschedulable or Schedulable].

@@ -1384,10 +1382,10 @@ you need to xref:advanced-install-configuring-registry-location[specify the desi
in the *_/etc/ansible/hosts_* file.

As described in xref:marking-masters-as-unschedulable-nodes[Configuring
Schedulability on Masters], master hosts are marked unschedulable by default. If
Schedulability on Masters], master hosts are marked schedulable by default. If
you label a master host with `region=infra` and have no other dedicated
infrastructure nodes, you must also explicitly mark these master hosts as
schedulable. Otherwise, the registry and router pods cannot be placed anywhere:
infrastructure nodes, the master hosts must also be marked as schedulable.
Otherwise, the registry and router pods cannot be placed anywhere:

----
[nodes]
@@ -1523,8 +1521,7 @@ xref:../../install_config/redeploying_certificates.adoc#install-config-redeployi
[[advanced-install-cluster-metrics]]
=== Configuring Cluster Metrics

Starting with {product-title} 3.7, cluster metrics are set to deploy
automatically by default during installation:
Cluster metrics are set to deploy automatically by default during installation:

[NOTE]
====
@@ -1629,13 +1626,11 @@ The remote volume path using the following options would be
Cluster logging is not set to automatically deploy by default. Set the
following to enable cluster logging when using the advanced installation method:

====
----
[OSEv3:vars]

openshift_logging_install_logging=true
----
====

[[advanced-installation-logging-storage]]
==== Configuring Logging Storage
@@ -1717,9 +1712,8 @@ The remote volume path using the following options would be
[[enabling-service-catalog]]
=== Configuring the Service Catalog

Starting with {product-title} 3.7, the
xref:../../architecture/service_catalog/index.adoc#architecture-additional-concepts-service-catalog[service
catalog] is enabled by default during installation. Enabling the service broker
The
xref:../../architecture/service_catalog/index.adoc#architecture-additional-concepts-service-catalog[service catalog] is enabled by default during installation. Enabling the service broker
allows service brokers to be registered with the catalog.

[NOTE]
@@ -1742,9 +1736,9 @@ as well; see xref:configuring-openshift-ansible-broker[Configuring the OpenShift
[[configuring-openshift-ansible-broker]]
=== Configuring the OpenShift Ansible Broker

Starting with {product-title} 3.7, the
xref:../../architecture/service_catalog/ansible_service_broker.adoc#arch-ansible-service-broker[OpenShift
Ansible broker] (OAB) is enabled by default. However, further configuration may be required for use.
The
xref:../../architecture/service_catalog/ansible_service_broker.adoc#arch-ansible-service-broker[OpenShift Ansible broker] (OAB) is enabled by default during installation. However,
further configuration may be required for use.

[[configuring-oab-storage]]
==== Configuring Persistent Storage for the OpenShift Ansible Broker
@@ -1755,9 +1749,30 @@ using persistent volumes (PVs) to function. If no PV is available, etcd will
wait until the PV can be satisfied. The OAB application will enter a `CrashLoop`
state until its etcd instance is available.

Some Ansible playbook bundles (APBs) may also require a PV for their own usage.
Two APBs are currently provided with {product-title} 3.7: MediaWiki and
PostgreSQL. Both of these require their own PV to deploy.
Some Ansible playbook bundles (APBs) also require a PV for their own usage in
order to deploy. For example, each of the database APBs has two plans: the
Development plan uses ephemeral storage and does not require a PV, while the
Production plan uses persistent storage and does require a PV.

[options="header"]
|===
|APB |PV Required?

|*postgresql-apb*
|Yes, but only for the Production plan

|*mysql-apb*
|Yes, but only for the Production plan

|*mariadb-apb*
|Yes, but only for the Production plan

|*mediawiki-apb*
|Yes

|===

To configure persistent storage for the OAB:

[NOTE]
====
@@ -1766,8 +1781,6 @@ but
xref:../../install_config/persistent_storage/index.adoc#install-config-persistent-storage-index[other persistent storage providers] can be used instead.
====

To configure persistent storage for the OAB:

. In your inventory file, add `nfs` to the `[OSEv3:children]` section to enable
the `[nfs]` group:
+
@@ -1837,7 +1850,8 @@ ansible_service_broker_local_registry_whitelist=['.*-apb$']
[[configuring-template-service-broker]]
=== Configuring the Template Service Broker

Starting with {product-title} 3.7, the xref:../../architecture/service_catalog/template_service_broker.adoc#arch-template-service-broke[template service broker] (TSB) is enabled by default.
The
xref:../../architecture/service_catalog/template_service_broker.adoc#arch-template-service-broke[template service broker] (TSB) is enabled by default during installation.

To configure the TSB, one or more projects must be defined as the broker's
source namespace(s) for loading templates and image streams into the service
@@ -2453,10 +2467,10 @@ concern for the install restarting *docker* on the host.
--storage=ostree \
--set INVENTORY_FILE=/path/to/inventory \ <1>
ifdef::openshift-enterprise[]
registry.access.redhat.com/openshift3/ose-ansible:v3.7
registry.access.redhat.com/openshift3/ose-ansible:v3.9
endif::[]
ifdef::openshift-origin[]
docker.io/openshift/origin-ansible:v3.7
docker.io/openshift/origin-ansible:v3.9
endif::[]
----
<1> Specify the location on the local host for your inventory file.
@@ -2492,10 +2506,10 @@ installation, use the following command:
--set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/openshift-checks/pre-install.yml \ <1>
--set OPTS="-v" \ <2>
ifdef::openshift-enterprise[]
registry.access.redhat.com/openshift3/ose-ansible:v3.7
registry.access.redhat.com/openshift3/ose-ansible:v3.9
endif::[]
ifdef::openshift-origin[]
docker.io/openshift/origin-ansible:v3.7
docker.io/openshift/origin-ansible:v3.9
endif::[]
----
<1> Set `PLAYBOOK_FILE` to the full path of the playbook starting at the
@@ -2537,10 +2551,10 @@ $ docker run -t -u `id -u` \ <1>
-e PLAYBOOK_FILE=playbooks/deploy_cluster.yml \ <4>
-e OPTS="-v" \ <5>
ifdef::openshift-enterprise[]
registry.access.redhat.com/openshift3/ose-ansible:v3.7
registry.access.redhat.com/openshift3/ose-ansible:v3.9
endif::[]
ifdef::openshift-origin[]
docker.io/openshift/origin-ansible:v3.7
docker.io/openshift/origin-ansible:v3.9
endif::[]
----
<1> `-u `id -u`` makes the container run with the same UID as the current
@@ -2673,7 +2687,7 @@ following as root:
# oc get nodes

NAME STATUS AGE
master.example.com Ready,SchedulingDisabled 165d
master.example.com Ready 165d
node1.example.com Ready 165d
node2.example.com Ready 165d
----
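The same verification can be scripted rather than read by eye. A hedged sketch (the `check_nodes` helper is hypothetical, not part of openshift-ansible) that fails if any node's STATUS column does not begin with `Ready`:

```shell
# Hypothetical helper: read `oc get nodes` output on stdin and exit non-zero
# if any node's STATUS column does not begin with "Ready".
check_nodes() {
  awk 'NR > 1 && $2 !~ /^Ready/ { print "not ready: " $1; bad = 1 } END { exit bad }'
}

# Sample input mirroring the listing above; a live check would pipe
# `oc get nodes` output instead.
printf 'NAME STATUS AGE\nmaster.example.com Ready 165d\nnode1.example.com Ready 165d\nnode2.example.com Ready 165d\n' \
  | check_nodes && echo "all nodes ready"
```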
23 changes: 15 additions & 8 deletions install_config/install/disconnected_install.adoc
@@ -2,9 +2,9 @@
= Disconnected Installation
{product-author}
{product-version}
:latest-tag: v3.7.9
:latest-int-tag: v3.7.9
:latest-registry-console-tag: v3.7.9
:latest-tag: v3.9.3
:latest-int-tag: v3.9.3
:latest-registry-console-tag: v3.9.3
:data-uri:
:icons:
:experimental:
@@ -132,7 +132,8 @@ $ subscription-manager repos \
--enable="rhel-7-server-rpms" \
--enable="rhel-7-server-extras-rpms" \
--enable="rhel-7-fast-datapath-rpms" \
--enable="rhel-7-server-ose-3.7-rpms"
--enable="rhel-7-server-ansible-2.4-rpms" \
--enable="rhel-7-server-ose-3.9-rpms"
----

. The `yum-utils` command provides the *reposync* utility, which lets you mirror
@@ -167,7 +168,8 @@ $ for repo in \
rhel-7-server-rpms \
rhel-7-server-extras-rpms \
rhel-7-fast-datapath-rpms \
rhel-7-server-ose-3.7-rpms
rhel-7-server-ansible-2.4-rpms \
rhel-7-server-ose-3.9-rpms
do
reposync --gpgcheck -lm --repoid=${repo} --download_path=/path/to/repos
createrepo -v </path/to/repos/>${repo} -o </path/to/repos/>${repo}
@@ -478,9 +480,14 @@ name=rhel-7-fast-datapath-rpms
baseurl=http://<server_IP>/repos/rhel-7-fast-datapath-rpms
enabled=1
gpgcheck=0
[rhel-7-server-ose-3.7-rpms]
name=rhel-7-server-ose-3.7-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-ose-3.7-rpms
[rhel-7-server-ansible-2.4-rpms]
name=rhel-7-server-ansible-2.4-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-ansible-2.4-rpms
enabled=1
gpgcheck=0
[rhel-7-server-ose-3.9-rpms]
name=rhel-7-server-ose-3.9-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-ose-3.9-rpms
enabled=1
gpgcheck=0
----
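The stanzas above all follow one repeatable pattern, so they can be generated rather than typed by hand. A minimal sketch; the `make_repo_file` helper and the mirror hostname are hypothetical stand-ins for your `<server_IP>` repository server:

```shell
# Hypothetical generator: emit one .repo stanza per mirrored repository.
# The mirror URL is a placeholder; substitute your repository server.
make_repo_file() {
  for repo in "$@"; do
    printf '[%s]\nname=%s\nbaseurl=http://mirror.example.com/repos/%s\nenabled=1\ngpgcheck=0\n' \
      "$repo" "$repo" "$repo"
  done
}

make_repo_file rhel-7-server-rpms rhel-7-server-extras-rpms \
  rhel-7-fast-datapath-rpms rhel-7-server-ansible-2.4-rpms \
  rhel-7-server-ose-3.9-rpms
```

Redirect the output to a file under *_/etc/yum.repos.d/_* on each host that consumes the mirror.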