diff --git a/install_config/install/advanced_install.adoc b/install_config/install/advanced_install.adoc index 2691b70bb07e..00c4a681bb8b 100644
--- a/install_config/install/advanced_install.adoc
+++ b/install_config/install/advanced_install.adoc
@@ -32,6 +32,9 @@ endif::[] The host initiating the installation does not need to be intended for inclusion in the {product-title} cluster, but it can be.
+Alternatively, a
+xref:running-the-advanced-installation-system-container[containerized version of the installer] is available as a system container, which is currently a
+Technology Preview feature.
==== ifdef::openshift-enterprise[]
@@ -68,9 +71,59 @@ see the xref:../../scaling_performance/install_practices.adoc#scaling-performance-install-best-practices[Scaling and Performance Guide]. After following the instructions in the
-xref:../../install_config/install/prerequisites.adoc#install-config-install-prerequisites[Prerequisites] topic and
-deciding between the RPM and containerized methods, you can continue in this
-topic to xref:configuring-ansible[Configuring Ansible Inventory Files].
+xref:../../install_config/install/prerequisites.adoc#install-config-install-prerequisites[Prerequisites]
+topic, deciding between the RPM and containerized methods, and
+xref:advanced-cloud-providers[choosing between the on-premises and cloud scenarios],
+you can continue in this topic to
+xref:configuring-ansible[Configuring Ansible Inventory Files].
+
+[[advanced-cloud-providers]]
+=== Choosing Between On-premises and Cloud Scenarios
+
+Ansible automation playbooks for the supported cloud providers can assist with
+provisioning VMs in a cloud, defining your cloud-hosted infrastructure, and
+applying post-provision configuration. This advanced installation guide serves
+that purpose as well.
+
+ifdef::openshift-enterprise[]
+====
+Alternatively, Red Hat can operate a managed, dedicated, cloud-hosted
+infrastructure for you as a service. See the
+link:https://www.openshift.com/dedicated/index.html[OpenShift Dedicated]
+product offering for more details.
+====
+endif::[]
+
+==== OpenStack Provider
+
+To install {product-title} manually using the OpenStack CLI,
+see the
+link:https://access.redhat.com/documentation/en-us/reference_architectures/2017/html-single/deploying_and_managing_red_hat_openshift_container_platform_3.6_on_red_hat_openstack_platform_10[reference architecture],
+which applies to {product-title} 3.6 and Red Hat OpenStack Platform 10.
+
+As a prerequisite, you must provision VMs and configure the cloud
+infrastructure, including networking, storage, firewalls, and security groups.
+The reference architecture, the
+xref:../../install_config/install/prerequisites#prereq-cloud-provider-considerations[cloud provider considerations],
+and the link:https://github.com/openshift/openshift-ansible/tree/master/playbooks/openstack[Ansible playbooks]
+can assist with automating these configuration tasks. See also
+xref:../../install_config/configuring_openstack#install-config-configuring-openstack[Configuring for OpenStack]
+and
+xref:configuring-ansible[Configuring Ansible Inventory Files].
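+
+As a quick orientation, the following is a minimal sketch of an automated
+OpenStack provisioning run. It assumes the openshift-ansible repository is
+checked out locally and that an OpenStack-specific inventory has already been
+prepared as described in the linked playbooks' documentation; the playbook path
+shown is illustrative and can differ between releases:
+
+----
+$ git clone https://github.com/openshift/openshift-ansible
+$ cd openshift-ansible
+$ ansible-playbook -i /path/to/openstack/inventory \
+    playbooks/openstack/openshift-cluster/provision_install.yml
+----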
+
+ifdef::openshift-enterprise[]
+
+[IMPORTANT]
+====
+The reference architecture for automated installations based on
+link:https://docs.openstack.org/heat/latest[OpenStack Heat] templates for
+link:https://access.redhat.com/documentation/en-us/reference_architectures/2017/html/deploying_red_hat_openshift_container_platform_3.4_on_red_hat_openstack_platform_10[{product-title} 3.4 on Red Hat OpenStack Platform 10]
+is no longer supported. For the Red Hat OpenStack Platform 13 time frame, it is being replaced by the
+link:https://github.com/openshift/openshift-ansible/tree/master/playbooks/openstack[Ansible-driven deployment solution].
+For automated installations, follow that guide instead.
+====
+endif::[]
[[configuring-ansible]] == Configuring Ansible Inventory Files
@@ -203,7 +256,7 @@ enables rolling, full system restarts and also works for single master clusters. |`os_sdn_network_plugin_name` |This variable configures which
-xref:../../architecture/additional_concepts/sdn.adoc#architecture-additional-concepts-sdn[OpenShift SDN plug-in] to
+xref:../../architecture/networking/sdn.adoc#architecture-additional-concepts-sdn[OpenShift SDN plug-in] to
use for the pod network, which defaults to `redhat/openshift-ovs-subnet` for the standard SDN plug-in. Set the variable to `redhat/openshift-ovs-multitenant` to use the multitenant plug-in.
@@ -254,7 +307,7 @@ options] in the OAuth configuration. See xref:advanced-install-session-options[C |This variable configures the subnet in which xref:../../architecture/core_concepts/pods_and_services.adoc#services[services] will be created within the
-xref:../../architecture/additional_concepts/sdn.adoc#architecture-additional-concepts-sdn[{product-title}
+xref:../../architecture/networking/sdn.adoc#architecture-additional-concepts-sdn[{product-title}
SDN]. This network block should be private and must not conflict with any existing network blocks in your infrastructure to which pods, nodes, or the master may require access to, or the installation will fail. Defaults to
@@ -262,7 +315,7 @@ master may require access to, or the installation will fail. Defaults to |`openshift_master_default_subdomain` |This variable overrides the default subdomain to use for exposed
-xref:../../architecture/core_concepts/routes.adoc#architecture-core-concepts-routes[routes].
+xref:../../architecture/networking/routes.adoc#architecture-core-concepts-routes[routes].
|`openshift_master_image_policy_config` |Sets `imagePolicyConfig` in the master configuration. See xref:../../install_config/master_node_configuration.adoc#master-config-image-config[Image Configuration] for details.
@@ -273,7 +326,7 @@ xref:../../architecture/core_concepts/pods_and_services.adoc#service-proxy-mode[ proxy mode] to use: either `iptables` for the default, pure-`iptables` implementation, or `userspace` for the user space proxy.
-|`openshift_router_selector`
+|`openshift_hosted_router_selector`
|Default node selector for automatically deploying router pods. See xref:configuring-node-host-labels[Configuring Node Host Labels] for details.
@@ -287,7 +340,7 @@ when placing pods. |`osm_cluster_network_cidr` | This variable overrides the
-xref:../../architecture/additional_concepts/sdn.adoc#sdn-design-on-masters[SDN
+xref:../../architecture/networking/sdn.adoc#sdn-design-on-masters[SDN
cluster network] CIDR block. This is the network from which pod IPs are assigned.
This network block should be a private block and must not conflict with existing network blocks in your infrastructure to which pods, nodes, or the @@ -299,7 +352,7 @@ master configuration]. |`osm_host_subnet_length` |This variable specifies the size of the per host subnet allocated for pod IPs by -xref:../../architecture/additional_concepts/sdn.adoc#sdn-design-on-masters[{product-title} +xref:../../architecture/networking/sdn.adoc#sdn-design-on-masters[{product-title} SDN]. Defaults to `9` which means that a subnet of size /23 is allocated to each host; for example, given the default 10.128.0.0/14 cluster network, this will allocate 10.128.0.0/23, 10.128.2.0/23, 10.128.4.0/23, and so on. This cannot be @@ -325,13 +378,17 @@ the *docker* configuration. For any of these registries, secure sockets layer *docker* configuration. Block the listed registries. Setting this to `all` blocks everything not in the other variables. -|`openshift_hosted_metrics_public_url` +|`openshift_metrics_hawkular_hostname` |This variable sets the host name for integration with the metrics console by overriding `metricsPublicURL` in the master configuration for cluster metrics. If you alter this variable, ensure the host name is accessible via your router. See xref:advanced-install-cluster-metrics[Configuring Cluster Metrics] for details. +|`openshift_template_service_broker_namespaces` +|This variable enables the template service broker by specifying one or more +namespaces whose templates will be served by the broker. + |`openshift_image_tag` |Use this variable to specify a container image tag to install or configure. @@ -448,18 +505,12 @@ invalid if used. These values override other settings in node configuration which may cause invalid configurations. Example usage: *{'image-gc-high-threshold': ['90'],'image-gc-low-threshold': ['80']}*. -|`openshift_hosted_router_selector` -|Default node selector for automatically deploying router pods. See -xref:configuring-node-host-labels[Configuring Node Host Labels] for details. - -|`openshift_registry_selector` -|Default node selector for automatically deploying registry pods. See -xref:configuring-node-host-labels[Configuring Node Host Labels] for details. - -|`*openshift_docker_options*` -|This variable configures additional Docker options within *_/etc/sysconfig/docker_*, such as options used in xref:../../install_config/install/host_preparation.adoc#managing-docker-container-logs[Managing Container Logs]. - -Example usage: *"--log-driver json-file --log-opt max-size=1M --log-opt max-file=3"*. +|`openshift_docker_options` +|This variable configures additional `docker` options within +*_/etc/sysconfig/docker_*, such as options used in +xref:../../install_config/install/host_preparation.adoc#managing-docker-container-logs[Managing Container Logs]. Example usage: *"--log-driver json-file --log-opt max-size=1M +--log-opt max-file=3"*. Do not use when +xref:advanced-install-docker-system-container[running `docker` as a system container]. |`openshift_schedulable` |This variable configures whether the host is marked as a schedulable node, @@ -577,7 +628,14 @@ inventory file. For example: openshift_disable_check=memory_availability,disk_availability ---- -A similar set of checks meant to run for diagnostic on existing clusters can be xref:../../admin_guide/diagnostics_tool.adoc#admin-guide-diagnostics-tool[Additional Diagnostic Checks via Ansible]. 
Another set of checks for checking certificate expiration can be found in xref:../../install_config/redeploying_certificates.adoc#install-config-redeploying-certificates[Redeploying Certificates].
+[NOTE]
+====
+A similar set of health checks meant to run for diagnostics on existing clusters
+can be found in
+xref:../../admin_guide/diagnostics_tool.adoc#admin-guide-health-checks-via-ansible-playbook[Ansible-based Health Checks]. Another set of checks for certificate expiration can be
+found in
+xref:../../install_config/redeploying_certificates.adoc#install-config-redeploying-certificates[Redeploying Certificates].
+====
[[advanced-install-configuring-system-containers]] === Configuring System Containers
@@ -716,8 +774,9 @@ If you are using an image registry other than the default at *_/etc/ansible/hosts_* file. ----
-oreg_url=example.com/openshift3/ose-${component}:${version}
+oreg_url={registry}/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
+openshift_docker_additional_registries={registry}
---- .Registry Variables
@@ -731,8 +790,17 @@ openshift_examples_modify_imagestreams=true |`*openshift_examples_modify_imagestreams*` |Set to `true` if pointing to a registry other than the default. Modifies the image stream location to the value of `*oreg_url*`.
+|`*openshift_docker_additional_registries*`
+|Specify the additional registry or registries.
|===
+For example:
+----
+oreg_url=example.com/openshift3/ose-${component}:${version}
+openshift_examples_modify_imagestreams=true
+openshift_docker_additional_registries=example.com
+----
+
[[advanced-install-registry-storage]] ==== Configuring Registry Storage
@@ -819,6 +887,241 @@ region endpoint parameter: openshift_hosted_registry_storage_s3_regionendpoint=https://myendpoint.example.com/ ----
+
+[[advanced-install-glusterfs-persistent-storage]]
+=== Configuring GlusterFS Persistent Storage
+
+GlusterFS can be configured to provide
+xref:../../architecture/additional_concepts/storage.adoc#architecture-additional-concepts-storage[persistent storage] and
+xref:../../install_config/storage_examples/gluster_dynamic_example.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[dynamic provisioning] for {product-title}. It can be used both containerized within
+{product-title} and non-containerized on its own nodes.
+
+[[advanced-install-containerized-glusterfs-persistent-storage]]
+==== Configuring Containerized GlusterFS Persistent Storage
+
+ifdef::openshift-enterprise[]
+This option utilizes
+link:https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/container-native_storage_for_openshift_container_platform/[Red Hat Container Native Storage (CNS)] for configuring containerized GlusterFS persistent storage in {product-title}.
+endif::[]
+ifdef::openshift-origin[]
+See link:https://github.com/gluster/gluster-kubernetes[Running Containerized GlusterFS in Kubernetes] for additional information on containerized storage
+using GlusterFS.
+endif::[]
+
+[IMPORTANT]
+====
+See
+xref:../../install_config/install/prerequisites.adoc#prereq-containerized-glusterfs-considerations[Containerized GlusterFS Considerations] for specific host preparations and prerequisites.
+====
+
+. In your inventory file, add `glusterfs` in the `[OSEv3:children]` section to
+enable the `[glusterfs]` group:
++
+----
+[OSEv3:children]
+masters
+nodes
+glusterfs
+----
+
+. 
(Optional) Include any of the following role variables in the `[OSEv3:vars]`
+section you wish to change:
++
+----
+[OSEv3:vars]
+openshift_storage_glusterfs_namespace=glusterfs <1>
+openshift_storage_glusterfs_name=storage <2>
+----
+<1> The project (namespace) to host the storage pods. Defaults to `glusterfs`.
+<2> A name to identify the GlusterFS cluster, which will be used in resource names.
+Defaults to `storage`.
+
+. Add a `[glusterfs]` section with entries for each storage node that will host
+the GlusterFS storage and include the `glusterfs_ip` and
+`glusterfs_devices` parameters in the form:
++
+----
+<hostname_or_ip> glusterfs_ip=<ip_address> glusterfs_devices='[ "<device1>", "<device2>", ... ]'
+----
++
+For example:
++
+----
+[glusterfs]
+192.168.10.11 glusterfs_ip=192.168.10.11 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
+192.168.10.12 glusterfs_ip=192.168.10.12 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
+192.168.10.13 glusterfs_ip=192.168.10.13 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
+----
++
+Set `glusterfs_devices` to a list of raw block devices that will be completely
+managed as part of a GlusterFS cluster. There must be at least one device
+listed. Each device must be bare, with no partitions or LVM PVs. Set
+`glusterfs_ip` to the IP address that will be used by pods to communicate with
+the GlusterFS node.
+
+. Add the hosts listed under `[glusterfs]` to the `[nodes]` group as well:
++
+----
+[nodes]
+192.168.10.11
+192.168.10.12
+192.168.10.13
+----
+
+. After completing the cluster installation per
+xref:running-the-advanced-installation[Running the Advanced Installation], run
+the following from a master to verify the necessary objects were successfully
+created:
+
+.. Verify that the GlusterFS `StorageClass` was created:
++
+----
+# oc get storageclass
+NAME TYPE
+glusterfs-storage kubernetes.io/glusterfs
+----
+
+.. Verify that the route was created:
++
+----
+# oc get routes
+NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
+heketi-glusterfs-route heketi-glusterfs-default.cloudapps.example.com heketi-glusterfs None
+----
++
+[NOTE]
+====
+The name for the route will be `heketi-glusterfs-route` unless the default
+`glusterfs` value was overridden using the `openshift_storage_glusterfs_name`
+variable in the inventory file.
+====
+
+.. Use `curl` to verify the route works correctly:
++
+----
+# curl http://heketi-glusterfs-default.cloudapps.example.com/hello
+Hello from Heketi.
+----
+
+After successful installation, see
+link:https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/container-native_storage_for_openshift_container_platform/chap-documentation-red_hat_gluster_storage_container_native_with_openshift_platform-gluster_pod_operations[Operations on a Red Hat Gluster Storage Pod in an OpenShift Environment] to check the status of the GlusterFS clusters.
+
+xref:../../install_config/storage_examples/gluster_dynamic_example.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[Dynamic provisioning] of GlusterFS volumes can occur by
+xref:../../install_config/storage_examples/gluster_dynamic_example.adoc#create-a-pvc-ro-request-storage-for-your-application[creating a PVC to request storage].
+
+[[advanced-install-configuring-openshift-container-registry]]
+=== Configuring the OpenShift Container Registry
+
+Additional configuration options are available at installation time for the
+xref:../../architecture/infrastructure_components/image_registry.adoc#integrated-openshift-registry[OpenShift Container Registry].
+
+If no registry storage options are used, the default {product-title} registry is
+ephemeral and all data will be lost if the pod no longer exists. {product-title}
+also supports a single-node NFS-backed registry, but this option lacks
+redundancy and reliability compared with the GlusterFS-backed option.
+
+[[advanced-install-containerized-glusterfs-backed-registry]]
+==== Configuring a Containerized GlusterFS-Backed Registry
+
+Similar to
+xref:advanced-install-containerized-glusterfs-persistent-storage[configuring containerized GlusterFS for persistent storage], GlusterFS storage can be
+configured and deployed for an OpenShift Container Registry during the initial
+installation of the cluster to offer redundant and more reliable storage for the
+registry.
+
+[IMPORTANT]
+====
+See
+xref:../../install_config/install/prerequisites.adoc#prereq-containerized-glusterfs-considerations[Containerized
+GlusterFS Considerations] for specific host preparations and prerequisites.
+====
+
+Configuration of storage for an OpenShift Container Registry is very similar to
+configuration for GlusterFS persistent storage in that it can be either
+containerized or non-containerized. For this containerized method, the following
+exceptions and additions apply:
+
+. In your inventory file, add `glusterfs_registry` in the `[OSEv3:children]` section
+to enable the `[glusterfs_registry]` group:
++
+----
+[OSEv3:children]
+masters
+nodes
+glusterfs_registry
+----
+
+. Add the following role variable in the `[OSEv3:vars]` section to enable the
+GlusterFS-backed registry, provided that the `glusterfs_registry` group name and
+the `[glusterfs_registry]` group exist:
++
+----
+[OSEv3:vars]
+openshift_hosted_registry_storage_kind=glusterfs
+----
+
+. It is recommended to have at least three registry pods, so set the following
+role variable in the `[OSEv3:vars]` section:
++
+----
+openshift_hosted_registry_replicas=3
+----
+
+. If you want to specify the volume size for the GlusterFS-backed registry, set
+the following role variable in the `[OSEv3:vars]` section:
++
+----
+openshift_hosted_registry_storage_volume_size=10Gi
+----
++
+If unspecified, the volume size defaults to `5Gi`.
+
+. The installer will deploy the OpenShift Container Registry pods and associated
+routers on nodes containing the `region=infra` label. Add this label on at least
+one node entry in the `[nodes]` section; otherwise, the registry deployment will
+fail. For example:
++
+----
+[nodes]
+192.168.10.14 openshift_schedulable=True openshift_node_labels="{'region': 'infra'}"
+----
+
+. Add a `[glusterfs_registry]` section with entries for each storage node that
+will host the GlusterFS-backed registry and include the `glusterfs_ip` and
+`glusterfs_devices` parameters in the form:
++
+----
+<hostname_or_ip> glusterfs_ip=<ip_address> glusterfs_devices='[ "<device1>", "<device2>", ... ]'
+----
++
+For example:
++
+----
+[glusterfs_registry]
+192.168.10.14 glusterfs_ip=192.168.10.14 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
+192.168.10.15 glusterfs_ip=192.168.10.15 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
+192.168.10.16 glusterfs_ip=192.168.10.16 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
+----
++
+Set `glusterfs_devices` to a list of raw block devices that will be completely
+managed as part of a GlusterFS cluster. There must be at least one device
+listed. Each device must be bare, with no partitions or LVM PVs. Set
+`glusterfs_ip` to the IP address that will be used by pods to communicate with
+the GlusterFS node.
+
+. 
Add the hosts listed under `[glusterfs_registry]` to the `[nodes]` group as well: ++ +---- +[nodes] +192.168.10.14 +192.168.10.15 +192.168.10.16 +---- + +After successful installation, see +link:https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/container-native_storage_for_openshift_container_platform/chap-documentation-red_hat_gluster_storage_container_native_with_openshift_platform-gluster_pod_operations[Operations on a Red Hat Gluster Storage Pod in an OpenShift Environment] to check the +status of the GlusterFS clusters. + [[advanced-install-configuring-global-proxy]] === Configuring Global Proxy Options @@ -899,6 +1202,26 @@ value; you only need to set this if you want your `git clone` operations to use a different value. |=== +[[advanced-install-no-proxy-list]] +If any of: + +- `openshift_no_proxy` +- `openshift_https_proxy` +- `openshift_http_proxy` + +are set, then all cluster hosts will have an automatically generated `NO_PROXY` +environment variable injected into several service configuration scripts. The +default `.svc` domain and your cluster's `dns_domain` (typically +`.cluster.local`) will also be added. + +[NOTE] +==== +Setting `openshift_generate_no_proxy_hosts` to `false` in your inventory will +not disable the automatic addition of the `.svc` domain and the cluster domain. +These are required and added automatically if any of the above listed proxy +parameters are set. +==== + ifdef::openshift-enterprise,openshift-origin[] [[advanced-install-configuring-firewalls]] === Configuring the Firewall @@ -938,7 +1261,7 @@ endif::[] Any hosts you designate as masters during the installation process should also be configured as nodes so that the masters are configured as part of the -xref:../../architecture/additional_concepts/networking.adoc#openshift-sdn[OpenShift SDN]. You must do so by adding entries for these hosts to the `[nodes]` section: +xref:../../architecture/networking/network_plugins.adoc#openshift-sdn[OpenShift SDN]. You must do so by adding entries for these hosts to the `[nodes]` section: ---- [nodes] @@ -969,7 +1292,7 @@ You can assign xref:../../architecture/core_concepts/pods_and_services.adoc#labels[labels] to node hosts during the Ansible install by configuring the *_/etc/ansible/hosts_* file. Labels are useful for determining the placement of pods onto nodes using -the xref:../../admin_guide/scheduler.adoc#configurable-predicates[scheduler]. +the xref:../../admin_guide/scheduling/scheduler.adoc#configurable-predicates[scheduler]. Other than `region=infra` (discussed in xref:configuring-dedicated-infrastructure-nodes[Configuring Dedicated Infrastructure Nodes]), the actual label names and values are arbitrary and can be assigned however you see fit per your cluster's requirements. @@ -1173,17 +1496,12 @@ following to enable cluster metrics when using the advanced install: ---- [OSEv3:vars] -openshift_hosted_metrics_deploy=true <1> -openshift_hosted_metrics_deployer_prefix=registry.example.com:8888/openshift3/ <2> -openshift_hosted_metrics_deployer_version=v3.5 <3> +openshift_metrics_install_metrics=true ---- -<1> Enables the metrics deployment. -<2> Replace `registry.example.com:8888/openshift3/` with the prefix for the component images. -<3> Replace with the desired image version. The {product-title} web console uses the data coming from the Hawkular Metrics service to display its graphs. 
The metrics public URL can be set during cluster -installation using the `openshift_hosted_metrics_public_url` Ansible variable, +installation using the `openshift_metrics_hawkular_hostname` Ansible variable, which defaults to: `\https://hawkular-metrics.{{openshift_master_default_subdomain}}/hawkular/metrics` @@ -1214,12 +1532,12 @@ be *_/exports/metrics_*: ---- [OSEv3:vars] -openshift_hosted_metrics_storage_kind=nfs -openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce'] -openshift_hosted_metrics_storage_nfs_directory=/exports -openshift_hosted_metrics_storage_nfs_options='*(rw,root_squash)' -openshift_hosted_metrics_storage_volume_name=metrics -openshift_hosted_metrics_storage_volume_size=10Gi +openshift_metrics_storage_kind=nfs +openshift_metrics_storage_access_modes=['ReadWriteOnce'] +openshift_metrics_storage_nfs_directory=/exports +openshift_metrics_storage_nfs_options='*(rw,root_squash)' +openshift_metrics_storage_volume_name=metrics +openshift_metrics_storage_volume_size=10Gi ---- [discrete] @@ -1232,12 +1550,12 @@ To use an external NFS volume, one must already exist with a path of ---- [OSEv3:vars] -openshift_hosted_metrics_storage_kind=nfs -openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce'] -openshift_hosted_metrics_storage_host=nfs.example.com -openshift_hosted_metrics_storage_nfs_directory=/exports -openshift_hosted_metrics_storage_volume_name=metrics -openshift_hosted_metrics_storage_volume_size=10Gi +openshift_metrics_storage_kind=nfs +openshift_metrics_storage_access_modes=['ReadWriteOnce'] +openshift_metrics_storage_host=nfs.example.com +openshift_metrics_storage_nfs_directory=/exports +openshift_metrics_storage_volume_name=metrics +openshift_metrics_storage_volume_size=10Gi ---- The remote volume path using the following options would be @@ -1265,19 +1583,14 @@ following to enable cluster logging when using the advanced installation method: ---- [OSEv3:vars] -openshift_hosted_logging_deploy=true <1> -openshift_hosted_logging_deployer_prefix=registry.example.com:8888/openshift3/ <2> -openshift_hosted_logging_deployer_version=v3.5 <3> +openshift_logging_install_logging=true ---- -<1> Enables the logging stack. -<2> Replace `registry.example.com:8888/openshift3/` with your desired prefix. -<3> Replace with the desired image version. [[advanced-installation-logging-storage]] ==== Configuring Logging Storage -The `openshift_hosted_logging_storage_kind` variable must be set in order to use -persistent storage for logging. If `openshift_hosted_logging_storage_kind` is +The `openshift_logging_storage_kind` variable must be set in order to use +persistent storage for logging. If `openshift_logging_storage_kind` is not set, then cluster logging data is stored in an `emptyDir` volume, which will be deleted when the Elasticsearch pod terminates. @@ -1296,12 +1609,12 @@ the `[nfs]` host group. 
For example, the volume path using these options would b ---- [OSEv3:vars] -openshift_hosted_logging_storage_kind=nfs -openshift_hosted_logging_storage_access_modes=['ReadWriteOnce'] -openshift_hosted_logging_storage_nfs_directory=/exports -openshift_hosted_logging_storage_nfs_options='*(rw,root_squash)' -openshift_hosted_logging_storage_volume_name=logging -openshift_hosted_logging_storage_volume_size=10Gi +openshift_logging_storage_kind=nfs +openshift_logging_storage_access_modes=['ReadWriteOnce'] +openshift_logging_storage_nfs_directory=/exports +openshift_logging_storage_nfs_options='*(rw,root_squash)' +openshift_logging_storage_volume_name=logging +openshift_logging_storage_volume_size=10Gi ---- [discrete] @@ -1314,12 +1627,12 @@ To use an external NFS volume, one must already exist with a path of ---- [OSEv3:vars] -openshift_hosted_logging_storage_kind=nfs -openshift_hosted_logging_storage_access_modes=['ReadWriteOnce'] -openshift_hosted_logging_storage_host=nfs.example.com -openshift_hosted_logging_storage_nfs_directory=/exports -openshift_hosted_logging_storage_volume_name=logging -openshift_hosted_logging_storage_volume_size=10Gi +openshift_logging_storage_kind=nfs +openshift_logging_storage_access_modes=['ReadWriteOnce'] +openshift_logging_storage_host=nfs.example.com +openshift_logging_storage_nfs_directory=/exports +openshift_logging_storage_volume_name=logging +openshift_logging_storage_volume_size=10Gi ---- The remote volume path using the following options would be @@ -1335,9 +1648,218 @@ xref:../../install_config/persistent_storage/dynamically_provisioning_pvs.adoc#i ---- [OSEv3:vars] -openshift_hosted_logging_storage_kind=dynamic +openshift_logging_storage_kind=dynamic +---- + +[[enabling-service-catalog]] +=== Enabling the Service Catalog + +[NOTE] +==== +Enabling the service catalog is a Technology Preview feature only. +ifdef::openshift-enterprise[] +Technology Preview features are not +supported with Red Hat production service level agreements (SLAs), might not be +functionally complete, and Red Hat does not recommend to use them for +production. These features provide early access to upcoming product features, +enabling customers to test functionality and provide feedback during the +development process. + +For more information on Red Hat Technology Preview features support scope, see +https://access.redhat.com/support/offerings/techpreview/. +endif::[] +==== + +Enabling the +xref:../../architecture/service_catalog/index.adoc#architecture-additional-concepts-service-catalog[service catalog] allows service brokers to be registered with the catalog. The web +console is also configured to enable an updated landing page for browsing the +catalog. + +To enable the service catalog, add the following in your inventory file's +`[OSEv3:vars]` section: + +---- +openshift_enable_service_catalog=true +ifdef::openshift-origin[] +openshift_service_catalog_image_prefix=openshift/origin- +openshift_service_catalog_image_version=latest +endif::[] +---- + +When the service catalog is enabled, the web console shows the updated landing +page but still uses the normal image stream and template behavior. The Ansible +service broker is also enabled; see +xref:configuring-ansible-service-broker[Configuring the Ansible Service Broker] +for more details. The template service broker (TSB) is not deployed by default; +see xref:configuring-template-service-broker[Configuring the Template Service Broker] for more information. 
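+
+After the cluster installation completes, you can optionally confirm that the
+service catalog components are running. This is a minimal sketch; the project
+name shown (`kube-service-catalog`) is an assumption and may differ in your
+environment:
+
+----
+$ oc get pods -n kube-service-catalog
+$ oc get pods --all-namespaces | grep -i service-catalog
+----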
+ +[[configuring-ansible-service-broker]] +=== Configuring the Ansible Service Broker + +[NOTE] +==== +Enabling the Ansible service broker is a Technology Preview feature only. +ifdef::openshift-enterprise[] +Technology Preview features are not +supported with Red Hat production service level agreements (SLAs), might not be +functionally complete, and Red Hat does not recommend to use them for +production. These features provide early access to upcoming product features, +enabling customers to test functionality and provide feedback during the +development process. + +For more information on Red Hat Technology Preview features support scope, see +https://access.redhat.com/support/offerings/techpreview/. +endif::[] +==== + +If you have xref:enabling-service-catalog[enabled the service catalog], the +xref:../../architecture/service_catalog/ansible_service_broker.adoc#arch-ansible-service-broker[Ansible service broker] (ASB) is also enabled. + +The ASB deploys its own etcd instance separate from the etcd used by the rest of +the {product-title} cluster. The ASB's etcd instance requires separate storage +using persistent volumes (PVs) to function. If no PV is available, etcd will +wait until the PV can be satisfied. The ASB application will enter a `CrashLoop` +state until its etcd instance is available. + +[NOTE] +==== +The following example shows usage of an NFS host to provide the required PVs, +but +xref:../../install_config/persistent_storage/index.adoc#install-config-persistent-storage-index[other persistent storage providers] can be used instead. +==== + +Some Ansible playbook bundles (APBs) may also require a PV for their own usage. +Two APBs are currently provided with {product-title} 3.6: MediaWiki and +PostgreSQL. Both of these require their own PV to deploy. + +To configure the ASB: + +. In your inventory file, add `nfs` to the `[OSEv3:children]` section to enable +the `[nfs]` group: ++ +---- +[OSEv3:children] +masters +nodes +nfs +---- + +. Add a `[nfs]` group section and add the host name for the system that will +be the NFS host: ++ +---- +[nfs] +master1.example.com +---- + +. In addition to the settings from xref:enabling-service-catalog[Enabling the +Service Catalog], add the following in the `[OSEv3:vars]` +section: ++ +---- +openshift_hosted_etcd_storage_kind=nfs +openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)" +openshift_hosted_etcd_storage_nfs_directory=/opt/osev3-etcd <1> +openshift_hosted_etcd_storage_volume_name=etcd-vol2 <1> +openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"] +openshift_hosted_etcd_storage_volume_size=1G +openshift_hosted_etcd_storage_labels={'storage': 'etcd'} + +ifdef::openshift-origin[] +ansible_service_broker_image_prefix=openshift/ +ansible_service_broker_registry_url="registry.access.redhat.com" +ansible_service_broker_registry_user= <2> +ansible_service_broker_registry_password= <2> +ansible_service_broker_registry_organization= <2> +endif::[] +---- +<1> An NFS volume will be created with path `/` on the +host within the `[nfs]` group. For example, the volume path using these options +would be *_/opt/osev3-etcd/etcd-vol2_*. +ifdef::openshift-origin[] +<2> Only required if `ansible_service_broker_registry_url` is set to a registry that +requires authentication for pulling APBs. +endif::[] ++ +These settings create a persistent volume that is attached to the ASB's etcd +instance during cluster installation. 
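+
+After the cluster installation completes, you can optionally verify that the
+persistent volume for the ASB's etcd instance was created and bound. This is a
+minimal sketch; volume and claim names depend on your configuration:
+
+----
+$ oc get pv
+$ oc get pvc --all-namespaces | grep -i etcd
+----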
+ +[[configuring-template-service-broker]] +=== Configuring the Template Service Broker + +[NOTE] +==== +Enabling the template service broker is a Technology Preview feature only. +ifdef::openshift-enterprise[] +Technology Preview features are not +supported with Red Hat production service level agreements (SLAs), might not be +functionally complete, and Red Hat does not recommend to use them for +production. These features provide early access to upcoming product features, +enabling customers to test functionality and provide feedback during the +development process. + +For more information on Red Hat Technology Preview features support scope, see +https://access.redhat.com/support/offerings/techpreview/. +endif::[] +==== + +If you have xref:enabling-service-catalog[enabled the service catalog], you can +also enable the +xref:../../architecture/service_catalog/template_service_broker.adoc#arch-template-service-broke[template service broker] (TSB). + +To configure the TSB: + +. One or more projects must be defined as the broker's source +namespace(s) for loading templates and image streams into the service catalog. +Set the desired projects by modifying the following in your inventory file's +`[OSEv3:vars]` section: ++ +---- +openshift_template_service_broker_namespaces=['openshift','myproject'] ---- +. The installer currently does not automate installation of the TSB, so additional +steps must be run manually after the cluster installation has completed. +Continue with the rest of the preparation of your inventory file, then see +xref:running-the-advanced-installation[Running the Advanced Installation] for +the additional steps to deploy the TSB. + +[[configuring-web-console-customization]] +=== Configuring Web Console Customization + +The following Ansible variables set master configuration options for customizing +the web console. See +xref:../../install_config/web_console_customization.adoc#install-config-web-console-customization[Customizing the Web Console] for more details on these customization options. + +.Web Console Customization Variables +[options="header"] +|=== + +|Variable |Purpose + +|`openshift_master_logout_url` +|Sets `logoutURL` in the master configuration. See xref:../../install_config/web_console_customization.adoc#changing-the-logout-url[Changing the Logout URL] for details. Example value: `\http://example.com` + +|`openshift_master_extension_scripts` +|Sets `extensionScripts` in the master configuration. See xref:../../install_config/web_console_customization.adoc#loading-custom-scripts-and-stylesheets[Loading Extension Scripts and Stylesheets] for details. Example value: `['/path/to/script1.js','/path/to/script2.js']` + +|`openshift_master_extension_stylesheets` +|Sets `extensionStylesheets` in the master configuration. See xref:../../install_config/web_console_customization.adoc#loading-custom-scripts-and-stylesheets[Loading Extension Scripts and Stylesheets] for details. Example value: `['/path/to/stylesheet1.css','/path/to/stylesheet2.css']` + +|`openshift_master_extensions` +|Sets `extensions` in the master configuration. See xref:../../install_config/web_console_customization.adoc#serving-static-files[Serving Static Files] and xref:../../install_config/web_console_customization.adoc#customizing-the-about-page[Customizing the About Page] for details. Example value: `[{'name': 'images', 'sourceDirectory': '/path/to/my_images'}]` + +|`openshift_master_oauth_template` +|Sets the OAuth template in the master configuration. 
See xref:../../install_config/web_console_customization.adoc#customizing-the-login-page[Customizing the Login Page] for details. Example value: `['/path/to/login-template.html']` + +|`openshift_master_metrics_public_url` +|Sets `metricsPublicURL` in the master configuration. See xref:../../install_config/cluster_metrics.adoc#install-setting-the-metrics-public-url[Setting the Metrics Public URL] for details. Example value: `\https://hawkular-metrics.example.com/hawkular/metrics` + +|`openshift_master_logging_public_url` +|Sets `loggingPublicURL` in the master configuration. See xref:../../install_config/aggregate_logging.adoc#aggregate-logging-kibana[Kibana] for details. Example value: `\https://kibana.example.com` + +|=== + [[adv-install-example-inventory-files]] == Example Inventory Files @@ -1345,7 +1867,7 @@ openshift_hosted_logging_storage_kind=dynamic === Single Master Examples You can configure an environment with a single master and multiple nodes, and -either a single embedded *etcd* or multiple external *etcd* hosts. +either a single or multiple number of external *etcd* hosts. [NOTE] ==== @@ -1358,7 +1880,7 @@ not supported. ==== Single Master and Multiple Nodes The following table describes an example environment for a single -xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[master] (with embedded *etcd*) +xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[master] (with *etcd* on the same host) and two xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#node[nodes]: @@ -1370,6 +1892,9 @@ xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc |*master.example.com* |Master and node +|*master.example.com* +|etcd + |*node1.example.com* .2+.^|Node @@ -1395,10 +1920,10 @@ ansible_ssh_user=root #ansible_become=true ifdef::openshift-enterprise[] -deployment_type=openshift-enterprise +openshift_deployment_type=openshift-enterprise endif::[] ifdef::openshift-origin[] -deployment_type=origin +openshift_deployment_type=origin endif::[] # uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider @@ -1408,6 +1933,10 @@ endif::[] [masters] master.example.com +# host group for etcd +[etcd] +master.example.com + # host group for nodes, includes region info [nodes] master.example.com @@ -1474,10 +2003,10 @@ etcd [OSEv3:vars] ansible_ssh_user=root ifdef::openshift-enterprise[] -deployment_type=openshift-enterprise +openshift_deployment_type=openshift-enterprise endif::[] ifdef::openshift-origin[] -deployment_type=origin +openshift_deployment_type=origin endif::[] # uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider @@ -1650,10 +2179,10 @@ lb [OSEv3:vars] ansible_ssh_user=root ifdef::openshift-enterprise[] -deployment_type=openshift-enterprise +openshift_deployment_type=openshift-enterprise endif::[] ifdef::openshift-origin[] -deployment_type=origin +openshift_deployment_type=origin endif::[] # Uncomment the following to enable htpasswd authentication; defaults to @@ -1756,7 +2285,7 @@ lb # Set variables common for all OSEv3 hosts [OSEv3:vars] ansible_ssh_user=root -deployment_type=openshift-enterprise +openshift_deployment_type=openshift-enterprise # Uncomment the following to enable htpasswd authentication; defaults to # DenyAllPasswordIdentityProvider. @@ -1808,74 +2337,240 @@ specifications, and save it as *_/etc/ansible/hosts_*. 
[[running-the-advanced-installation]] == Running the Advanced Installation
-After you have finished xref:configuring-ansible[configuring Ansible] by
-defining your own inventory file in *_/etc/ansible/hosts_* or modifying one of
-the xref:adv-install-example-inventory-files[example inventories], follow these
-steps to run the advanced installation:
-
-// tag::BZ1466783-workaround-install[]
-. If you are using a proxy, you must add the IP address of the etcd endpoints to
-the `openshift_no_proxy` cluster variable in your inventory file.
-+
-[NOTE]
-====
-If you are not using a proxy, you can skip this step.
-====
-+
-In {product-title}
+
+After you have xref:configuring-ansible[configured Ansible] by defining an
+inventory file in *_/etc/ansible/hosts_*, you run the advanced installation
+playbook via Ansible. {product-title} installations are currently supported
+using the RPM-based installer, while the containerized installer is a
+Technology Preview feature.
+
+[[running-the-advanced-installation-rpm]]
+=== Running the RPM-based Installer
+
+The RPM-based installer uses Ansible installed via RPM packages to run playbooks
+and configuration files available on the local host. To run the installer, use
+the following command, specifying `-i` if your inventory file is located somewhere
+other than *_/etc/ansible/hosts_*:
+
+----
ifdef::openshift-enterprise[]
-3.4,
+# ansible-playbook [-i /path/to/inventory] \
+ /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
endif::[]
ifdef::openshift-origin[]
-1.4,
+# ansible-playbook [-i /path/to/inventory] \
+ ~/openshift-ansible/playbooks/byo/config.yml
endif::[]
-the master connected to the etcd cluster using the host name of the etcd
-endpoints. In {product-title}
+----
+
+If for any reason the installation fails, before re-running the installer, see
+xref:installer-known-issues[Known Issues] to check for any specific
+instructions or workarounds.
+
+[[running-the-advanced-installation-system-container]]
+=== Running the Containerized Installer
+
+include::install_config/install/advanced_install.adoc[tag=syscontainers_techpreview]
+
+The
ifdef::openshift-enterprise[]
-3.5,
+*openshift3/ose-ansible*
endif::[]
ifdef::openshift-origin[]
-1.5,
+*openshift/origin-ansible*
endif::[]
-the master now connects to etcd via IP address.
-+
-When configuring a cluster to use proxy settings (see
-xref:advanced-install-configuring-global-proxy[Configuring Global Proxy Options]), this change causes the master-to-etcd connection to be proxied as
-well, rather than being excluded by host name in each host's `NO_PROXY` setting
-(see
-xref:../../install_config/http_proxies.adoc#install-config-http-proxies[Working with HTTP Proxies] for more about `NO_PROXY`).
-+
-To workaround this issue, set the following:
+image is a containerized version of the {product-title} installer that runs as a
+link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_system_containers[system container]. System containers are stored and run outside of the traditional
+*docker* service. Functionally, using the containerized installer is the same as
+using the traditional RPM-based installer, except it is running in a
+containerized environment instead of directly on the host.
+
+. 
Use the Docker CLI to pull the image locally: + ---- -openshift_no_proxy=https://: +ifdef::openshift-enterprise[] +$ docker pull registry.access.redhat.com/openshift3/ose-ansible:v3.6 +endif::[] +ifdef::openshift-origin[] +$ docker pull docker.io/openshift/origin-ansible:v3.6 +endif::[] ---- + +. The installer system container must be stored in +link:https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/content_management_guide/managing_ostree_content[OSTree] +instead of defaulting to *docker* daemon storage. Use the Atomic CLI to import +the installer image from the local *docker* engine to OSTree storage: + -Use the IP that the master will use to contact the etcd cluster as the -``. The `` should be `2379` if you are using standalone etcd -(clustered) or `4001` for embedded etcd (single master, non-clustered etcd). The -installer will be updated in a future release to handle this scenario -automatically during installation and upgrades -(link:https://bugzilla.redhat.com/show_bug.cgi?id=1466783[*BZ#1466783*]). -// end::BZ1466783-workaround-install[] - -. Run the advanced installation using the following playbook: +---- +$ atomic pull --storage ostree \ +ifdef::openshift-enterprise[] + docker:registry.access.redhat.com/openshift3/ose-ansible:v3.6 +endif::[] +ifdef::openshift-origin[] + docker:docker.io/openshift/origin-ansible:v3.6 +endif::[] +---- + +. Install the system container so it is set up as a systemd service: + ---- +$ atomic install --system \ + --storage=ostree \ + --name=openshift-installer \//<1> + --set INVENTORY_FILE=/path/to/inventory \//<2> ifdef::openshift-enterprise[] -# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml + docker:registry.access.redhat.com/openshift3/ose-ansible:v3.6 endif::[] ifdef::openshift-origin[] -# ansible-playbook ~/openshift-ansible/playbooks/byo/config.yml + docker:docker.io/openshift/origin-ansible:v3.6 endif::[] ---- +<1> Sets the name for the systemd service. +<2> Specify the location for your inventory file on your local workstation. + +. Use the `systemctl` command to start the installer service as you would any +other systemd service. This command initiates the cluster installation: ++ +---- +$ systemctl start openshift-installer +---- + If for any reason the installation fails, before re-running the installer, see xref:installer-known-issues[Known Issues] to check for any specific instructions or workarounds. -. After the installation succeeds, continue to -xref:advanced-verifying-the-installation[Verifying the Installation]. +. After the installation completes, you can uninstall the system container if you want. However, if you need to run the installer again to run any other playbooks later, you would have to follow this procedure again. ++ +To uninstall the system container: ++ +---- +$ atomic uninstall openshift-installer +---- + +[[running-the-advanced-installation-system-container-other-playbooks]] +==== Running Other Playbooks + +After you have completed the cluster installation, if you want to later run any +other playbooks using the containerized installer (for example, cluster upgrade +playbooks), you can use the `PLAYBOOK_FILE` environment variable. The default +value is `playbooks/byo/config.yml`, which is the main cluster installation +playbook, but you can set it to the path of another playbook inside the +container. 
+ +For example: + +---- +$ atomic install --system \ + --storage=ostree \ + --name=openshift-installer \ + --set INVENTORY_FILE=/etc/ansible/hosts \ + --set PLAYBOOK_FILE=playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade.yml \//<1> +ifdef::openshift-enterprise[] + docker:registry.access.redhat.com/openshift3/ose-ansible:v3.6 +endif::[] +ifdef::openshift-origin[] + docker:docker.io/openshift/origin-ansible:v3.6 +endif::[] +---- +<1> Set `PLAYBOOK_FILE` to the relative path of the playbook starting at the +*_playbooks/_* directory. Playbooks mentioned elsewhere in {product-title} +documentation assume use of the RPM-based installer, so use this relative path +instead when using the containerized installer. + +[[running-the-advanced-installation-tsb]] +=== Deploying the Template Service Broker + +If you have xref:enabling-service-catalog[enabled the service catalog] and want +to deploy the xref:configuring-template-service-broker[template service broker] +(TSB), run the following manual steps after the cluster installation completes +successfully: + +[NOTE] +==== +The template service broker is a Technology Preview feature only. +ifdef::openshift-enterprise[] +Technology Preview features are not +supported with Red Hat production service level agreements (SLAs), might not be +functionally complete, and Red Hat does not recommend to use them for +production. These features provide early access to upcoming product features, +enabling customers to test functionality and provide feedback during the +development process. + +For more information on Red Hat Technology Preview features support scope, see +https://access.redhat.com/support/offerings/techpreview/. +endif::[] +==== + +[WARNING] +==== +Enabling the TSB currently requires opening unauthenticated access to the +cluster; this security issue will be resolved before exiting the Technology +Preview phase. +==== + +. Ensure that one or more source projects for the TSB were defined via +`openshift_template_service_broker_namespaces` as described in +xref:../../install_config/install/advanced_install.adoc#configuring-template-service-broker[Configuring the Template Service Broker]. + +. Run the following command to enable unauthenticated access for the TSB: ++ +---- +$ oc adm policy add-cluster-role-to-group \ + system:openshift:templateservicebroker-client \ + system:unauthenticated system:authenticated +---- + +. Create a *_template-broker.yml_* file with the following contents: ++ +[source,yaml] +---- +apiVersion: servicecatalog.k8s.io/v1alpha1 +kind: Broker +metadata: + name: template-broker +spec: + url: https://kubernetes.default.svc:443/brokers/template.openshift.io +---- + +. Use the file to register the broker: ++ +---- +$ oc create -f template-broker.yml +---- + +. Enable the Technology Preview feature in the web console to use the TSB instead +of the standard `openshift` global library behavior. + +.. Save the following script to a file (for example, *_tech-preview.js_*): ++ +[source, javascript] +---- +window.OPENSHIFT_CONSTANTS.ENABLE_TECH_PREVIEW_FEATURE.template_service_broker = true; +---- + +.. Add the file to the master configuration file in +*_/etc/origin/master/master-config.yml_*: ++ +[source, yaml] +---- +assetConfig: + ... + extensionScripts: + - /path/to/tech-preview.js +---- + +.. 
Restart the master service: ++ +ifdef::openshift-origin[] +---- +# systemctl restart origin-master +---- +endif::[] +ifdef::openshift-enterprise[] +---- +# systemctl restart atomic-openshift-master +---- +endif::[] [[advanced-verifying-the-installation]] == Verifying the Installation @@ -1902,12 +2597,6 @@ and the web console port number to access the web console with a web browser. For example, for a master host with a host name of `master.openshift.com` and using the default port of `8443`, the web console would be found at `\https://master.openshift.com:8443/console`. -. Now that the install has been verified, run the following command on each master and node host to add the *atomic-openshift* packages back to the list of yum excludes on the host: - + - ---- - # atomic-openshift-excluder exclude - ---- - // end::verifying-the-installation[] [NOTE] @@ -2023,10 +2712,10 @@ nodes <1> [OSEv3:vars] ansible_ssh_user=root ifdef::openshift-enterprise[] -deployment_type=openshift-enterprise +openshift_deployment_type=openshift-enterprise endif::[] ifdef::openshift-origin[] -deployment_type=origin +openshift_deployment_type=origin endif::[] [nodes]