Update 3.11 for openshift start / hyperkube changes #18691

131 changes: 50 additions & 81 deletions install_config/master_node_configuration.adoc
@@ -12,19 +12,16 @@ toc::[]

== Customizing master and node configuration after installation

The `openshift start` command (for master servers) and the `hyperkube` command
(for node servers) take a limited set of arguments that are sufficient for launching
servers in a development or experimental environment. However, these arguments
are insufficient to describe and control the full set of configuration and
security options that are necessary in a production environment.

You must provide these options in the
xref:../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[master configuration file],
at *_/etc/origin/master/master-config.yaml_*, and the
xref:../admin_guide/manage_nodes.adoc#modifying-nodes[node configuration maps].
These files define options including overriding the default plug-ins, connecting
to etcd, automatically creating service accounts, building image names,
customizing project requests, configuring volume plug-ins, and much more.
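
For orientation, the following is a minimal, hypothetical excerpt of a
*_master-config.yaml_* file showing a few of these option groups (connecting to
etcd, building image names, and customizing project requests). The exact keys
and values depend on your installation, so treat this as a sketch rather than a
working configuration:

[source,yaml]
----
etcdClientInfo:           # how the master connects to etcd
  ca: ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://master.example.com:2379
imageConfig:              # how component image names are built
  format: openshift/origin-${component}:${version}
  latest: false
projectConfig:            # customizing project requests
  projectRequestMessage: "Contact your cluster administrator to request a project."
----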
@@ -76,7 +73,7 @@ in the configuration files] themselves.
[NOTE]
====
To modify a node in your cluster, update the xref:../admin_guide/manage_nodes.adoc#modifying-nodes[node configuration maps] as needed.
Do not manually edit the *_node-config.yaml_* file.
====

endif::openshift-origin[]
@@ -290,7 +287,7 @@ xref:../install_config/master_node_configuration.adoc#node-configuration-files[n
[NOTE]
====
To modify a node in your cluster, update the xref:../admin_guide/manage_nodes.adoc#modifying-nodes[node configuration maps] as needed.
Do not manually edit the *_node-config.yaml_* file.
====
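
For example, assuming the default node group configuration maps that the
installer creates in the `openshift-node` project (such as `node-config-master`,
`node-config-infra`, and `node-config-compute`), you could edit the
configuration for compute nodes with:

----
$ oc edit configmap node-config-compute -n openshift-node
----

The sync pods running in that project then apply the updated configuration to
the matching nodes.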

[[master-configuration-files]]
@@ -1543,15 +1540,13 @@ starting with 1.9, the corruption issue is resolved and it is safe to switch to
parallel pulls.
====

[source,yaml]
----
kubeletArguments:
  serialize-image-pulls:
  - "false" <1>
----
<1> Change to `true` to disable parallel pulls. This is the default configuration.

[[master-node-configuration-passwords-and-other-data]]
== Passwords and Other Sensitive Data
@@ -1565,31 +1560,27 @@ or in encrypted files.
.Environment Variable Example
[source,yaml]
----
...
bindPassword:
  env: BIND_PASSWORD_ENV_VAR_NAME
----

.External File Example
[source,yaml]
----
...
bindPassword:
  file: bindPassword.txt
----

.Encrypted External File Example
[source,yaml]
----
...
bindPassword:
  file: bindPassword.encrypted
  keyFile: bindPassword.key
----

To create the encrypted file and key file for the above example:

[options="nowrap"]
----
$ oc adm ca encrypt --genkey=bindPassword.key --out=bindPassword.encrypted
> Data to encrypt: B1ndPass0rd!
@@ -1627,23 +1618,20 @@ is recommended to not make them greater than these values.
To create configuration files for an all-in-one server (a master and a node on
the same host) in the specified directory:

[options="nowrap"]
----
$ openshift start --write-config=/openshift.local.config
----

To create a xref:master-configuration-files[master configuration file] and
other required files in the specified directory:

[options="nowrap"]
----
$ openshift start master --write-config=/openshift.local.config/master
----

To create a xref:node-configuration-files[node configuration file] and other
related files in the specified directory:

[options="nowrap"]
----
$ oc adm create-node-config \
--node-dir=/openshift.local.config/node-<node_hostname> \
@@ -1661,53 +1649,58 @@ comma-delimited list of every host name or IP address you want server
certificates to be valid for.
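
For reference, a representative invocation might look like the following. The
flags shown here are illustrative only; confirm the exact set with
`oc adm create-node-config --help` on your cluster:

----
$ oc adm create-node-config \
  --node-dir=/openshift.local.config/node-<node_hostname> \
  --node=<node_hostname> \
  --hostnames=<node_hostname>,<ip_address> \
  --certificate-authority=/etc/origin/master/ca.crt \
  --signer-cert=/etc/origin/master/ca.crt \
  --signer-key=/etc/origin/master/ca.key \
  --signer-serial=/etc/origin/master/ca.serial.txt
----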

[[launching-servers-using-configuration-files]]
== Launching Servers Using Configuration Files

After you have modified the master and node configuration files to your
specifications, you can use them when launching servers by specifying them as an
argument. If you specify a configuration file, none of the other command line
options you pass are respected.

[NOTE]
====
To modify a node in your cluster, update the xref:../admin_guide/manage_nodes.adoc#modifying-nodes[node configuration maps] as needed.
Do not manually edit the *_node-config.yaml_* file.
====

To launch an all-in-one server using a master configuration and a node
configuration file:

[options="nowrap"]
. Launch a master server using a master configuration file:
+
----
$ openshift start --master-config=/openshift.local.config/master/master-config.yaml --node-config=/openshift.local.config/node-<node_hostname>/node-config.yaml
$ openshift start master \
--config=/openshift.local.config/master/master-config.yaml
----

To launch a master server using a master configuration file:

[options="nowrap"]
. Start the network proxy and SDN plug-ins using a node configuration file and a
*_node.kubeconfig_* file:
+
----
$ openshift start master --config=/openshift.local.config/master/master-config.yaml
$ openshift start network \
--config=/openshift.local.config/node-<node_hostname>/node-config.yaml \
--kubeconfig=/openshift.local.config/node-<node_hostname>/node.kubeconfig
----

To launch a node server using a node configuration file:

[options="nowrap"]
. Launch a node server using a node configuration file:
+
----
$ openshift start node --config=/openshift.local.config/node-<node_hostname>/node-config.yaml
$ hyperkube kubelet \
$(/usr/bin/openshift-node-config \
--config=/openshift.local.config/node-<node_hostname>/node-config.yaml)
----
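
Once all three processes are running, you can confirm that the node has
registered with the master:

----
$ oc get nodes
----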

[[master-node-view-logs]]
== Viewing Master and Node Logs

{product-title} collects log messages for debugging, using the
`systemd-journald.service` for nodes and a script called `master-logs` for
masters.
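
For example, to review node logs directly from the journal, you can run a
command like the following. The unit name varies by deployment; this example
assumes the `atomic-openshift-node.service` unit:

----
# journalctl -r -u atomic-openshift-node.service
----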

[NOTE]
====
The number of lines displayed in the web console is hard-coded at 5000 and
cannot be changed. To see the entire log, use the CLI.
====

The logging uses five log message severities based on Kubernetes logging
conventions, as follows:

.Log Level Options
[cols="3a,6a",options="header"]
@@ -1761,11 +1754,17 @@ master-logs api api 2> file
[[master-node-config-logging-levels]]
=== Configuring Logging Levels

You can control which INFO messages are logged by setting the `DEBUG_LOGLEVEL`
option in the xref:../admin_guide/manage_nodes.adoc#modifying-nodes[node configuration files]
or the *_/etc/origin/master/master.env_* file. Configuring the logs to collect
all messages can lead to large logs that are difficult to interpret and can take
up excessive space. Only collect all messages when you need to debug your
cluster.
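
For example, to increase the verbosity of the master logs, you might add a line
like the following to *_/etc/origin/master/master.env_* and then restart the
master services. The value `4` is only an illustration; choose a level from the
table above:

----
DEBUG_LOGLEVEL=4
----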

[NOTE]
====
Messages with FATAL, ERROR, WARNING, and some INFO severities appear in the logs
regardless of the log configuration.
====

To change the logging level:
@@ -1789,37 +1788,8 @@ process. For more information, see
xref:../install/configuring_inventory_file.adoc#cluster-variables-table[Cluster Variables].
====

The following examples are excerpts of redirected master log files at various
log levels. System information has been removed from these examples.

.Excerpt of `master-logs api api 2> file` output at loglevel=2

@@ -1975,7 +1945,6 @@ W1022 15:12:00.256861 1 swagger.go:38] No API exists for predefined swagge
W1022 15:12:00.258106 1 swagger.go:38] No API exists for predefined swagger description /api/v1
----


[[master-node-config-restart-services]]
== Restarting master and node services
