[OSDOCS-13450]: Isolation details for HCP #91473

Open: wants to merge 1 commit into base: main
11 changes: 6 additions & 5 deletions hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc
@@ -10,11 +10,11 @@ Before you get started with {hcp} for {product-title}, you must properly label n

* To ensure high availability and proper workload deployment. For example, you can set the `node-role.kubernetes.io/infra` label so that control plane workloads do not count toward your {product-title} subscription, as shown in the sketch after this list.
* To ensure that control plane workloads are separate from other workloads in the management cluster.
//lahinson - sept. 2023 - commenting out the following lines until those levels are supported for self-managed hypershift
//* To ensure that control plane workloads are configured at one of the following multi-tenancy distribution levels:
//** Everything shared: Control planes for hosted clusters can run on any node that is designated for control planes.
//** Request serving isolation: Serving pods are requested in their own dedicated nodes.
//** Nothing shared: Every control plane has its own dedicated nodes.
* To ensure that control plane workloads are configured at one of the following multi-tenancy distribution levels:

** Everything shared: Control planes for hosted clusters can run on any node that is designated for control planes.
** Request serving isolation: Serving pods run on their own dedicated nodes.
** Nothing shared: Every control plane has its own dedicated nodes.
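
The following sketch shows the shape of a node that carries the `node-role.kubernetes.io/infra` label. The node name is hypothetical; in practice, you apply the label to an existing node, for example with the `oc label node` command.

[source,yaml]
----
apiVersion: v1
kind: Node
metadata:
  name: worker-0    # hypothetical node name
  labels:
    # Marks the node as infrastructure so that control plane workloads
    # scheduled on it do not count toward your subscription.
    node-role.kubernetes.io/infra: ""
----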

[IMPORTANT]
====
@@ -24,3 +24,4 @@ Do not use the management cluster for your workload. Workloads must not run on n
include::modules/hcp-labels-taints.adoc[leveloffset=+1]
include::modules/hcp-priority-classes.adoc[leveloffset=+1]
include::modules/hcp-virt-taints-tolerations.adoc[leveloffset=+1]
include::modules/hcp-isolation.adoc[leveloffset=+1]
42 changes: 42 additions & 0 deletions modules/hcp-isolation.adoc
@@ -0,0 +1,42 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc

:_mod-docs-content-type: CONCEPT
[id="hcp-isolation_{context}"]
= Control plane isolation

You can configure {hcp} to isolate network traffic and control plane pods.

== Network policy isolation

Each hosted control plane runs in a dedicated Kubernetes namespace on the management cluster. By default, a network policy in that namespace denies all network traffic.

The following network traffic is allowed through the network policy that is enforced by the Kubernetes Container Network Interface (CNI):

* Ingress pod-to-pod communication in the same namespace (intra-tenant)
* Ingress on port 6443 to the hosted `kube-apiserver` pod for the tenant
* Ingress metrics scraping from management cluster namespaces that have the `network.openshift.io/policy-group: monitoring` label
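
For illustration only, the following sketch shows the shape of a policy that allows ingress on port 6443 to the hosted `kube-apiserver` pod. The policy name, namespace, and pod label are hypothetical; the actual policies are created for you and might differ.

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kas-ingress       # hypothetical policy name
  namespace: clusters-example   # hypothetical hosted control plane namespace
spec:
  podSelector:
    matchLabels:
      app: kube-apiserver       # assumed label on the hosted kube-apiserver pod
  policyTypes:
  - Ingress
  ingress:
  # Allow ingress to the hosted kube-apiserver on its serving port.
  - ports:
    - protocol: TCP
      port: 6443
----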

== Control plane pod isolation

In addition to network policies, each hosted control plane pod runs with the `restricted` security context constraint (SCC). This policy denies access to all host features and requires pods to run with a UID and an SELinux context that are allocated uniquely to each namespace that hosts a customer control plane.

The policy ensures the following constraints:

* Pods cannot run as privileged.
* Pods cannot mount host directory volumes.
* Pods must run as a user in a pre-allocated range of UIDs.
* Pods must run with a pre-allocated MCS label.
* Pods cannot access the host network namespace.
* Pods cannot expose host network ports.
* Pods cannot access the host PID namespace.
* By default, pods drop the following Linux capabilities: `KILL`, `MKNOD`, `SETUID`, and `SETGID`.
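
As a sketch of where these pre-allocated values come from, the following namespace excerpt shows the annotations that {product-title} sets when a namespace is created. The namespace name and values are hypothetical examples.

[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: clusters-example    # hypothetical hosted control plane namespace
  annotations:
    # The restricted SCC forces pods in this namespace to run with a UID
    # from this pre-allocated range and with this pre-allocated MCS label.
    openshift.io/sa.scc.uid-range: "1000680000/10000"   # example value
    openshift.io/sa.scc.mcs: "s0:c26,c15"               # example value
----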

The management components, such as `kubelet` and `crio`, on each management cluster worker node are protected by SELinux labels that are not accessible to the SELinux context of pods that support {hcp}.

The following SELinux labels are used for key processes and sockets:

* *kubelet*: `system_u:system_r:unconfined_service_t:s0`
* *crio*: `system_u:system_r:container_runtime_t:s0`
* *crio.sock*: `system_u:object_r:container_var_run_t:s0`
* *<example user container processes>*: `system_u:system_r:container_t:s0:c14,c24`
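
To connect these labels to the pod isolation model, the following hypothetical pod excerpt shows where an MCS level such as `s0:c14,c24` appears in a pod specification. The pod name, namespace, and image are placeholders; in practice, the level is assigned from the namespace annotations rather than set by hand.

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: example-control-plane-pod   # hypothetical pod name
  namespace: clusters-example       # hypothetical namespace
spec:
  containers:
  - name: example
    image: registry.example.com/example:latest   # placeholder image
    securityContext:
      seLinuxOptions:
        # Pods run as container_t with a namespace-unique MCS level, so they
        # cannot access files or processes that are labeled for other
        # namespaces or for host services such as kubelet and CRI-O.
        level: "s0:c14,c24"   # example MCS label from the list above
----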