
Commit e82e395

[OSDOCS-13450]: Isolation details for HCP
1 parent 262ae25 commit e82e395

File tree

2 files changed: +48 -5 lines changed


hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc

+6 -5
@@ -10,11 +10,11 @@ Before you get started with {hcp} for {product-title}, you must properly label n
 
 * To ensure high availability and proper workload deployment. For example, you can set the `node-role.kubernetes.io/infra` label to avoid having the control-plane workload count toward your {product-title} subscription.
 * To ensure that control plane workloads are separate from other workloads in the management cluster.
-//lahinson - sept. 2023 - commenting out the following lines until those levels are supported for self-managed hypershift
-//* To ensure that control plane workloads are configured at one of the following multi-tenancy distribution levels:
-//** Everything shared: Control planes for hosted clusters can run on any node that is designated for control planes.
-//** Request serving isolation: Serving pods are requested in their own dedicated nodes.
-//** Nothing shared: Every control plane has its own dedicated nodes.
+* To ensure that control plane workloads are configured at one of the following multi-tenancy distribution levels:
+
+** Everything shared: Control planes for hosted clusters can run on any node that is designated for control planes.
+** Request serving isolation: Serving pods are requested in their own dedicated nodes.
+** Nothing shared: Every control plane has its own dedicated nodes.
 
 [IMPORTANT]
 ====
@@ -24,3 +24,4 @@ Do not use the management cluster for your workload. Workloads must not run on n
 include::modules/hcp-labels-taints.adoc[leveloffset=+1]
 include::modules/hcp-priority-classes.adoc[leveloffset=+1]
 include::modules/hcp-virt-taints-tolerations.adoc[leveloffset=+1]
+include::modules/hcp-isolation.adoc[leveloffset=+1]

modules/hcp-isolation.adoc

+42
@@ -0,0 +1,42 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="hcp-isolation_{context}"]
+= Control plane isolation
+
+You can configure {hcp} to isolate network traffic or control plane pods.
+
+== Network policy isolation
+
+Each hosted control plane is assigned to run in a dedicated Kubernetes namespace. By default, all network traffic in that namespace is denied.
+
+The following network traffic is allowed through the network policy that is enforced by the Kubernetes Container Network Interface (CNI):
+
+* Ingress pod-to-pod communication in the same namespace (intra-tenant)
+* Ingress on port 6443 to the hosted `kube-apiserver` pod for the tenant
+* Metrics scraping from the management cluster namespace that has the `network.openshift.io/policy-group: monitoring` label
+
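For illustration, a default-deny policy of the kind described above, together with an allow rule for `kube-apiserver` ingress on port 6443, might look like the following sketch. The namespace name `clusters-example-hcp` and the `app: kube-apiserver` pod selector are assumptions for the example, not values taken from this commit.

[source,yaml]
----
# Hypothetical sketch: deny all ingress in a hosted control plane namespace,
# then allow ingress to the kube-apiserver pod on port 6443.
# The namespace name and pod label are assumptions for illustration.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: clusters-example-hcp
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kube-apiserver
  namespace: clusters-example-hcp
spec:
  podSelector:
    matchLabels:
      app: kube-apiserver
  ingress:
  - ports:
    - protocol: TCP
      port: 6443
----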
+== Control plane pod isolation
+
+In addition to network policies, each hosted control plane pod runs with the `restricted` security context constraint. This policy denies access to all host features and requires pods to run with a UID and an SELinux context that are allocated uniquely to each namespace that hosts a customer control plane.
+
+The policy ensures the following constraints:
+* Pods cannot run as privileged.
+* Pods cannot mount host directory volumes.
+* Pods must run as a user in a pre-allocated range of UIDs.
+* Pods must run with a pre-allocated MCS label.
+* Pods cannot access the host network namespace.
+* Pods cannot expose host network ports.
+* Pods cannot access the host PID namespace.
+* By default, pods drop the following Linux capabilities: `KILL`, `MKNOD`, `SETUID`, and `SETGID`.
+
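As a rough illustration of what these constraints look like at the pod level, a container that complies with a restricted-style policy typically sets a security context similar to the following sketch. The pod name, image, and explicit field values are assumptions for the example; in practice the security context constraint itself enforces or injects these settings.

[source,yaml]
----
# Hypothetical pod fragment that satisfies restricted-style constraints:
# no privileged mode, no host namespaces or host ports, required
# capabilities dropped, and no hostPath volumes. The UID and SELinux (MCS)
# values are left unset so the pre-allocated namespace ranges apply.
apiVersion: v1
kind: Pod
metadata:
  name: example-control-plane-pod
spec:
  hostNetwork: false
  hostPID: false
  containers:
  - name: example
    image: registry.example.com/example:latest
    securityContext:
      privileged: false
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETUID
        - SETGID
----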
+The management components, such as `kubelet` and `crio`, on each management cluster worker node are protected by an SELinux label that is not accessible to the SELinux context for pods that support {hcp}.
+
+The following SELinux labels are used for key processes and sockets:
+
+* *kubelet*: `system_u:system_r:unconfined_service_t:s0`
+* *crio*: `system_u:system_r:container_runtime_t:s0`
+* *crio.sock*: `system_u:object_r:container_var_run_t:s0`
+* *<example user container processes>*: `system_u:system_r:container_t:s0:c14,c24`
