Commit 68509c6

final review comments
1 parent 76a0206 commit 68509c6

12 files changed (+51 -52 lines)

modules/cnf-about-hyperthreading-for-low-latency-and-real-time-applications.adoc

+16

@@ -0,0 +1,16 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/low_latency_tuning/cnf-understanding-low-latency.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="cnf-about-hyper-threading-for-low-latency-and-real-time-applications_{context}"]
+= About Hyper-Threading for low latency and real-time applications
+
+Hyper-Threading is an Intel processor technology that allows a physical CPU processor core to function as two logical cores, executing two independent threads simultaneously. Hyper-Threading allows for better system throughput for certain workload types where parallel processing is beneficial. The default {product-title} configuration expects Hyper-Threading to be enabled.
+
+For telecommunications applications, it is important to design your application infrastructure to minimize latency as much as possible. Hyper-Threading can slow performance times and negatively affect throughput for compute-intensive workloads that require low latency. Disabling Hyper-Threading ensures predictable performance and can decrease processing times for these workloads.
+
+[NOTE]
+====
+Hyper-Threading implementation and configuration differ depending on the hardware you are running {product-title} on. Consult the relevant host hardware tuning information for more details of the Hyper-Threading implementation specific to that hardware. Disabling Hyper-Threading can increase the cost per core of the cluster.
+====

modules/cnf-about_hyperthreading_for_low_latency_and_real_time_applications.adoc

-17
This file was deleted.

modules/cnf-configure_for_irq_dynamic_load_balancing.adoc

+1-1
@@ -1,4 +1,4 @@
-// Module included in the following assemblies:
+// Module included in the following assemblies:
 //
 // * scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc

modules/configuring_hyperthreading_for_a_cluster.adoc renamed to modules/cnf-configuring-hyperthreading-for-a-cluster.adoc

+1-1
@@ -3,7 +3,7 @@
 // * scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc

 :_mod-docs-content-type: PROCEDURE
-[id="configuring_hyperthreading_for_a_cluster_{context}"]
+[id="cnf-configuring-hyperthreading-for-a-cluster_{context}"]
 = Configuring Hyper-Threading for a cluster

 To configure Hyper-Threading for an {product-title} cluster, set the CPU threads in the performance profile to the same cores that are configured for the reserved or isolated CPU pools.

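For orientation, the module above maps onto the `cpu` stanza of a `PerformanceProfile`. A minimal sketch, with illustrative CPU numbering that is not taken from this commit, where both sibling threads of each physical core stay in the same pool:

[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: performance
spec:
  cpu:
    # Physical cores 0-1 plus their Hyper-Threading sibling threads 32-33 (illustrative numbering)
    reserved: "0-1,32-33"
    # Remaining cores plus their sibling threads
    isolated: "2-31,34-63"
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
----
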
modules/cnf-how-run-podman-to-create-profile.adoc

+1-1
@@ -3,7 +3,7 @@
 // * scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc

 [id="how-to-run-podman-to-create-a-profile_{context}"]
-= How to run `podman` to create a performance profile
+= How to run podman to create a performance profile

 The following example illustrates how to run `podman` to create a performance profile with 20 reserved CPUs that are to be split across the NUMA nodes.

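As a rough sketch of the kind of `podman` invocation this module describes (the image tag, paths, and machine config pool name are placeholders, and the exact flag set should be verified against the tool's help output):

[source,terminal]
----
$ podman run --entrypoint performance-profile-creator \
    -v /path/to/must-gather:/must-gather:z \
    registry.redhat.io/openshift4/performance-addon-rhel8-operator:v{product-version} \
    --mcp-name=worker-cnf \
    --reserved-cpu-count=20 \
    --split-reserved-cpus-across-numa=true \
    --rt-kernel=true \
    --must-gather-dir-path=/must-gather > my-performance-profile.yaml
----
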
modules/cnf-running-the-performance-creator-profile.adoc

+4-4
@@ -4,7 +4,7 @@

 :_mod-docs-content-type: PROCEDURE
 [id="running-the-performance-profile-profile-cluster-using-podman_{context}"]
-= Running the Performance Profile Creator using podman
+= Running the Performance Profile Creator using Podman

 As a cluster administrator, you can run `podman` and the Performance Profile Creator to create a performance profile.

@@ -82,10 +82,10 @@ Flags:
 +
 [NOTE]
 ====
-Discovery mode inspects your cluster using the output from `must-gather`. The output produced includes information on:
+Discovery mode inspects your cluster by using the output from `must-gather`. The output produced includes information on:

 * The NUMA cell partitioning with the allocated CPU ids
-* Whether hyperthreading is enabled
+* Whether Hyper-Threading is enabled

 Using this information you can set appropriate values for some of the arguments supplied to the Performance Profile Creator tool.
 ====
@@ -102,7 +102,7 @@ This command uses the performance profile creator as a new entry point to `podma
 The `-v` option can be the path to either:

 * The `must-gather` output directory
-* An existing directory containing the `must-gather` decompressed tarball
+* An existing directory containing the `must-gather` decompressed .tar file

 The `info` option requires a value which specifies the output format. Possible values are log and JSON. The JSON format is reserved for debugging.
 ====

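To make the discovery-mode note above concrete, a hedged sketch of invoking the tool with the `info` option (paths and image are placeholders, not taken from this commit):

[source,terminal]
----
$ podman run --entrypoint performance-profile-creator \
    -v /path/to/must-gather:/must-gather:z \
    registry.redhat.io/openshift4/performance-addon-rhel8-operator:v{product-version} \
    --info=log \
    --must-gather-dir-path=/must-gather
----
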
modules/cnf-scheduling-workload-onto-worker-with-real-time-capabilities.adoc

+13-13
@@ -3,9 +3,9 @@
 // * scalability_and_performance/low_latency_tuning/cnf-provisioning-low-latency-workloads.adoc

 [id="cnf-scheduling-workload-onto-worker-with-real-time-capabilities_{context}"]
-= Scheduling a workload onto a worker with real-time capabilities
+= Scheduling a low latency workload onto a worker with real-time capabilities

-You can schedule realtime workloads onto a worker node where a performance profile that configures real-time capabilities is applied.
+You can schedule low latency workloads onto a worker node where a performance profile that configures real-time capabilities is applied.

 [NOTE]
 ====
@@ -19,19 +19,19 @@ The label selectors must match the nodes that are attached to the machine config

 * You have logged in as a user with `cluster-admin` privileges.

-* You have applied a performance profile in the cluster that tunes worker nodes for realtime workloads.
+* You have applied a performance profile in the cluster that tunes worker nodes for low latency workloads.

 .Procedure

-. Create a `Pod` CR for the realtime workload and apply it in the cluster, for example:
+. Create a `Pod` CR for the low latency workload and apply it in the cluster, for example:
 +
 .Example `Pod` spec configured to use real-time processing
 [source,yaml,subs="attributes+"]
 ----
 apiVersion: v1
 kind: Pod
 metadata:
-  name: dynamic-realtime-pod
+  name: dynamic-low-latency-pod
   annotations:
     cpu-quota.crio.io: "disable" <1>
     cpu-load-balancing.crio.io: "disable" <2>
@@ -42,7 +42,7 @@ spec:
     seccompProfile:
       type: RuntimeDefault
   containers:
-  - name: dynamic-realtime-pod
+  - name: dynamic-low-latency-pod
     image: "registry.redhat.io/openshift4/cnf-tests-rhel8:v{product-version}"
     command: ["sleep", "10h"]
     resources:
@@ -58,7 +58,7 @@ spec:
        drop: [ALL]
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: "" <4>
-  runtimeClassName: performance-dynamic-realtime-profile <5>
+  runtimeClassName: performance-dynamic-low-latency-profile <5>
 # ...
 ----
 <1> Disables the CPU completely fair scheduler (CFS) quota at the pod run time
@@ -67,7 +67,7 @@ spec:
 <4> The `nodeSelector` label must match the label that you specify in the `Node` CR.
 <5> `runtimeClassName` must match the name of the performance profile configured in the cluster.

-. Enter the pod `runtimeClassName` in the form performance-<profile_name>, where <profile_name> is the `name` from the `PerformanceProfile` YAML, in this example, `performance-dynamic-realtime-profile`.
+. Enter the pod `runtimeClassName` in the form performance-<profile_name>, where <profile_name> is the `name` from the `PerformanceProfile` YAML, in this example, `performance-dynamic-low-latency-profile`.

 . Ensure the pod is running correctly. Status should be `running`, and the correct cnf-worker node should be set:
 +
@@ -80,15 +80,15 @@ $ oc get pod -o wide
 +
 [source,terminal]
 ----
-NAME                   READY   STATUS    RESTARTS   AGE     IP             NODE          NOMINATED NODE   READINESS GATES
-dynamic-realtime-pod   1/1     Running   0          5h33m   <ip-address>   <node-name>   <none>           <none>
+NAME                      READY   STATUS    RESTARTS   AGE     IP             NODE          NOMINATED NODE   READINESS GATES
+dynamic-low-latency-pod   1/1     Running   0          5h33m   <ip-address>   <node-name>   <none>           <none>
 ----

 . Get the CPUs that the pod configured for IRQ dynamic load balancing runs on:
 +
 [source,terminal]
 ----
-$ oc exec -it dynamic-realtime-pod -- /bin/bash -c "grep Cpus_allowed_list /proc/self/status | awk '{print $2}'"
+$ oc exec -it dynamic-low-latency-pod -- /bin/bash -c "grep Cpus_allowed_list /proc/self/status | awk '{print $2}'"
 ----
 +
 .Expected output
@@ -131,7 +131,7 @@ sh-4.4# chroot /host
 sh-4.4#
 ----

-. Ensure the default system CPU affinity mask does not include the `dynamic-realtime-pod` CPUs, for example, CPUs 2 and 3.
+. Ensure the default system CPU affinity mask does not include the `dynamic-low-latency-pod` CPUs, for example, CPUs 2 and 3.
 +
 [source,terminal]
 ----
@@ -145,7 +145,7 @@ $ cat /proc/irq/default_smp_affinity
 33
 ----

-. Ensure the system IRQs are not configured to run on the `dynamic-realtime-pod` CPUs:
+. Ensure the system IRQs are not configured to run on the `dynamic-low-latency-pod` CPUs:
 +
 [source,terminal]
 ----

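A quick, optional check related to the `runtimeClassName` step in the module above: the runtime class generated by the Node Tuning Operator can be listed before the pod is created. The profile name here is the module's illustrative one:

[source,terminal]
----
$ oc get runtimeclass performance-dynamic-low-latency-profile
----
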
modules/cnf-understanding-low-latency.adoc

+1-1
@@ -37,6 +37,6 @@ Workload hints simplify the fine-tuning of performance to industry sector settin
 * Real-time capability
 * Efficient use of power

-In an ideal world, all of those would be prioritized: in real life, some come at the expense of others. The Node Tuning Operator is now aware of the workload expectations and better able to meet the demands of the workload. The cluster admin can now specify into which use case that workload falls. The Node Tuning Operator uses the `PerformanceProfile` to fine tune the performance settings for the workload.
+Ideally, all of the previously listed items are prioritized. However, some of these items come at the expense of others. The Node Tuning Operator is now aware of the workload expectations and better able to meet the demands of the workload. The cluster admin can now specify into which use case that workload falls. The Node Tuning Operator uses the `PerformanceProfile` to fine tune the performance settings for the workload.

 The environment in which an application is operating influences its behavior. For a typical data center with no strict latency requirements, only minimal default tuning is needed that enables CPU partitioning for some high performance workload pods. For data centers and workloads where latency is a higher priority, measures are still taken to optimize power consumption. The most complicated cases are clusters close to latency-sensitive equipment such as manufacturing machinery and software-defined radios. This last class of deployment is often referred to as Far edge. For Far edge deployments, ultra-low latency is the ultimate priority, and is achieved at the expense of power management.

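The workload-hints trade-off described above surfaces in the profile roughly as follows; the field values are illustrative only, showing a latency-first profile that accepts higher power consumption:

[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: performance
spec:
  workloadHints:
    realTime: true              # prioritize real-time capability
    highPowerConsumption: true  # accept higher power draw to achieve the lowest latency
# ...
----
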
scalability_and_performance/low_latency_tuning/cnf-debugging-low-latency-tuning-status.adoc

+1-1
@@ -21,6 +21,6 @@ include::modules/cnf-collecting-low-latency-tuning-debugging-data-for-red-hat-su

 * xref:../../scalability_and_performance/using-node-tuning-operator.adoc#using-node-tuning-operator[Using the Node Tuning Operator]

-* xref:../../scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed-by-apps.adoc#configuring-huge-pages_huge-pages[Configuring huge pages]
+* xref:../../scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed-by-apps.adoc#configuring-huge-pages_huge-pages[Configuring huge pages at boot time]

 * xref:../../scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed-by-apps.adoc#how-huge-pages-are-consumed-by-apps_huge-pages[How huge pages are consumed by apps]

scalability_and_performance/low_latency_tuning/cnf-provisioning-low-latency-workloads.adoc

+3-3
@@ -10,11 +10,11 @@ Many organizations need high performance computing and low, predictable latency,

 {product-title} provides the Node Tuning Operator to implement automatic tuning to achieve low latency performance and consistent response time for {product-title} applications.
 You use the performance profile configuration to make these changes.
-You can update the kernel to kernel-rt (real-time), reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, isolate CPUs for application containers to run the workloads, and disable unused CPUs to reduce power consumption.
+You can update the kernel to kernel-rt, reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, isolate CPUs for application containers to run the workloads, and disable unused CPUs to reduce power consumption.

 [NOTE]
 ====
-When writing your applications, follow the general recommendations described in link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_for_real_time/8/html-single/tuning_guide/index#chap-Application_Tuning_and_Deployment[Application tuning and deployment].
+When writing your applications, follow the general recommendations described in link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_for_real_time/9/html-single/understanding_rhel_for_real_time/index#assembly_rhel-for-real-time-processes-and-threads_understanding-RHEL-for-Real-Time-core-concepts[RHEL for Real Time processes and threads].
 ====

 [role="_additional-resources"]
@@ -47,7 +47,7 @@ include::modules/cnf-disabling-cpu-cfs-quota.adoc[leveloffset=+1]
 [role="_additional-resources"]
 .Additional resources

-* For more information about recommended firmware configuration, see xref:../../edge_computing/ztp-vdu-validating-cluster-tuning.adoc#ztp-du-firmware-config-reference_vdu-config-ref[Recommended firmware configuration for vDU cluster hosts].
+* xref:../../edge_computing/ztp-vdu-validating-cluster-tuning.adoc#ztp-du-firmware-config-reference_vdu-config-ref[Recommended firmware configuration for vDU cluster hosts]

 include::modules/cnf-disabling-interrupt-processing-for-individual-pods.adoc[leveloffset=+1]

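The tuning options listed in the paragraph above correspond roughly to these `PerformanceProfile` fields; the CPU ranges are invented for illustration and are not part of this commit:

[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: performance
spec:
  realTimeKernel:
    enabled: true      # switch the tuned nodes to kernel-rt
  cpu:
    reserved: "0-3"    # housekeeping duties, including pod infra containers
    isolated: "4-31"   # application containers running the workloads
    offlined: "32-47"  # unused CPUs taken offline to reduce power consumption
# ...
----
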
scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc

+7-7
@@ -7,7 +7,7 @@ include::_attributes/common-attributes.adoc[]
 toc::[]

 Tune nodes for low latency by using the cluster performance profile.
-You can restrict CPUs for infra and application containers, configure huge pages and Hyper-Threading, and configure CPU partitions for latency-sensitive processes.
+You can restrict CPUs for infra and application containers, configure huge pages, Hyper-Threading, and configure CPU partitions for latency-sensitive processes.

 [role="_additional-resources"]
 .Additional resources
@@ -21,22 +21,22 @@ Learn about the Performance Profile Creator (PPC) and how you can use it to crea

 [NOTE]
 ====
-Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles.
+Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the required behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles.
 ====

 include::modules/cnf-about-the-profile-creator-tool.adoc[leveloffset=+2]

-include::modules/cnf-gathering-data-about-cluster-using-must-gather.adoc[leveloffset=+3]
+include::modules/cnf-gathering-data-about-cluster-using-must-gather.adoc[leveloffset=+2]

-include::modules/cnf-running-the-performance-creator-profile.adoc[leveloffset=+3]
+include::modules/cnf-running-the-performance-creator-profile.adoc[leveloffset=+2]

 [role="_additional-resources"]
 .Additional resources

 * For more information about the `must-gather` tool,
 see xref:../../support/gathering-cluster-data.adoc#nodes-nodes-managing[Gathering data about your cluster].

-include::modules/cnf-how-run-podman-to-create-profile.adoc[leveloffset=+4]
+include::modules/cnf-how-run-podman-to-create-profile.adoc[leveloffset=+3]

 include::modules/cnf-running-the-performance-creator-profile-offline.adoc[leveloffset=+3]

@@ -62,13 +62,13 @@ include::modules/cnf-configuring-power-saving-for-nodes.adoc[leveloffset=+1]
 [role="_additional-resources"]
 .Additional resources

-* xref:../../scalability_and_performance/low_latency_tuning/cnf-provisioning-low-latency-workloads.adoc#cnf-configuring-high-priority-workload-pods_cnf-provisioning-low-latency[Configuring high priority workload pods]
+* xref:../../scalability_and_performance/low_latency_tuning/cnf-provisioning-low-latency-workloads.adoc#cnf-configuring-high-priority-workload-pods_cnf-provisioning-low-latency[Disabling power saving mode for high priority pods]

 * xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#managing-device-interrupt-processing-for-guaranteed-pod-isolated-cpus_cnf-low-latency-perf-profile[Managing device interrupt processing for guaranteed pod isolated CPUs]

 include::modules/cnf-cpu-infra-container.adoc[leveloffset=+1]

-include::modules/configuring_hyperthreading_for_a_cluster.adoc[leveloffset=+1]
+include::modules/cnf-configuring-hyperthreading-for-a-cluster.adoc[leveloffset=+1]

 include::modules/cnf-managing-device-interrupt-processing-for-guaranteed-pod-isolated-cpus.adoc[leveloffset=+1]

scalability_and_performance/low_latency_tuning/cnf-understanding-low-latency.adoc

+3-3
@@ -6,15 +6,15 @@ include::_attributes/common-attributes.adoc[]

 toc::[]

-Edge computing plays a key role in reducing latency and congestion problems and improving application performance for telco and 5G network applications.
+Edge computing has a key role in reducing latency and congestion problems and improving application performance for telco and 5G network applications.
 Maintaining a network architecture with the lowest possible latency is key for meeting the network performance requirements of 5G.
 Compared to 4G technology, with an average latency of 50 ms, 5G is targeted to reach latency of 1 ms or less. This reduction in latency boosts wireless throughput by a factor of 10.

 include::modules/cnf-understanding-low-latency.adoc[leveloffset=+1]

-include::modules/cnf-about_hyperthreading_for_low_latency_and_real_time_applications.adoc[leveloffset=+1]
+include::modules/cnf-about-hyperthreading-for-low-latency-and-real-time-applications.adoc[leveloffset=+1]

 [role="_additional-resources"]
 .Additional resources

-* xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#configuring_hyperthreading_for_a_cluster_cnf-low-latency-perf-profile[Configuring Hyper-Threading for a cluster]
+* xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-hyperthreading-for-a-cluster_cnf-low-latency-perf-profile[Configuring Hyper-Threading for a cluster]
