Commit f513e56

TELCODOCS-364, 370, 340, 261, 285: 4.10 consolidated RAN docs
1 parent 36ec506 commit f513e56

107 files changed: +5334 -1125 lines

_attributes/common-attributes.adoc

+7 -2

@@ -132,5 +132,10 @@ endif::[]
 :ibmzProductName: IBM Z
 // Red Hat Quay Container Security Operator
 :rhq-cso: Red Hat Quay Container Security Operator
-:sno: single-node OpenShift
-:sno-caps: Single-node OpenShift
+:sno: single-node Openshift
+:sno-caps: Single-node Openshift
+//TALO and Redfish events Operators
+:cgu-operator-first: Topology Aware Lifecycle Manager (TALM)
+:cgu-operator-full: Topology Aware Lifecycle Manager
+:cgu-operator: TALM
+:redfish-operator: Bare Metal Event Relay
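
These attribute definitions are consumed by other modules through AsciiDoc attribute substitution. A minimal sketch of how a module might reference the new attributes; the sentences are hypothetical, only the attribute names come from the diff above:

[source,asciidoc]
----
You can use {cgu-operator-first} to manage the rollout of cluster updates.
After the first mention, refer to it as {cgu-operator}.
The {redfish-operator} relays Redfish hardware events to applications on the cluster.
----

When rendered, `{cgu-operator-first}` expands to "Topology Aware Lifecycle Manager (TALM)" and `{cgu-operator}` expands to "TALM".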

_topic_maps/_topic_map.yml

+4

@@ -2223,6 +2223,8 @@ Topics:
   File: managing-alerts
 - Name: Reviewing monitoring dashboards
   File: reviewing-monitoring-dashboards
+- Name: Monitoring bare-metal events
+  File: using-rfhe
 - Name: Accessing third-party monitoring APIs
   File: accessing-third-party-monitoring-apis
 - Name: Troubleshooting monitoring issues
@@ -2278,6 +2280,8 @@ Topics:
   Distros: openshift-origin,openshift-enterprise
 - Name: Improving cluster stability in high latency environments using worker latency profiles
   File: scaling-worker-latency-profiles
+- Name: Topology Aware Lifecycle Manager for cluster updates
+  File: cnf-talm-for-cluster-upgrades
   Distros: openshift-origin,openshift-enterprise
 - Name: Creating a performance profile
   File: cnf-create-performance-profiles
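
Each entry in the topic map follows the same small schema that is visible in this hunk. An annotated example; the field descriptions are inferred from the entries above, not defined in this commit:

[source,yaml]
----
# Illustrative topic map entry; comments describe the inferred field meanings.
- Name: Topology Aware Lifecycle Manager for cluster updates  # title shown in the docs navigation
  File: cnf-talm-for-cluster-upgrades  # source file name, without the .adoc extension
  Distros: openshift-origin,openshift-enterprise  # optional: limit the topic to these distributions
----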
(binary image files changed, 75.9 KB; diffs not rendered)
@@ -0,0 +1,25 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/ztp-deploying-disconnected.adoc
+
+:_content-type: CONCEPT
+[id="about-ztp-and-distributed-units-on-openshift-clusters_{context}"]
+= About ZTP and distributed units on OpenShift clusters
+
+You can install a distributed unit (DU) on {product-title} clusters at scale with {rh-rhacm-first} by using the assisted installer (AI) and the policy generator with core-reduction technology enabled. The DU installation is done by using zero touch provisioning (ZTP) in a disconnected environment.
+
+{rh-rhacm} manages clusters in a hub-and-spoke architecture, where a single hub cluster manages many spoke clusters. {rh-rhacm} applies radio access network (RAN) policies from predefined custom resources (CRs). Hub clusters running {rh-rhacm} use ZTP and the AI to provision and deploy the spoke clusters. DU installation follows the AI installation of {product-title} on each cluster.
+
+The AI service handles provisioning of {product-title} on single-node clusters, three-node clusters, or standard clusters running on bare metal. {rh-rhacm} ships with and deploys the AI when the `MultiClusterHub` custom resource is installed.
+
+With ZTP and the AI, you can provision {product-title} clusters to run your DUs at scale. A high-level overview of ZTP for distributed units in a disconnected environment is as follows:
+
+* A hub cluster running {rh-rhacm-first} manages a disconnected internal registry that mirrors the {product-title} release images. The internal registry is used to provision the spoke clusters.
+
+* You manage the bare-metal host machines for your DUs in a YAML-formatted inventory file that you store in a Git repository.
+
+* You install the DU bare-metal host machines on site and make the hosts ready for provisioning. To be ready for provisioning, each bare-metal host requires the following:
+
+** Network connectivity, including DNS for your network. Hosts must be reachable by the hub and managed spoke clusters. Ensure there is layer 3 connectivity between the hub and the host where you want to install the spoke cluster.
+
+** Baseboard Management Controller (BMC) details for each host. ZTP uses the BMC URL and credentials to connect to the BMC of each host. ZTP manages the spoke cluster definition CRs, with the exception of the `BMCSecret` CR, which you create manually. These CRs define the relevant elements for the managed clusters.
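
The `BMCSecret` CR mentioned in the last bullet is the one piece you create by hand. A minimal sketch, assuming it is a standard Kubernetes `Secret` that carries the BMC username and password; every name and value below is a placeholder:

[source,yaml]
----
# Hypothetical example of a manually created BMC credentials secret.
# All names and values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: example-host-bmc-secret
  namespace: example-cluster
type: Opaque
data:
  username: cm9vdA==  # base64-encoded "root"
  password: c3VwZXJzZWNyZXQ=  # base64-encoded "supersecret"
----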

modules/baremetal-event-relay.adoc

+52

@@ -0,0 +1,52 @@
+// Module included in the following assemblies:
+//
+// * operators/operator-reference.adoc
+[id="baremetal-event-relay_{context}"]
+= {redfish-operator}
+
+[discrete]
+== Purpose
+The OpenShift {redfish-operator} manages the lifecycle of the Bare Metal Event Relay. The Bare Metal Event Relay enables you to configure the types of cluster events that are monitored by using Redfish hardware events.
+
+[discrete]
+== Configuration objects
+You can use the following command to edit the configuration after installation, for example, the webhook port:
+
+[source,terminal]
+----
+$ oc -n [namespace] edit cm hw-event-proxy-operator-manager-config
+----
+
+[source,yaml]
+----
+apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
+kind: ControllerManagerConfig
+health:
+  healthProbeBindAddress: :8081
+metrics:
+  bindAddress: 127.0.0.1:8080
+webhook:
+  port: 9443
+leaderElection:
+  leaderElect: true
+  resourceName: 6e7a703c.redhat-cne.org
+----
+
+[discrete]
+== Project
+link:https://github.com/redhat-cne/hw-event-proxy-operator[hw-event-proxy-operator]
+
+[discrete]
+== CRD
+The proxy enables applications that run on bare-metal clusters to respond quickly to Redfish hardware changes and failures, such as breaches of temperature thresholds, fan failure, disk loss, power outages, and memory failure, which are reported by using the `HardwareEvent` CR (see the sketch after this module).
+
+`hardwareevents.event.redhat-cne.org`:
+
+* Scope: Namespaced
+* CR: HardwareEvent
+* Validation: Yes
+
+[discrete]
+== Additional resources
+For more information, see xref:../monitoring/using-rfhe.adoc[Monitoring Redfish hardware events].
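
For context, a minimal sketch of the `HardwareEvent` CR named in the CRD section. The group and kind come from the CRD listed above; the API version, namespace, and spec fields are assumptions for illustration only:

[source,yaml]
----
# Illustrative HardwareEvent CR. Group and kind match the CRD above;
# the version and spec fields are assumed for this sketch.
apiVersion: event.redhat-cne.org/v1alpha1
kind: HardwareEvent
metadata:
  name: hardware-event
  namespace: openshift-bare-metal-events
spec:
  logLevel: debug  # assumed field: log verbosity for the event proxy
----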
