
Commit f4d4f37

Update docs
1 parent 07cf06f commit f4d4f37

8 files changed: +84 −41 lines

README.md (+2 −1)

````diff
@@ -21,7 +21,8 @@ represents current cluster, it is treated as management and workload cluster at
 
 Controllers design can be found here:
 - [ClusterOperator Controller](docs/controllers/clusteroperator.md)
-- [Cluster Controller](docs/controllers/cluster.md)
+- [Core cluster Controller](docs/controllers/core-cluster.md)
+- [Infra cluster Controller](docs/controllers/infra-cluster.md)
 - [Secret sync Controller](docs/controllers/secretsync.md)
 - [Kubeconfig Controller](docs/controllers/kubeconfig.md)
 
````

docs/controllers/cluster.md (deleted, −29 lines)

docs/controllers/clusteroperator.md (+1 −1)

````diff
@@ -2,7 +2,7 @@
 
 ## Overview
 
-Cluster operator controller is responsible for managing the CoreProvider and InfrastructureProvider CRs.
+[Cluster operator controller](../../pkg/controllers/clusteroperator/clusteroperator_controller.go) is responsible for managing the CoreProvider and InfrastructureProvider CRs.
 These CRs are later reconciled by the upstream [Cluster API Operator](https://github.com/kubernetes-sigs/cluster-api-operator).
 
 ## Behavior
````

docs/controllers/core-cluster.md (new file, +20 lines)

# Core Cluster controller

## Overview

[Core cluster controller](../../pkg/controllers/cluster/infra.go) is responsible for managing Cluster CRs. The cluster object
represents the current cluster where the operator is running, because we treat this cluster as both [management and workload](https://cluster-api.sigs.k8s.io/user/concepts.html#management-cluster).
Its only purpose is to set the `ControlPlaneInitialized` condition to true, in order to make Cluster API move the cluster
to the provisioned phase. We don't manage control plane machines using Cluster API at the moment.

## Behavior

```mermaid
stateDiagram-v2
    [*] --> GetCluster
    GetCluster --> IsDeletionTimestampPresent
    state IsDeletionTimestampPresent <<choice>>
    IsDeletionTimestampPresent --> [*]: True
    IsDeletionTimestampPresent --> SetControlPlaneInitializedCondition: False
    SetControlPlaneInitializedCondition --> [*]
```

docs/controllers/infra-cluster.md (new file, +22 lines)

# Infra cluster controller

## Overview

[Infra cluster controller](../../pkg/controllers/cluster/infra.go) is responsible for managing InfrastructureCluster (AWSCluster, etc.) CRs.
The infrastructure cluster object represents the [cloud infrastructure (AWS, Azure, GCP, etc.)](https://cluster-api.sigs.k8s.io/user/concepts.html#infrastructure-provider) where the current cluster is running.

The controller sets the [externally managed](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20210203-externally-managed-cluster-infrastructure.md) annotation `"cluster.x-k8s.io/managed-by"` and sets `Status.Ready` to `true`, which indicates that the cluster is managed by this controller rather than by the CAPI infrastructure provider.

## Behavior

```mermaid
stateDiagram-v2
    [*] --> GetInfraCluster
    GetInfraCluster --> IsDeletionTimestampPresent
    state IsDeletionTimestampPresent <<choice>>
    IsDeletionTimestampPresent --> [*]: True
    IsDeletionTimestampPresent --> SetExternallyManagedAnnotation: False
    SetExternallyManagedAnnotation --> SetInfrastructureClusterStatusReady
    SetInfrastructureClusterStatusReady --> [*]
```
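The two mutation steps in the diagram can be sketched in plain Go. The `infraCluster` struct is a hypothetical stand-in for a provider CR such as AWSCluster; the annotation key matches the upstream `clusterv1.ManagedByAnnotation`:

```go
package main

import "fmt"

// ManagedByAnnotation is the externally managed annotation key
// (clusterv1.ManagedByAnnotation upstream).
const ManagedByAnnotation = "cluster.x-k8s.io/managed-by"

// infraCluster is a hypothetical stand-in for an infrastructure
// cluster CR such as AWSCluster.
type infraCluster struct {
	Annotations map[string]string
	StatusReady bool
}

// markExternallyManaged mirrors the state diagram: add the externally
// managed annotation, then set Status.Ready so CAPI treats the
// infrastructure as provisioned outside its own providers.
func markExternallyManaged(ic *infraCluster) {
	if ic.Annotations == nil {
		ic.Annotations = map[string]string{}
	}
	ic.Annotations[ManagedByAnnotation] = "" // presence of the key is what matters
	ic.StatusReady = true
}

func main() {
	ic := &infraCluster{}
	markExternallyManaged(ic)
	fmt.Println(ic.StatusReady) // true
}
```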

docs/controllers/kubeconfig.md (+14 −6)

````diff
@@ -2,7 +2,7 @@
 
 ## Overview
 
-Kubeconfig controller generates a secret containing kubeconfig for the cluster. The kubeconfig is generated from the service account for operator. The kubeconfig is consumed by core CAPI controllers to link nodes and machines.
+[Kubeconfig controller](../../pkg/controllers/kubeconfig/kubeconfig.go) generates a secret containing a kubeconfig for the cluster. The kubeconfig is generated from the operator's service account and is consumed by core CAPI controllers to link nodes and machines.
 
 ## Behavior
 
@@ -11,13 +11,21 @@ stateDiagram-v2
     [*] --> IsCurrentPlatformSupported
     state IsCurrentPlatformSupported <<choice>>
     IsCurrentPlatformSupported --> NoOp: False
-    IsCurrentPlatformSupported --> GetOperatorServiceAccount: True
-    GetOperatorServiceAccount --> GetServiceAccountSecret
-    GetServiceAccountSecret --> GenerateKubeconfigFromSecret
-    GenerateKubeconfigFromSecret --> CreateKubeconfigSecret
-    CreateKubeconfigSecret --> [*]
+    IsCurrentPlatformSupported --> GetOperatorServiceAccountSecret: True
+    GetOperatorServiceAccountSecret --> IsServiceAccountSecretFound
+    IsServiceAccountSecretFound --> IsServiceAccountSecretTooOld: True
+    IsServiceAccountSecretTooOld --> GenerateKubeconfig: False
+    GenerateKubeconfig --> [*]
+    IsServiceAccountSecretFound --> Requeue: False
+    Requeue --> GetOperatorServiceAccountSecret
+    IsServiceAccountSecretTooOld --> DeleteServiceAccountSecret: True
+    DeleteServiceAccountSecret --> Requeue
     NoOp --> [*]
 ```
 
 If the current platform is not supported, the controller will not create any secret and allow "bring your own" scenarios.
 In cases where the platform is supported, the controller will create the secret containing kubeconfig.
+
+The controller also manages rotation of the service account secret that was initially created by the CVO. The token in the secret can expire and has to
+be rotated. The controller periodically checks whether the secret is too old; if so, it deletes the secret and waits for
+the CVO to create a new one.
````

docs/controllers/secretsync.md (+1 −1)

````diff
@@ -2,7 +2,7 @@
 
 ## Overview
 
-Secret sync controller is responsible for syncing `worker-user-data` secret that is created by installer in `openshift-machine-api` namespace. The secret is used to store ignition configuration data for worker nodes.
+[Secret sync controller](../../pkg/controllers/secretsync/secret_sync_controller.go) is responsible for syncing the `worker-user-data` secret that is created by the installer in the `openshift-machine-api` namespace. The secret is used to store ignition configuration data for worker nodes.
 
 ## Behavior
````
docs/provideronboarding.md (+24 −3)

````diff
@@ -10,7 +10,7 @@ In order to onboard a new CAPI provider, the following steps are required.
 - Create an `openshift/` directory in the provider repository and make sure it includes:
 - A [script](https://github.com/openshift/cluster-api-provider-azure/blob/master/openshift/unit-tests.sh) for running unit tests; it's required because of an issue with `$HOME` in the CI container.
 - `Dockerfile.openshift`: this Dockerfile will be used for downstream builds. The provider controller binary must be called
-`cluster-api-provider-$providername-controller-manager` and be located in the `/bin/` directory.
+`cluster-api-provider-$providername-controller-manager` and be located in the `/bin/` directory. [Example Dockerfile](https://github.com/openshift/cluster-api-provider-azure/blob/master/openshift/Dockerfile.openshift).
 
 After the provider fork is set up, you should onboard it to [Openshift CI](https://docs.ci.openshift.org/docs/how-tos/onboarding-a-new-component/) and make appropriate ART requests for downstream builds.
 
@@ -33,7 +33,28 @@ If you wish to make development of your provider easier, you can include a publi
 
 ## Add infrastructure cluster to the cluster controller
 
-Cluster API requires an infrastructure cluster object to be present. In order to support the provider, you need to infrastructure cluster reconciliation for your provider. See `controllers/cluster/aws.go` for reference. It's important to
-note that the cluster must have externally managed annotation `"cluster.x-k8s.io/managed-by"`(clusterv1.ManagedByAnnotation)
+Cluster API requires an infrastructure cluster object to be present. We are using the [externally managed infrastructure](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20210203-externally-managed-cluster-infrastructure.md)
+feature to manage all the infrastructure clusters on Openshift. This means that
+the cluster must have the externally managed annotation `"cluster.x-k8s.io/managed-by"` (clusterv1.ManagedByAnnotation)
 and `Status.Ready=true` to indicate that cluster object is managed by this controller and not by the
 CAPI infrastructure provider.
+
+In order to add a new infrastructure cluster to the cluster controller, you need to set up the reconciler in `main.go`
+like this:
+
+```golang
+func setupInfraClusterReconciler(mgr manager.Manager, platform configv1.PlatformType) {
+	switch platform {
+	...
+	case configv1.YourPlatformType:
+		if err := (&cluster.GenericInfraClusterReconciler{
+			ClusterOperatorStatusClient: getClusterOperatorStatusClient(mgr, "cluster-capi-operator-infra-cluster-resource-controller"),
+			InfraCluster:                &platformv1.YourPlatformCluster{},
+		}).SetupWithManager(mgr); err != nil {
+			klog.Error(err, "unable to create controller", "controller", "YourPlatformCluster")
+			os.Exit(1)
+		}
+	...
+	}
+}
+```
````
