# docs/controllers/clusteroperator.md

## Overview

[Cluster operator controller](../../pkg/controllers/clusteroperator/clusteroperator_controller.go) is responsible for managing the CoreProvider and InfrastructureProvider CRs.
These CRs are later reconciled by the upstream [Cluster API Operator](https://github.com/kubernetes-sigs/cluster-api-operator).

[Core cluster controller](../../pkg/controllers/cluster/infra.go) is responsible for managing Cluster CRs. The cluster object represents the current cluster where the operator is running, because we treat this cluster as both [management and workload](https://cluster-api.sigs.k8s.io/user/concepts.html#management-cluster). Its only purpose is to set the `ControlPlaneInitialized` condition to true, in order to make Cluster API move the cluster to the provisioned phase. We don't manage control plane machines using Cluster API at this time.
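The shape of that condition update can be sketched as follows. This is a minimal self-contained illustration using simplified stand-in types; the real controller works with the `clusterv1` API types and the `sigs.k8s.io/cluster-api/util/conditions` helpers rather than the hypothetical `markTrue` below.

```go
package main

import "fmt"

// Condition and Cluster are simplified stand-ins for the Cluster API types.
type Condition struct {
	Type   string
	Status string // "True" or "False"
}

type Cluster struct {
	Conditions []Condition
}

// markTrue sets the given condition to "True", updating it in place if it
// already exists instead of appending a duplicate.
func markTrue(c *Cluster, condType string) {
	for i := range c.Conditions {
		if c.Conditions[i].Type == condType {
			c.Conditions[i].Status = "True"
			return
		}
	}
	c.Conditions = append(c.Conditions, Condition{Type: condType, Status: "True"})
}

func main() {
	cluster := &Cluster{}
	// The core cluster controller's whole job: mark ControlPlaneInitialized
	// true so Cluster API moves the cluster to the provisioned phase.
	markTrue(cluster, "ControlPlaneInitialized")
	fmt.Println(cluster.Conditions[0].Type, cluster.Conditions[0].Status)
	// → ControlPlaneInitialized True
}
```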

[Infra cluster controller](../../pkg/controllers/cluster/infra.go) is responsible for managing InfrastructureCluster (AWSCluster, etc.) CRs. The infrastructure cluster object represents the [infrastructure of the cloud (AWS, Azure, GCP, etc.)](https://cluster-api.sigs.k8s.io/user/concepts.html#infrastructure-provider) where the current cluster is running.

The controller sets the cluster's [externally managed](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20210203-externally-managed-cluster-infrastructure.md) annotation `"cluster.x-k8s.io/managed-by"` and `Status.Ready` to `true`, which indicates that the cluster is managed by the current controller and not by the CAPI infrastructure provider.
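A sketch of that update, using a simplified stand-in type rather than a real provider API type such as `AWSCluster` (the annotation key matches Cluster API's `clusterv1.ManagedByAnnotation`; the empty annotation value is an assumption for illustration, as the proposal only requires the annotation's presence):

```go
package main

import "fmt"

// InfraCluster is a simplified stand-in for an infrastructure cluster object
// (e.g. AWSCluster); the real controller patches the provider's API type.
type InfraCluster struct {
	Annotations map[string]string
	Status      struct{ Ready bool }
}

// managedByAnnotation is the key Cluster API defines as clusterv1.ManagedByAnnotation.
const managedByAnnotation = "cluster.x-k8s.io/managed-by"

// markExternallyManaged flags the object as externally managed and ready, so
// the CAPI infrastructure provider leaves it alone.
func markExternallyManaged(ic *InfraCluster) {
	if ic.Annotations == nil {
		ic.Annotations = map[string]string{}
	}
	ic.Annotations[managedByAnnotation] = ""
	ic.Status.Ready = true
}

func main() {
	ic := &InfraCluster{}
	markExternallyManaged(ic)
	fmt.Println(ic.Status.Ready) // → true
}
```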

# docs/controllers/kubeconfig.md

## Overview

[Kubeconfig controller](../../pkg/controllers/kubeconfig/kubeconfig.go) generates a secret containing a kubeconfig for the cluster. The kubeconfig is generated from the operator's service account and is consumed by the core CAPI controllers to link nodes and machines.
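The essence of that generation step is assembling a kubeconfig around the service account's token and the cluster's CA. The sketch below is illustrative only: the real controller builds the config with `k8s.io/client-go` types and marshals it to YAML, whereas this self-contained version renders the same structure as a string, with made-up server, CA, and token values.

```go
package main

import "fmt"

// buildKubeconfig renders a minimal kubeconfig that authenticates with a
// service account token. All parameter values used below are illustrative.
func buildKubeconfig(server, caDataB64, token string) string {
	return fmt.Sprintf(`apiVersion: v1
kind: Config
clusters:
- name: cluster
  cluster:
    server: %s
    certificate-authority-data: %s
contexts:
- name: default
  context:
    cluster: cluster
    user: operator
current-context: default
users:
- name: operator
  user:
    token: %s
`, server, caDataB64, token)
}

func main() {
	// Hypothetical values; the controller reads these from the cluster and
	// the service account secret.
	fmt.Print(buildKubeconfig("https://api.example.com:6443", "Q0EgZGF0YQ==", "sa-token"))
}
```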
If the current platform is not supported, the controller will not create any secret, allowing "bring your own" scenarios. In cases where the platform is supported, the controller will create the secret containing the kubeconfig.

The controller also manages rotation of the service account secret that was initially created by the CVO. The token in the secret can expire and has to be rotated. The controller periodically checks whether the secret is too old; if so, it deletes the secret and waits for a new one to be created.

# docs/controllers/secretsync.md

## Overview

[Secret sync controller](../../pkg/controllers/secretsync/secret_sync_controller.go) is responsible for syncing the `worker-user-data` secret that is created by the installer in the `openshift-machine-api` namespace. The secret is used to store ignition configuration data for worker nodes.
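The core of such a sync is copying the source secret's data and detecting drift. The sketch below uses a simplified stand-in for `corev1.Secret` (just its data map) and a hypothetical `syncSecret` helper; the real controller reconciles actual Secret objects via controller-runtime.

```go
package main

import "fmt"

// Secret is a simplified stand-in for corev1.Secret: only its data payload.
type Secret map[string][]byte

// syncSecret copies source data into target, reporting whether anything changed,
// which is the signal for the controller to issue an update.
func syncSecret(source, target Secret) bool {
	changed := false
	for k, v := range source {
		if string(target[k]) != string(v) {
			target[k] = append([]byte(nil), v...)
			changed = true
		}
	}
	return changed
}

func main() {
	src := Secret{"userData": []byte(`{"ignition":{"version":"3.2.0"}}`)}
	dst := Secret{}
	fmt.Println(syncSecret(src, dst)) // → true  (first sync copies the data)
	fmt.Println(syncSecret(src, dst)) // → false (already in sync, no-op)
}
```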

# docs/provideronboarding.md

In order to onboard a new CAPI provider, the following steps are required.

- Create an `openshift/` directory in the provider repository and make sure it includes:
  - A [script](https://github.com/openshift/cluster-api-provider-azure/blob/master/openshift/unit-tests.sh) for running unit tests; it's required because of an issue with `$HOME` in the CI container.
  - `Dockerfile.openshift`: this Dockerfile will be used for downstream builds. The provider controller binary must be called `cluster-api-provider-$providername-controller-manager` and be located in the `/bin/` directory. [Example Dockerfile](https://github.com/openshift/cluster-api-provider-azure/blob/master/openshift/Dockerfile.openshift).

After the provider fork is set up, you should onboard it to [OpenShift CI](https://docs.ci.openshift.org/docs/how-tos/onboarding-a-new-component/) and make the appropriate ART requests for downstream builds.

If you wish to make development of your provider easier, you can include a publi…
## Add infrastructure cluster to the cluster controller

Cluster API requires an infrastructure cluster object to be present. We are using the [externally managed infrastructure](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20210203-externally-managed-cluster-infrastructure.md) feature to manage all the infrastructure clusters on OpenShift. It means that the cluster must have the externally managed annotation `"cluster.x-k8s.io/managed-by"` (`clusterv1.ManagedByAnnotation`) and `Status.Ready=true` to indicate that the cluster object is managed by this controller and not by the CAPI infrastructure provider.

In order to add a new infrastructure cluster to the cluster controller, you need to set up the reconciler in `main.go`.