Document how to attach worker nodes to any control planes #2080
Comments
@michaelgugino to add in details on how OpenShift does this with Cluster API
@michaelgugino - I would be interested to hear how you approach this.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle frozen
/kind documentation
@vincepri: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed. In response to this: "/kind documentation"
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
/close
@vincepri: Closing this issue. In response to this: "/close"
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
User Story
I wanted to see if this use case has any merit.
A user could potentially choose to deploy the control plane for a cluster with non-cluster-api tools (kubespray, kops, and others), and then manage only the worker nodes with cluster-api.
Here's why a user might find this desirable:
Detailed Description
The cluster bootstrap would proceed with the control plane brought up out of band, after which cluster-api manages only the worker nodes. This is already possible with a few manual steps:
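What follows is a minimal sketch of what those manual steps could look like, not a verbatim procedure from this issue: the cluster name (`my-cluster`), namespace, kubeconfig path, and the v1alpha2 `apiVersion` are all illustrative assumptions.

```bash
# Sketch only: names, namespace, apiVersion, and paths are assumptions.

# 1. Create a Cluster object representing the externally managed control plane.
cat <<EOF | kubectl apply -f -
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: my-cluster
  namespace: default
EOF

# 2. Provide a kubeconfig for the external control plane. Cluster API
#    conventionally reads it from a secret named "<cluster-name>-kubeconfig",
#    with the kubeconfig stored under the "value" key.
kubectl create secret generic my-cluster-kubeconfig \
  --namespace default \
  --from-file=value=/path/to/external-admin.conf

# 3. Mark the control plane as initialized and record its endpoint in
#    Cluster.Status (see the status-subresource caveat below), then create
#    worker Machines/MachineDeployments as usual.
```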
This will allow the bootstrapper to skip this logic (https://github.com/kubernetes-sigs/cluster-api-bootstrap-provider-kubeadm/blob/master/controllers/kubeadmconfig_controller.go#L186) and proceed with setting up worker nodes.
Anything else you would like to add:
Because the cluster-api CRDs are created with the status subresource enabled, the status field cannot be directly modified by kubectl commands.
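For illustration, here is one way the status could be written through the status subresource. The `--subresource` flag is an assumption requiring kubectl v1.24 or newer (it did not exist when this issue was filed); the field names follow the v1alpha2 `Cluster.Status` fields referenced in this issue, and the endpoint address is made up.

```bash
# Assumes kubectl >= 1.24 for --subresource.
kubectl patch clusters.cluster.x-k8s.io my-cluster --subresource=status --type=merge \
  -p '{"status":{"controlPlaneInitialized":true,"apiEndpoints":[{"host":"203.0.113.10","port":6443}]}}'

# Older clients can PATCH the /status endpoint directly, e.g. via a proxy:
kubectl proxy &
curl -X PATCH -H 'Content-Type: application/merge-patch+json' \
  --data '{"status":{"controlPlaneInitialized":true}}' \
  http://127.0.0.1:8001/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/clusters/my-cluster/status
```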
Annotations could potentially be set for `controlPlaneInitialized` and `apiEndpoints` to allow manual overrides of those fields on `Cluster.Status`; a hypothetical sketch follows below.

/kind feature
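The issue does not define annotation names, so the keys below are invented purely for illustration; a controller that reconciles them into `Cluster.Status` is likewise assumed, not something that exists.

```bash
# Hypothetical annotation keys -- not part of any released Cluster API;
# invented here purely to illustrate the override proposal.
kubectl annotate clusters.cluster.x-k8s.io my-cluster \
  "cluster.x-k8s.io/control-plane-initialized-override=true" \
  "cluster.x-k8s.io/api-endpoint-override=203.0.113.10:6443"
```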