Ability to manage core cluster deployments and daemonsets prior to upgrading a ClusterClass managed cluster #5230
Comments
/area topology
Recap of the specific use case:
With regard to the detailed description: while I understand that this can fall into the bucket of the generic idea of lifecycle hooks, I personally think we should scope down the problem, because a CAPI cluster contains many objects and each one has its own lifecycle; it is impractical to tackle all of them together. A tricky point is also the fact that Cluster API does not own the lifecycle of the control plane, which is the responsibility of a provider (KCP in this example, but it could be anything). Clusters with ClusterClass/managed topologies could provide a solution to this point, but that solution won't work with unmanaged Clusters. Is this acceptable for this use case? Is it acceptable as a generic approach (lifecycle hooks only on managed topologies at the Cluster level)? Restricting the scope will also help in starting to figure out where those hooks should be implemented (in which controller), and how invasive this would be on the existing code. /milestone v1.0
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. /lifecycle stale
The Runtime SDK proposal is going to provide the foundation for this work; after it is merged we should check whether there are missing items to be addressed or whether we can close this issue.
@randomvariable can we re-assess the requirement here now that the Runtime SDK is available?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. /close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
User Story
As a vendor of a product using Cluster API, I would like to manually control the kube-proxy and CoreDNS deployments, as well as additional machine labels, during a ClusterClass-managed cluster upgrade. In this environment there is no registry from which the etcd, CoreDNS, and other OCI images can be retrieved; instead, all images are baked into the machine images using https://github.com/kubernetes-sigs/image-builder. We use the opt-out annotations of KCP to allow us to manage these addons ourselves.
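For illustration, a minimal sketch of what that opt-out can look like on the KubeadmControlPlane object, assuming the KCP skip annotations (`controlplane.cluster.x-k8s.io/skip-kube-proxy` and `controlplane.cluster.x-k8s.io/skip-coredns`) are the ones referred to; the object name and version are placeholders and most of the spec is omitted:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane          # placeholder
  annotations:
    # Opt out of KCP-managed kube-proxy and CoreDNS so we can apply
    # version-matched manifests from the images baked into the OS image.
    controlplane.cluster.x-k8s.io/skip-kube-proxy: ""
    controlplane.cluster.x-k8s.io/skip-coredns: ""
spec:
  version: v1.22.3                        # placeholder
  # replicas, machineTemplate and kubeadmConfigSpec omitted for brevity
```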
Because each OS image only contains the images for that specific Kubernetes version, we need to ensure that the correct CoreDNS Deployment and kube-proxy DaemonSet are applied. We use per-Kubernetes-version node selectors so that the applied pods land only on machines whose image was built (via image-builder) for the matching Kubernetes version.
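One way to express that targeting (a sketch, not taken from the issue): a per-version DaemonSet whose nodeSelector matches a custom node label carrying the Kubernetes version. The label key `example.com/kubernetes-version`, the names, and the image tag below are all hypothetical placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-proxy-v1-22-3                # hypothetical per-version DaemonSet
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy-v1-22-3
  template:
    metadata:
      labels:
        k8s-app: kube-proxy-v1-22-3
    spec:
      hostNetwork: true
      # Only schedule onto machines whose OS image was built for this version;
      # the label would be set via kubelet's --node-labels in the machine template.
      nodeSelector:
        example.com/kubernetes-version: v1.22.3   # hypothetical label
      containers:
      - name: kube-proxy
        image: k8s.gcr.io/kube-proxy:v1.22.3       # already present in the baked image
        command:
        - /usr/local/bin/kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
      # serviceAccount, volumes and tolerations omitted for brevity
```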
Detailed Description
Related to #5222
This is a specific use case, but in theory you could generalise it into a need for lifecycle hooks at various moments in the lifecycle of a cluster; the specific moment here is when the version field is changed on the ClusterClass-managed Cluster and before KCP makes any change to realise that state.
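For reference, a rough sketch of how this maps onto the Runtime SDK mentioned elsewhere in the thread: a runtime extension serving a `BeforeClusterUpgrade` hook is registered with an ExtensionConfig roughly like the one below. The extension name, Service, and namespace are placeholders, and the hooks themselves are advertised by the extension's discovery endpoint rather than listed in this object:

```yaml
apiVersion: runtime.cluster.x-k8s.io/v1alpha1
kind: ExtensionConfig
metadata:
  name: addon-upgrade-extension           # placeholder
spec:
  clientConfig:
    service:
      name: addon-upgrade-extension       # placeholder Service fronting the extension webhook
      namespace: capi-extensions          # placeholder namespace
      port: 443
  # Empty selector: call the extension for Clusters in all namespaces.
  namespaceSelector: {}
```

A `BeforeClusterUpgrade` handler would then run after `spec.topology.version` changes and before the control plane rollout starts, which would cover the window described above.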
Anything else you would like to add:
I'm also wondering whether, in this specific use case, we should be doing some of the above in KCP/ClusterClass anyway:
/kind feature