Prevent upgrades of managed topologies while previous upgrade is not yet completed #6651
Comments
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
I kind of have the impression that we should avoid locking down the process too much unless we have strong reasons to do so, but at the same time it makes sense to enforce that we are respecting version skew rules (which is also tracked in #7011). Let's consider whether to scope this down and dedup.
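As a side note on the skew rules mentioned above, here is a minimal sketch of how such a check could look, assuming a configurable maximum minor-version skew between workers and control plane. The withinVersionSkew function and its parameters are illustrative only, not part of cluster-api's actual API; the blang/semver library is used for parsing.

```go
package main

import (
	"fmt"

	"github.com/blang/semver/v4"
)

// withinVersionSkew reports whether a worker (MachineDeployment) version is an
// acceptable distance behind the control plane version. maxMinorSkew is the
// number of minor versions workers may trail by; the exact limit depends on the
// Kubernetes version skew policy in effect. (Illustrative sketch only.)
func withinVersionSkew(controlPlane, worker semver.Version, maxMinorSkew uint64) bool {
	if worker.Major != controlPlane.Major {
		return false
	}
	// Workers must never be newer than the control plane.
	if worker.Minor > controlPlane.Minor {
		return false
	}
	return controlPlane.Minor-worker.Minor <= maxMinorSkew
}

func main() {
	cp := semver.MustParse("1.25.0")
	md := semver.MustParse("1.22.0")
	fmt.Println(withinVersionSkew(cp, md, 2)) // false: workers trail by 3 minors
}
```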
(doing some cleanup on old issues without updates)
@fabriziopandini: Closing this issue.
Just wanted to add that I now think the current behavior is actually a feature. Basically, you can trigger another upgrade without having to wait for the previous one to finish. This is useful to avoid redundant rollouts, and it also allows bumping to a higher patch version if something was wrong with the previous one that might have made it impossible to upgrade a Machine.
What steps did you take and what happened:
If the topology version is upgraded to v2 while the cluster is in the middle of upgrading from v0 to v1, the control plane will eventually pick up v2 after it has reached v1, but before the MachineDeployments have been upgraded to v1.
This leads to the MachineDeployments eventually being upgraded from v0 to v2, skipping v1 entirely.
What did you expect to happen:
The upgrade to v1 should be completely finished before the cluster starts upgrading to v2.
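To make the expected guard concrete, here is a minimal sketch of how topology reconciliation could hold back a newly desired version while the previous upgrade is still rolling out. The function names (upgradeInProgress, nextControlPlaneVersion) and their inputs are hypothetical and not cluster-api's actual implementation; the blang/semver library is used only for version comparison.

```go
package main

import (
	"fmt"

	"github.com/blang/semver/v4"
)

// upgradeInProgress reports whether any MachineDeployment still lags behind the
// version the control plane has already rolled out, i.e. the previous upgrade
// has not completed yet. (Illustrative sketch only.)
func upgradeInProgress(controlPlaneVersion string, mdVersions []string) (bool, error) {
	cp, err := semver.ParseTolerant(controlPlaneVersion)
	if err != nil {
		return false, err
	}
	for _, v := range mdVersions {
		md, err := semver.ParseTolerant(v)
		if err != nil {
			return false, err
		}
		if md.LT(cp) {
			return true, nil
		}
	}
	return false, nil
}

// nextControlPlaneVersion holds back the newly desired topology version until
// the in-flight upgrade has finished, so MachineDeployments never skip a version.
func nextControlPlaneVersion(topologyVersion, controlPlaneVersion string, mdVersions []string) (string, error) {
	busy, err := upgradeInProgress(controlPlaneVersion, mdVersions)
	if err != nil {
		return "", err
	}
	if busy {
		// e.g. topology already bumped to v2, but workers are still on v0 while
		// the control plane is on v1: keep reconciling towards v1 for now.
		return controlPlaneVersion, nil
	}
	return topologyVersion, nil
}

func main() {
	v, _ := nextControlPlaneVersion("v1.23.0", "v1.22.0", []string{"v1.21.0"})
	fmt.Println(v) // "v1.22.0": the new version is deferred until workers catch up
}
```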
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):

/kind bug
/area topology