Orphaned CloudStack VM's present in slow CloudStack environments #190
Comments
This issue is also similar to kubernetes-sigs/cluster-api#7237 in that it's due to the MHC acting aggressively during VM creation, causing a race condition between the CAPI resources and the K8s/ACS resources. With #190 fixed, we may still see orphaned K8s nodes, but they will not have ACS VMs associated with them. Addressing aws/eks-anywhere#3918 may also help minimize the number of orphaned K8s nodes.
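One CAPI-side knob that controls how aggressive the MHC is during startup is the MachineHealthCheck's `nodeStartupTimeout`. Below is a minimal Go sketch using the cluster-api v1beta1 types; the object names, selector, and timeout values are illustrative assumptions for a slow environment, not values taken from this issue or from eks-anywhere#3918:

```go
package example

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// workerMHC builds a MachineHealthCheck whose nodeStartupTimeout is generous
// enough for a slow CloudStack environment, so the MHC does not delete the
// CAPI Machine while the ACS VM is still being created.
func workerMHC() *clusterv1.MachineHealthCheck {
	return &clusterv1.MachineHealthCheck{
		ObjectMeta: metav1.ObjectMeta{Name: "workers-mhc", Namespace: "default"},
		Spec: clusterv1.MachineHealthCheckSpec{
			ClusterName: "my-cluster",
			Selector: metav1.LabelSelector{
				MatchLabels: map[string]string{
					"cluster.x-k8s.io/deployment-name": "workers",
				},
			},
			// Allow slow environments more time before the Machine is
			// considered failed and remediated (deleted).
			NodeStartupTimeout: &metav1.Duration{Duration: 30 * time.Minute},
			UnhealthyConditions: []clusterv1.UnhealthyCondition{
				{
					Type:    corev1.NodeReady,
					Status:  corev1.ConditionFalse,
					Timeout: metav1.Duration{Duration: 10 * time.Minute},
				},
			},
		},
	}
}
```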
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/kind bug
What steps did you take and what happened:
In slow CloudStack environments, we have observed a race condition between machine deletion and VM creation: the CAPI Machine can be deleted (for example by an aggressive MachineHealthCheck) while the CloudStack VM is still being created, before `Spec.InstanceID` has been recorded on the CloudStackMachine. The deletion logic then has no instance ID to act on, so the CloudStack VM is left running with no corresponding CloudStackMachine.
What did you expect to happen:
I expected there to be no rogue VMs and a 1:1 mapping between CloudStackMachine objects and CloudStack VMs.
Anything else you would like to add:
The resolution is to modify the VM deletion logic: in cloudstackmachine_controller, if `Spec.InstanceID` is not set, look it up by VM name by calling `listVirtualMachines`, then proceed to call `DestroyVM` in all situations.
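A minimal sketch of that lookup-then-destroy path, written directly against the cloudstack-go SDK rather than CAPC's own cloud-client wrappers; the helper name, the expunge choice, and the overall structure are illustrative assumptions, not the actual cloudstackmachine_controller code:

```go
package cloud

import (
	"fmt"

	"github.com/apache/cloudstack-go/v2/cloudstack"
)

// resolveAndDestroyVM is an illustrative helper for the proposed delete path:
// if Spec.InstanceID was never recorded (creation raced with deletion), the VM
// is resolved by name via listVirtualMachines before DestroyVM is called.
func resolveAndDestroyVM(cs *cloudstack.CloudStackClient, instanceID, vmName string) error {
	if instanceID == "" {
		p := cs.VirtualMachine.NewListVirtualMachinesParams()
		p.SetName(vmName)
		resp, err := cs.VirtualMachine.ListVirtualMachines(p)
		if err != nil {
			return fmt.Errorf("listing VMs named %q: %w", vmName, err)
		}
		if resp.Count == 0 {
			// No VM was ever created for this machine; nothing to clean up.
			return nil
		}
		instanceID = resp.VirtualMachines[0].Id
	}

	// Destroy (and expunge) the VM in all situations, so a deletion that
	// arrives mid-creation can no longer leave an orphaned instance behind.
	dp := cs.VirtualMachine.NewDestroyVirtualMachineParams(instanceID)
	dp.SetExpunge(true)
	if _, err := cs.VirtualMachine.DestroyVirtualMachine(dp); err != nil {
		return fmt.Errorf("destroying VM %s: %w", instanceID, err)
	}
	return nil
}
```

Destroying unconditionally trades one extra `listVirtualMachines` call (when the instance ID is missing) for the guarantee that a delete triggered mid-creation cannot orphan an ACS VM.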
Environment:
- Kubernetes version (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):