Cluster with reused name in different namespace fails to boot #969
The cluster name needs to be unique per <AWS account, region>.
Currently we treat the cluster name as a unique value. We have a few options here:
@ncdc that limitation only exists because of the current design, but we do need to set a unique "cluster name" that is used by the in-tree or external cloud-provider integration.
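For context on why the cloud-provider integration forces this: the AWS cloud provider discovers cluster-owned resources by a tag keyed on the cluster name alone. A minimal sketch, assuming the documented `kubernetes.io/cluster/<cluster-name>` tag convention (the helper name is hypothetical):

```go
package main

import "fmt"

// clusterTagKey builds the resource tag key the AWS cloud provider uses to
// discover cluster-owned resources. The key format follows the documented
// kubernetes.io/cluster/<cluster-name> convention.
func clusterTagKey(clusterName string) string {
	return fmt.Sprintf("kubernetes.io/cluster/%s", clusterName)
}

func main() {
	// Two Cluster objects that share a name but live in different namespaces
	// would collide on this tag key, since only the name participates.
	fmt.Println(clusterTagKey("my-cluster"))
}
```

Because namespace never enters the key, two clusters named `my-cluster` in different namespaces would claim each other's AWS resources.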
@ncdc which is still per account+region, unless I'm mistaken?
@detiber correct
I really like the 3rd option @detiber suggested, which might be part of a larger naming refactor. AWS resources are notoriously limited in how many characters they can use; it'd be great if we could come up with a consistent naming scheme that can be used across all resources and solves this issue as well.
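One common shape for such a scheme is a readable prefix plus a short content hash, so names stay unique per namespace while fitting AWS length limits. A minimal sketch, assuming an illustrative 32-character cap (the function name and limit are assumptions, not the project's actual scheme):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// maxNameLen is an illustrative cap; real AWS limits vary per resource
// (classic ELB names, for example, allow at most 32 characters).
const maxNameLen = 32

// scopedName is a hypothetical helper: it derives a resource name from the
// namespace and cluster name, truncating the readable prefix and appending a
// short hash so two clusters with the same name in different namespaces
// never collide, while staying within the length limit.
func scopedName(namespace, name string) string {
	sum := sha256.Sum256([]byte(namespace + "/" + name))
	suffix := hex.EncodeToString(sum[:])[:8]
	prefix := namespace + "-" + name
	if max := maxNameLen - len(suffix) - 1; len(prefix) > max {
		prefix = prefix[:max]
	}
	return prefix + "-" + suffix
}

func main() {
	// Same cluster name, different namespaces: distinct derived names.
	fmt.Println(scopedName("default", "my-cluster"))
	fmt.Println(scopedName("team-a", "my-cluster"))
}
```

The hash suffix carries the uniqueness, so truncating the prefix for length never reintroduces a collision.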
/priority important-soon |
/assign |
/remove-lifecycle active |
Going through open unassigned issues in the v0.5.0 milestone. We have a decent amount of work left to do on CAPI features (control plane, clusterctl, etc.). While this is an unfortunately ugly bug, I think we need to defer it to v0.5.
/milestone Next
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle frozen |
/remove-lifecycle frozen |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/kind bug
What steps did you take and what happened:
What did you expect to happen:
A new VPC, set of machines, and associated resources should have been booted.
Anything else you would like to add:
Environment:
- Kubernetes version: (use kubectl version):
- OS (e.g. from /etc/os-release):