Orphaned resources remain with no Machine; refer to #1718
Comments
/milestone v0.3.0
@xrmzju What version of Cluster API are you using?
@xrmzju is this still an issue for you?
Doing some milestone grooming. We can bring this back if we hear back. /milestone Next
This is gonna be fixed in #1947
or at least part of it
Closing this for now, feel free to reopen if necessary /close
@vincepri: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What steps did you take and what happened:
We intend to create 3 objects in one replica reconcile process. If the infraConfig and bootstrapConfig are created successfully but the machine creation fails, the bootstrapConfig and infraConfig are left there forever. Forgive me if I missed the delete part in L303, but I do see a lot of bootstrap configs remaining in my env. I suspect the scale-down action deletes the machine only and leaves it to the GC controller to delete the infraConfig and bootstrapConfig; but if the machine has not yet been handled by the machine controller, no ownerRef is set on the infraConfig and bootstrapConfig, so the orphaned objects remain.
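For context, here is a minimal Go sketch of that failure mode. The `Client` and `Object` interfaces and the `createReplica` function are hypothetical stand-ins, not the actual Cluster API or controller-runtime code: three objects are created in sequence, and if the Machine create fails, the two configs created before it have to be cleaned up explicitly, because no ownerRef exists yet that would let garbage collection remove them.

```go
package machineset

import (
	"context"
	"fmt"
)

// Object and Client are hypothetical stand-ins for the controller-runtime
// object and client interfaces; they are not the real Cluster API types.
type Object interface {
	GetName() string
}

type Client interface {
	Create(ctx context.Context, obj Object) error
	Delete(ctx context.Context, obj Object) error
}

// createReplica sketches the three sequential creates described above.
// If the Machine create fails, the bootstrap and infrastructure configs
// created earlier are deleted on a best-effort basis so they are not left
// behind without any ownerRef pointing at a Machine.
func createReplica(ctx context.Context, c Client, bootstrapConfig, infraConfig, machine Object) error {
	if err := c.Create(ctx, bootstrapConfig); err != nil {
		return fmt.Errorf("creating bootstrap config %q: %w", bootstrapConfig.GetName(), err)
	}
	if err := c.Create(ctx, infraConfig); err != nil {
		// Best-effort cleanup of the bootstrap config created above.
		_ = c.Delete(ctx, bootstrapConfig)
		return fmt.Errorf("creating infra config %q: %w", infraConfig.GetName(), err)
	}
	if err := c.Create(ctx, machine); err != nil {
		// Without this cleanup both configs become orphans: the Machine was
		// never created, so the machine controller never sets an ownerRef on
		// them and garbage collection cannot remove them.
		_ = c.Delete(ctx, bootstrapConfig)
		_ = c.Delete(ctx, infraConfig)
		return fmt.Errorf("creating machine %q: %w", machine.GetName(), err)
	}
	return nil
}
```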
What did you expect to happen:
Scaling a MachineSet to 10 replicas should mean 10 infraConfigs, 10 bootstrapConfigs, and 10 machines; our logic only ensures the 10 machines.
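To illustrate the ownerRef mechanism the report relies on, here is a hedged sketch; the helper name and the apiVersion string are assumptions, not Cluster API code. As long as each config carries an ownerRef to its Machine, deleting the Machine lets the Kubernetes garbage collector delete the config too, preserving the 1:1:1 relationship; the orphans appear exactly when no such ownerRef was ever set.

```go
package machineset

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/types"
)

// setMachineOwnerRef is a hypothetical helper: it stamps an ownerRef that
// points at an existing Machine onto a bootstrap or infrastructure config.
// Once the ownerRef is in place, deleting the Machine lets the Kubernetes
// garbage collector remove the config as well.
func setMachineOwnerRef(cfg *unstructured.Unstructured, machineName string, machineUID types.UID) {
	controller := true
	cfg.SetOwnerReferences([]metav1.OwnerReference{{
		// The apiVersion below is an assumption; use whichever Cluster API
		// version the Machine object actually belongs to.
		APIVersion: "cluster.x-k8s.io/v1alpha2",
		Kind:       "Machine",
		Name:       machineName,
		UID:        machineUID,
		Controller: &controller,
	}})
}
```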
/kind bug