✨feat(awsmachinepool): custom lifecyclehooks for machinepools #4875
Conversation
Welcome @sebltm!
Hi @sebltm. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I have two requests before getting to the review:
/assign
@AndiDog sorry I hadn't cleaned up the PR, I didn't know if it would get some traction :)
@AndiDog let me know if this looks good or if there's anything else I should take a look at :)
The PR is definitely reviewable now. I don't have much experience with lifecycle hooks and aws-node-termination-handler (is that your actual use case?). Maybe MachinePool machines (#4527) give us a good way to detect node shutdown and have CAPI/CAPA take care of it? In other words: I'm not fully confident reviewing here with my knowledge, but maybe others have a better clue – please feel free to ping or discuss in Slack (
/ok-to-test |
Taken and rebased from unfinished PR kubernetes-sigs#4875 at commit 2421ec3 Co-authored-by: Andreas Sommer <[email protected]>
@fiunchinho I’ve had to rebase to fix some merge conflicts from main, could I get a review again?
/lgtm
As @AndiDog is on parental leave, could you take a look @nrb @richardcase @Ankitasw @dlipovetsky ?
/test pull-cluster-api-provider-aws-e2e |
Taken and rebased from unfinished PR kubernetes-sigs#4875 at commit 2421ec3 Co-authored-by: Andreas Sommer <[email protected]>
/test pull-cluster-api-provider-aws-e2e-eks |
LGTM label has been added. Git tree hash: da010d0e67c08c65fe2575731e6271adca0fc381
/assign @nrb @richardcase @Ankitasw @dlipovetsky |
/test pull-cluster-api-provider-aws-e2e |
@sebltm: The following tests failed, say `/retest` to rerun all failed tests or run `/retest-required` to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/retest |
/approve
This will probably introduce merge conflicts on #5447 due to the validation webhook changes.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: nrb
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
What type of PR is this?
/kind feature
What this PR does / why we need it:
This PR extends the v1beta2 definitions of `AWSMachinePool` and `AWSManagedMachinePool` with a new field, `lifecycleHooks`, which holds a list of lifecycle hook definitions. The matching webhooks are updated to validate the lifecycle hooks as they are added to the Custom Resource.
The matching reconcilers are updated to reconcile those lifecycle hooks: if a lifecycle hook is present in the Custom Resource but not in the cloud, it is created; if a lifecycle hook exists in the cloud but is not declared in the Custom Resource, it is removed.
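The create/remove decision can be sketched as a pure set diff between the hooks declared in the spec and the hooks reported by the cloud (the hook names and the standalone `diffHooks` helper are hypothetical simplifications; the real reconciler works against the AWS Auto Scaling API):

```go
package main

import "fmt"

// diffHooks computes which lifecycle hooks must be created or deleted so
// that the ASG matches the hooks declared on the machine pool spec.
// Hook names are assumed unique within a pool (a simplification).
func diffHooks(desired, actual []string) (toCreate, toDelete []string) {
	want := map[string]bool{}
	for _, d := range desired {
		want[d] = true
	}
	have := map[string]bool{}
	for _, a := range actual {
		have[a] = true
		if !want[a] {
			toDelete = append(toDelete, a) // in the cloud, not in the spec
		}
	}
	for _, d := range desired {
		if !have[d] {
			toCreate = append(toCreate, d) // in the spec, not in the cloud
		}
	}
	return toCreate, toDelete
}

func main() {
	c, d := diffHooks(
		[]string{"pre-drain", "pre-terminate"}, // declared in the CR
		[]string{"pre-terminate", "legacy"},    // found on the ASG
	)
	fmt.Println(c, d) // → [pre-drain] [legacy]
}
```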
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):
Fixes #4020
AWS supports lifecycle hooks before/after certain actions on an Auto Scaling group (ASG). For example, before scaling in (removing) a node, the ASG can publish an event to an SQS queue, which can then be consumed by the node-termination-handler to ensure the node's proper removal from Kubernetes (it will cordon and drain the node, then wait for a period of time for applications to be removed before allowing the Auto Scaling group to terminate the instance).
This allows Kubernetes or other components to be aware of the node's lifecycle and take appropriate actions.
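For that scale-in scenario, a pool manifest might declare a hook roughly like the following. This is a sketch only: apart from `lifecycleHooks` itself, the field names are assumptions mirroring the AWS `PutLifecycleHook` parameters, and the ARNs and names are placeholders, not values from the PR.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachinePool
metadata:
  name: my-pool            # hypothetical pool name
spec:
  # ...other AWSMachinePool fields...
  lifecycleHooks:
    - name: drain-on-scale-in                                # hypothetical hook name
      lifecycleTransition: autoscaling:EC2_INSTANCE_TERMINATING
      heartbeatTimeout: 300s
      defaultResult: CONTINUE
      notificationTargetARN: arn:aws:sqs:us-east-1:111122223333:nth-queue  # placeholder ARN
```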
Special notes for your reviewer:
Checklist:
Release note: