graceful node shutdown restarts pods during shutdown #100184
Comments
@yvespp: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig node
What version of k8s did you use here for the test? Did the version include #98005? That PR ensured that the node is marked not ready as soon as the shutdown signal is sent. The PR was only applied on the master branch (i.e. 1.21). It was backported to 1.20 in #99254 (but that PR was only merged yesterday and isn't in any released minor version yet). Edit: I see you mentioned you were on 1.20.4.
/cc @wzshiming
It's with 1.20.4 |
The original discussion was in kubernetes/website#26963, but I created this new issue because it has nothing to do with the website/docs.
@bobbypage here is the new issue, thanks for your help!
What happened:
With GracefulNodeShutdown enabled, when I stop a node, the pods on it get deleted but then get started on the same node again.
Maybe it's because the node is not marked as `NotReady` before the pods are deleted.

What you expected to happen:
When the node is shut down, pods should be deleted and then scheduled to another node or remain in a Pending state.
When the node is started again, pods should be scheduled to that node again.
How to reproduce it (as minimally and precisely as possible):
I tested this on a cluster with 2 worker and 3 control-plane nodes.
Enable GracefulNodeShutdown in the kubelet config (`/var/lib/kubelet/config.yaml`):
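The config snippet from the original report was not captured here; below is a minimal sketch of what enabling the feature on a 1.20 kubelet could look like, using the `GracefulNodeShutdown` feature gate and the `shutdownGracePeriod` / `shutdownGracePeriodCriticalPods` fields (the concrete durations are illustrative assumptions, not the reporter's values):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# GracefulNodeShutdown is alpha in 1.20, so the feature gate has to be enabled explicitly.
featureGates:
  GracefulNodeShutdown: true
# Total time the kubelet delays the node shutdown (illustrative value).
shutdownGracePeriod: 30s
# Portion of shutdownGracePeriod reserved for critical pods (illustrative value).
shutdownGracePeriodCriticalPods: 10s
```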
Create a deployment and/or daemonset like this:
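The original manifest was not included in this extraction; a simple stand-in along these lines (name, labels, and image are hypothetical) should be enough to reproduce the behaviour:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shutdown-test            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: shutdown-test
  template:
    metadata:
      labels:
        app: shutdown-test
    spec:
      containers:
      - name: web
        image: nginx:1.19        # any long-running image works
```

A daemonset with the same pod template can be used to check the behaviour for daemonset-managed pods as well.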
Shut down a node via `systemctl poweroff` or another way that triggers the systemd inhibitor locks, and observe the pods on that node with kubectl.

Before the shutdown of node yp-test2-worker-0dd0512820d9 starts:
Just after the shutdown starts, pods get deleted but immediately get started again. The node is still ready:
Now the node is not ready:
Containers are running again:
Node fully powered off; it stays like this:
Node started again:
Environment:
- Kubernetes version (`kubectl version`): 1.20.4
- OS (e.g. `cat /etc/os-release`): Ubuntu 20.04.2 LTS
- Kernel (e.g. `uname -a`): 5.4.0-66-generic