
How to reboot a Kubernetes node



Commands

Open a shell on the node you want to reboot and run the following commands:

# Get node name
NODE_NAME="$(hostname)"
# Unschedule all pods from this node (this also cordons it)
kubectl drain "$NODE_NAME" --ignore-daemonsets
# Reboot
sudo reboot
# Open the node for scheduling again
kubectl uncordon "$NODE_NAME"

Explanation

First, you need to determine the name of the node. It is usually the same as the hostname of the server. You can find it by running the hostname command on the node and comparing the result against the list of nodes in your cluster from kubectl get nodes.
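This check can be sketched as a small shell helper (hypothetical; it assumes kubectl on the node is configured to talk to the API server, and the function name is mine):

```shell
# Hypothetical helper: confirm this host's name matches a node in the cluster.
# Assumes kubectl is installed and configured on the node.
verify_node_name() {
  local node="${1:-$(hostname)}"
  if kubectl get node "$node" >/dev/null 2>&1; then
    echo "Found node: $node"
  else
    echo "No node named $node; compare hostname against kubectl get nodes" >&2
    return 1
  fi
}
```

If the names differ (some clusters register nodes under a fully qualified domain name), use the name from kubectl get nodes in all of the commands below.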

Once you have determined the node name, you want to drain the node - this means unscheduling all pods running on it (they may be re-scheduled onto other nodes instead). You can do so by running kubectl drain <node-name> --ignore-daemonsets. This also cordons the node, preventing any new pods from being scheduled onto it for the time being.
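In practice, drain sometimes needs a couple of extra flags beyond --ignore-daemonsets. This is a hedged sketch (the function name is mine, and whether you need these flags depends on your workloads):

```shell
# Sketch of a drain step with flags commonly needed in practice:
# --ignore-daemonsets:      DaemonSet pods cannot be evicted; skip them
# --delete-emptydir-data:   allow evicting pods that use emptyDir volumes
#                           (their emptyDir data is lost)
# --timeout=120s:           fail instead of hanging indefinitely
drain_node() {
  kubectl drain "$1" --ignore-daemonsets --delete-emptydir-data --timeout=120s
}
```

If drain refuses to evict a pod, it prints the reason; read it before reaching for --force, which can delete pods that are not managed by a controller.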

At this point, the node is ready to be rebooted or shut down. Rebooting requires administrator access, so run sudo reboot. Your shell session will close and it's time to wait. After a few minutes, you should be able to connect again.
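Rather than reconnecting repeatedly, you can poll from another machine until the node reports Ready. A minimal sketch, assuming kubectl access to the cluster (the helper name and timeout are mine):

```shell
# Hypothetical helper: poll until the given node reports the Ready condition.
# Assumes kubectl is configured to reach the cluster from this machine.
wait_for_node_ready() {
  local node="$1" timeout="${2:-300}" elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    status="$(kubectl get node "$node" \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')"
    if [ "$status" = "True" ]; then
      echo "Node $node is Ready"
      return 0
    fi
    sleep 5
    elapsed=$((elapsed + 5))
  done
  echo "Timed out waiting for $node" >&2
  return 1
}
```

kubectl also has a built-in form of this: kubectl wait --for=condition=Ready node/<node-name> --timeout=300s.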

After the reboot the node is still cordoned, meaning no pods are scheduled to run on it. To uncordon it, run kubectl uncordon <node-name>.

At this point, you may see pods being scheduled onto the node again. You can see which pods run on it by checking the NODE column in the output of this command: kubectl get pods -A -o wide --sort-by='{.spec.nodeName}'
