In a real-world scenario, upgrading a K8S cluster to the latest version is a very common requirement and helps keep the cluster up to date. So I thought of writing down the procedure and the steps involved in upgrading a K8S cluster that was created with kubeadm.
Below is a simple architecture diagram of our K8S Cluster for this example:
Note — Skipping MINOR versions when upgrading is NOT supported. In this example, we are upgrading from v1.20.1 to v1.20.2, a patch release within the same minor version.
As shown in Fig-1, the K8S Cluster consists of one Master Node and two Worker Nodes, referred to as [K8S Server] and [K8S Worker1 and K8S Worker2] in the picture.
The upgrade workflow at a high level looks like this:
- Upgrade the K8S Server — the Master Node and the additional control plane components inside it
- Upgrade the K8S Worker1
- Upgrade the K8S Worker2
Note — You don't have to follow this order for the Worker Nodes; in this example you could just as well upgrade Worker2 first and then Worker1. But be sure to upgrade the worker nodes one after the other, as you don't want to disrupt your service by bringing all the worker nodes down at the same time. In a real-world scenario, make sure you have enough nodes available at any given time to provide uninterrupted service, so always balance the upgrades against the availability of the service.
1. Upgrade the Master Node
SSH in to the Master Node.
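For example (the user and host names here are placeholders; use the credentials for your own environment):

```shell
# SSH into the master node (replace user and host with your own)
ssh ubuntu@k8s-server
```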
Once you have logged in to the Master Node, upgrade kubeadm using the command below:
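A sketch, assuming a Debian/Ubuntu install from the Kubernetes apt repository (the exact package version string may differ in your setup):

```shell
# unhold, upgrade, and re-hold kubeadm at the target version
sudo apt-mark unhold kubeadm
sudo apt-get update
sudo apt-get install -y kubeadm=1.20.2-00
sudo apt-mark hold kubeadm
```

Holding the package again prevents an unintended version bump during routine `apt-get upgrade` runs.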
Check if it was upgraded correctly with:
If the upgrade succeeded, you'll see a message similar to the one below.
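Assuming the apt-based install above, the check and a typical response look like this (the build metadata in your output will vary):

```shell
kubeadm version
# kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", ...}
```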
Drain the Master Node to safely evict all of the pods from it before we perform the maintenance on it. Use the command below to do so.
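A sketch of the drain step, assuming the master node is registered as `k8s-server` (substitute the node name shown by `kubectl get nodes`):

```shell
# evict workloads from the master node; DaemonSet pods are skipped
kubectl drain k8s-server --ignore-daemonsets
```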
Plan the upgrade now
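The plan step checks the cluster and prints the versions you can upgrade to:

```shell
sudo kubeadm upgrade plan
```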
Now, upgrade the control plane node [master node]
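For the target version in this example:

```shell
# pulls the v1.20.2 control plane images and upgrades the components
sudo kubeadm upgrade apply v1.20.2
```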
Next, upgrade the kubelet and kubectl on the control plane node.
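Using the same apt-based approach as for kubeadm (package version strings may differ in your setup):

```shell
sudo apt-mark unhold kubelet kubectl
sudo apt-get update
sudo apt-get install -y kubelet=1.20.2-00 kubectl=1.20.2-00
sudo apt-mark hold kubelet kubectl
```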
Restart the kubelet now
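Reload the systemd units and restart the kubelet service:

```shell
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```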
Uncordon the control plane node, i.e., bring it back online by marking it schedulable again. Before the upgrade we drained the node, which made it non-schedulable; after the upgrade we have to make it schedulable again so that it can resume its responsibilities as part of the K8S Cluster.
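Again assuming the master node is registered as `k8s-server`:

```shell
kubectl uncordon k8s-server
```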
Finally, verify if the control plane node is working
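Check the node status and the reported version:

```shell
kubectl get nodes
```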
Note — If the master node shows a 'NotReady' status, wait a couple more seconds and re-run the same command. Repeat until you see the status 'Ready' before proceeding further.
2. Upgrade the Worker Node 1
Run the command below on the Master Node first to drain Worker Node1.
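A sketch, assuming the worker node is registered as `k8s-worker1` (substitute your own node name):

```shell
# --force also evicts pods that are not managed by a controller
kubectl drain k8s-worker1 --ignore-daemonsets --force
```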
Ignore any error messages, as you may not be able to delete certain pods; move ahead.
On the Worker Node1 shell
Upgrade kubeadm
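As on the master node, assuming an apt-based install (version strings may differ in your setup):

```shell
sudo apt-mark unhold kubeadm
sudo apt-get update
sudo apt-get install -y kubeadm=1.20.2-00
sudo apt-mark hold kubeadm
```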
Check the kubeadm version
If the upgrade succeeded, you'll see a message similar to the one below.
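As before, the check and a typical response (build metadata in your output will vary):

```shell
kubeadm version
# kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", ...}
```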
Upgrade the kubelet configuration on the same node
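Unlike the master node, workers run `kubeadm upgrade node`, which only refreshes the local kubelet configuration:

```shell
sudo kubeadm upgrade node
```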
Upgrade the kubelet and kubectl on the same node
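The same apt-based approach as on the control plane node (version strings may differ):

```shell
sudo apt-mark unhold kubelet kubectl
sudo apt-get update
sudo apt-get install -y kubelet=1.20.2-00 kubectl=1.20.2-00
sudo apt-mark hold kubelet kubectl
```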
Restart the kubelet
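```shell
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```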
From the Control Plane Node, uncordon Worker Node1 to bring it back online.
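A sketch, assuming the worker is registered as `k8s-worker1`:

```shell
kubectl uncordon k8s-worker1
```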
3. Upgrade the Worker Node 2
Follow the same steps as for Worker Node1 to upgrade this node to v1.20.2: drain it from the Control Plane Node, upgrade kubeadm, kubelet, and kubectl, restart the kubelet, and uncordon it.
After all the upgrades complete successfully, run the command below on the Control Plane Node.
The final output on the Control Plane Node looks like the example below. If any node shows a 'NotReady' status, wait some more time and re-run the same command until every node reports 'Ready'.
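A sketch of a healthy result (node names, roles, and ages are placeholders for your cluster):

```shell
kubectl get nodes
# NAME          STATUS   ROLES                  AGE   VERSION
# k8s-server    Ready    control-plane,master   30d   v1.20.2
# k8s-worker1   Ready    <none>                 30d   v1.20.2
# k8s-worker2   Ready    <none>                 30d   v1.20.2
```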
Congratulations, you have successfully upgraded the K8S Cluster using kubeadm!