
Kubernetes v1.13 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version.


Upgrading kubeadm HA clusters from v1.11 to v1.12

This page explains how to upgrade a highly available (HA) Kubernetes cluster created with kubeadm from version 1.11.x to version 1.12.x. In addition to upgrading, you must also follow the instructions in Creating HA clusters with kubeadm.

Before you begin

Note: All commands on any control plane or etcd node must be run as root.

Prepare for both methods

Upgrade kubeadm to the version that matches the version of Kubernetes that you are upgrading to:

apt-mark unhold kubeadm && \
apt-get update && apt-get upgrade -y kubeadm && \
apt-mark hold kubeadm
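After the package is upgraded and held, you can confirm that kubeadm now reports the target version:

```shell
# Print the installed kubeadm version in short form
kubeadm version -o short
```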

Check prerequisites and determine the upgrade versions:

kubeadm upgrade plan

You should see something like the following:

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.11.3   v1.12.0
Controller Manager   v1.11.3   v1.12.0
Scheduler            v1.11.3   v1.12.0
Kube Proxy           v1.11.3   v1.12.0
CoreDNS              1.1.3     1.2.2
Etcd                 3.2.18    3.2.24

Stacked control plane nodes

Upgrade the first control plane node

Modify configmap/kubeadm-config for this control plane node:

kubectl get configmap -n kube-system kubeadm-config -o yaml > kubeadm-config-cm.yaml

Open the file in an editor and replace the relevant values for this control plane node. You must also pass an additional argument (initial-cluster-state: existing) to etcd.local.extraArgs.
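As an illustrative sketch, the edited etcd section of the ConfigMap might look like the following; only the initial-cluster-state argument is taken from this page, and the surrounding keys are shown for context:

```yaml
# Hypothetical excerpt of kubeadm-config-cm.yaml
etcd:
  local:
    extraArgs:
      initial-cluster-state: existing
```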

Apply the modified kubeadm-config on the node:

kubectl apply -f kubeadm-config-cm.yaml --force

Start the upgrade:

kubeadm upgrade apply v<YOUR-CHOSEN-VERSION-HERE>

You should see something like the following:

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.0". Enjoy!

The kubeadm-config ConfigMap is now updated from the v1alpha2 API version to v1alpha3.
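To confirm that the restarted control plane components are running the new version, you can list their images. This sketch assumes kubeadm's default tier=control-plane label on the static pods; verify the label on your cluster:

```shell
# List control plane pod names and images
kubectl get pods -n kube-system -l tier=control-plane \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
```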

Upgrading additional control plane nodes

Each additional control plane node requires modifications that are different from the first control plane node. Run:

kubectl get configmap -n kube-system kubeadm-config -o yaml > kubeadm-config-cm.yaml

Open the file in an editor and replace the relevant ClusterConfiguration values for this node.

You must also modify the ClusterStatus to add a mapping for the current host under apiEndpoints.
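The apiEndpoints mapping in ClusterStatus is keyed by node name. A hypothetical entry for the current host might look like the following; the hostname and IP address are placeholders:

```yaml
# Hypothetical ClusterStatus excerpt; cp-node-2 and 10.0.0.12 are placeholders
apiEndpoints:
  cp-node-2:
    advertiseAddress: 10.0.0.12
    bindPort: 6443
```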

Add an annotation for the CRI socket to the current node. For example, to use Docker:

kubectl annotate node <nodename> kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock

Apply the modified kubeadm-config on the node:

kubectl apply -f kubeadm-config-cm.yaml --force

Start the upgrade:

kubeadm upgrade apply v<YOUR-CHOSEN-VERSION-HERE>

You should see something like the following:

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.0". Enjoy!

External etcd

Upgrade each control plane

Get a copy of the kubeadm config used to create this cluster. The config is the same for every node, and it must be present on every control plane node before the upgrade begins.

# on each control plane node
kubectl get configmap -n kube-system kubeadm-config -o jsonpath={.data.MasterConfiguration} > kubeadm-config.yaml

Open the file in an editor and set api.advertiseAddress to the local node’s IP address.
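For example, the edited section of kubeadm-config.yaml might look like this; the IP address is a placeholder for the local node's address:

```yaml
# Placeholder IP; use the address of the node you are upgrading
api:
  advertiseAddress: 10.0.0.11
```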

Now run the upgrade on each control plane node one at a time.

kubeadm upgrade apply v1.12.0 --config kubeadm-config.yaml

Upgrade etcd

Between Kubernetes v1.11 and v1.12, only the patch version of etcd changed, from v3.2.18 to v3.2.24. Because both versions can run in the same cluster, this can be done as a rolling upgrade with no downtime.

On the first host, modify the etcd manifest:

sed -i 's/3.2.18/3.2.24/' /etc/kubernetes/manifests/etcd.yaml

Wait for the etcd process to reconnect. You will see warnings in the logs of the other etcd members while this happens; this is expected.

Repeat this step on the other etcd hosts.
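One way to confirm that each member is running the new version is to inspect the image of its etcd static pod. The pod name suffix is the host's name and is a placeholder here:

```shell
# Show the image of the etcd static pod on a given host; <hostname> is a placeholder
kubectl get pod -n kube-system etcd-<hostname> \
  -o jsonpath='{.spec.containers[0].image}'
```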

Next steps

Manually upgrade your CNI provider

Your Container Network Interface (CNI) provider might have its own upgrade instructions to follow. Check the addons page to find your CNI provider and see whether you need to take additional upgrade steps.

Update kubelet and kubectl packages

Upgrade the kubelet and kubectl by running the following on each node:

# use your distro's package manager, e.g. 'apt-get' on Debian-based systems
# for the versions stick to kubeadm's output (see above)
apt-mark unhold kubelet kubectl && \
apt-get update && \
apt-get install -y kubelet=<NEW-K8S-VERSION> kubectl=<NEW-K8S-VERSION> && \
apt-mark hold kubelet kubectl && \
systemctl restart kubelet

In this example a deb-based system is assumed and apt-get is used for installing the upgraded software. On rpm-based systems the command is yum install <PACKAGE>-<NEW-K8S-VERSION> for all packages (note the hyphen between package name and version).
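On an rpm-based system the same steps might look like the following sketch; the version strings are placeholders, as above:

```shell
# Equivalent upgrade on an rpm-based system; versions are placeholders
yum install -y kubelet-<NEW-K8S-VERSION> kubectl-<NEW-K8S-VERSION>
systemctl restart kubelet
```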

Verify that the new version of the kubelet is running:

systemctl status kubelet

Verify that the upgraded node is available again by running the following command from wherever you run kubectl:

kubectl get nodes

If the STATUS column shows Ready for the upgraded host, you can continue. You might need to repeat the command until the node shows Ready.
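Instead of repeating kubectl get nodes manually, kubectl wait (if your version of kubectl supports it) can block until the node reports Ready; the node name is a placeholder:

```shell
# Block until the node reports Ready, or time out after five minutes
kubectl wait --for=condition=Ready node/<nodename> --timeout=5m
```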

If something goes wrong

If the upgrade fails, you can run kubeadm upgrade apply again: the command is idempotent and eventually ensures that the actual state matches the desired state you declare. To recover from a bad state, you can also run kubeadm upgrade apply with --force without changing the version your cluster is running (x.x.x --> x.x.x).
