
Upgrading kubeadm clusters from v1.11 to v1.12

This page explains how to upgrade a Kubernetes cluster created with kubeadm from version 1.11.x to version 1.12.x, and from version 1.12.x to 1.12.y, where y > x.

Before you begin

Additional information

Upgrade the control plane

  1. On the master node, upgrade kubeadm:

    # On Debian/Ubuntu-based systems:
    apt-get update
    apt-get upgrade -y kubeadm

    # On RHEL/CentOS/Fedora-based systems:
    yum upgrade -y kubeadm --disableexcludes=kubernetes

  2. Verify that the download works and has the expected version:

    kubeadm version
  3. On the master node, run:

    kubeadm upgrade plan

    You should see output similar to this:

    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade] Fetching available versions to upgrade to
    [upgrade/versions] Cluster version: v1.11.3
    [upgrade/versions] kubeadm version: v1.12.0
    [upgrade/versions] Latest stable version: v1.11.3
    [upgrade/versions] Latest version in the v1.11 series: v1.11.3
    [upgrade/versions] Latest experimental version: v1.13.0-alpha.0
    
    Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
    COMPONENT   CURRENT       AVAILABLE
    Kubelet     2 x v1.11.1   v1.12.0
                1 x v1.11.3   v1.12.0
    
    Upgrade to the latest experimental version:
    
    COMPONENT            CURRENT   AVAILABLE
    API Server           v1.11.3   v1.12.0
    Controller Manager   v1.11.3   v1.12.0
    Scheduler            v1.11.3   v1.12.0
    Kube Proxy           v1.11.3   v1.12.0
    CoreDNS              1.1.3     1.2.2
    Etcd                 3.2.18    3.2.24
    
    You can now apply the upgrade by executing the following command:
    
        kubeadm upgrade apply v1.12.0 
    
    _____________________________________________________________________

    This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.

  4. Choose a version to upgrade to, and run the appropriate command. For example:

    kubeadm upgrade apply v1.12.0

    You should see output similar to this:

    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
    [upgrade/version] You have chosen to change the cluster version to "v1.12.0"
    [upgrade/versions] Cluster version: v1.11.3
    [upgrade/versions] kubeadm version: v1.12.0
    [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
    [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
    [upgrade/prepull] Prepulling image for component etcd.
    [upgrade/prepull] Prepulling image for component kube-apiserver.
    [upgrade/prepull] Prepulling image for component kube-controller-manager.
    [upgrade/prepull] Prepulling image for component kube-scheduler.
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
    [upgrade/prepull] Prepulled image for component kube-apiserver.
    [upgrade/prepull] Prepulled image for component kube-controller-manager.
    [upgrade/prepull] Prepulled image for component kube-scheduler.
    [upgrade/prepull] Prepulled image for component etcd.
    [upgrade/prepull] Successfully prepulled the images for all the control plane components
    [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.0"...
    Static pod: kube-apiserver-ip-172-31-80-76 hash: d9b7af93990d702b3ee9a2beca93384b
    Static pod: kube-controller-manager-ip-172-31-80-76 hash: 44a081fb5d26e90773ceb98b4e16fe10
    Static pod: kube-scheduler-ip-172-31-80-76 hash: 009228e74aef4d7babd7968782118d5e
    Static pod: etcd-ip-172-31-80-76 hash: 997fcf3d8d974c98abc14556cc02617e
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests661777755/etcd.yaml"
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-09-19-18-58-14/etcd.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
    Static pod: etcd-ip-172-31-80-76 hash: 997fcf3d8d974c98abc14556cc02617e
    <snip>
    [apiclient] Found 1 Pods for label selector component=etcd
    [upgrade/staticpods] Component "etcd" upgraded successfully!
    [upgrade/etcd] Waiting for etcd to become available
    [util/etcd] Waiting 0s for initial delay
    [util/etcd] Attempting to see if all cluster endpoints are available 1/10
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests661777755"
    [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests661777755/kube-apiserver.yaml"
    [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests661777755/kube-controller-manager.yaml"
    [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests661777755/kube-scheduler.yaml"
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-09-19-18-58-14/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
    <snip>
    Static pod: kube-apiserver-ip-172-31-80-76 hash: 854a5a8468f899093c6a967bb81dcfbc
    [apiclient] Found 1 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-09-19-18-58-14/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
    Static pod: kube-controller-manager-ip-172-31-80-76 hash: 44a081fb5d26e90773ceb98b4e16fe10
    Static pod: kube-controller-manager-ip-172-31-80-76 hash: b651f83474ae70031d5fb2cab73bd366
    [apiclient] Found 1 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-09-19-18-58-14/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
    Static pod: kube-scheduler-ip-172-31-80-76 hash: 009228e74aef4d7babd7968782118d5e
    Static pod: kube-scheduler-ip-172-31-80-76 hash: da406e5a49adfbbeb90fe2a0cf8fd8d1
    [apiclient] Found 1 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
    [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ip-172-31-80-76" as an annotation
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.0". Enjoy!
    
    [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
  5. Manually upgrade your Software Defined Network (SDN).

    Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. Check the addons page to find your CNI provider and see whether additional upgrade steps are required. Once the control plane and CNI are upgraded, you can run the quick sanity check sketched below.
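
    As an optional check that is not part of the official procedure, you can confirm the API server is now serving the new version by comparing client and server versions with kubectl. The output shown is only illustrative; your patch versions may differ.

    kubectl version --short
    # Illustrative output:
    # Client Version: v1.12.0
    # Server Version: v1.12.0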

Upgrade the packages on the master and worker nodes

  1. Prepare each node for maintenance by marking it unschedulable and evicting its workloads:

    kubectl drain $NODE --ignore-daemonsets

    On the master node, you must add --ignore-daemonsets:

    kubectl drain ip-172-31-85-18
    node "ip-172-31-85-18" cordoned
    error: unable to drain node "ip-172-31-85-18", aborting command...
    
    There are pending nodes to be drained:
    ip-172-31-85-18
    error: DaemonSet-managed pods (use --ignore-daemonsets to ignore): calico-node-5798d, kube-proxy-thjp9
    kubectl drain ip-172-31-85-18 --ignore-daemonsets
    node "ip-172-31-85-18" already cordoned
    WARNING: Ignoring DaemonSet-managed pods: calico-node-5798d, kube-proxy-thjp9
    node "ip-172-31-85-18" drained
    
  2. Upgrade the Kubernetes package versions on each $NODE node by using the package manager for your Linux distribution:

    # On Debian/Ubuntu-based systems:
    apt-get update
    apt-get upgrade -y kubelet kubeadm

    # On RHEL/CentOS/Fedora-based systems:
    yum upgrade -y kubelet kubeadm --disableexcludes=kubernetes
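
    On Debian/Ubuntu-based systems the kubelet and kubeadm packages are sometimes pinned with apt-mark hold so that routine upgrades do not touch them; held packages are skipped by apt-get upgrade. A minimal sketch for that case, assuming the packages were held at install time:

    apt-mark showhold                        # check whether kubelet/kubeadm are held
    apt-mark unhold kubelet kubeadm
    apt-get update && apt-get upgrade -y kubelet kubeadm
    apt-mark hold kubelet kubeadm            # re-pin after the upgrade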

Upgrade kubelet on each node

  1. On each node except the master node, upgrade the kubelet config:

    sudo kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)
  2. Restart the kubelet process:

    sudo systemctl restart kubelet
  3. Verify that the new version of the kubelet is running on each node:

    systemctl status kubelet
  4. Bring the node back online by marking it schedulable:

    kubectl uncordon $NODE
  5. After the kubelet has been upgraded on all nodes, verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster:

    kubectl get nodes

    The STATUS column should show Ready for all your nodes, and the version number should be updated, as in the illustrative output below.
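
    For reference, a healthy two-node cluster looks roughly like this after the upgrade. The node names are reused from the earlier examples; ages and exact patch versions will differ in your cluster:

    kubectl get nodes
    NAME              STATUS   ROLES    AGE   VERSION
    ip-172-31-80-76   Ready    master   1d    v1.12.0
    ip-172-31-85-18   Ready    <none>   1d    v1.12.0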

Recovering from a failure state

If kubeadm upgrade fails and does not roll back, for example because of an unexpected shutdown during execution, you can run kubeadm upgrade again. This command is idempotent and eventually makes sure that the actual state matches the desired state you declare. To recover from a bad state, you can also run kubeadm upgrade --force without changing the version that your cluster is running.
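
The two recovery paths described above map roughly to the following commands. This is only a sketch; the v1.12.0 value is an example, so substitute the version that matches your situation:

    # Re-run the interrupted upgrade; kubeadm upgrade apply is idempotent
    kubeadm upgrade apply v1.12.0

    # Force a re-apply without changing the version the cluster is running
    kubeadm upgrade apply v1.12.0 --force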

How it works

kubeadm upgrade apply does the following:
