Renew Kubernetes PKI after it has expired

My Kubernetes PKI certificates expired (the API server certificate, to be exact) and I can't find a way to renew them. The error I get is:

May 27 08:43:51 node1 kubelet[8751]: I0527 08:43:51.922595    8751 server.go:417] Version: v1.14.2
May 27 08:43:51 node1 kubelet[8751]: I0527 08:43:51.922784    8751 plugins.go:103] No cloud provider specified.
May 27 08:43:51 node1 kubelet[8751]: I0527 08:43:51.922800    8751 server.go:754] Client rotation is on, will bootstrap in background
May 27 08:43:51 node1 kubelet[8751]: E0527 08:43:51.925859    8751 bootstrap.go:264] Part of the existing bootstrap client certificate is expired: 2019-05-24 13:24:42 +0000 UTC
May 27 08:43:51 node1 kubelet[8751]: F0527 08:43:51.925894    8751 server.go:265] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory

The documentation at https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/ describes how to renew certificates, but it only works if the API server certificate has not yet expired. I have tried running

kubeadm alpha certs renew all

followed by a reboot, but that just made the entire cluster fail, so I rolled back to a snapshot (my cluster is running on VMware).

The cluster is running and all containers seem to work, but I can't access it via kubectl, so I can't really deploy or query anything.

My Kubernetes version is 1.14.2.



Solution 1:[1]

So the solution was the following (the mv commands double as the backup):

$ cd /etc/kubernetes/pki/
$ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/
$ kubeadm init phase certs all --apiserver-advertise-address <IP>
$ cd /etc/kubernetes/
$ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/
$ kubeadm init phase kubeconfig all
$ reboot

Then, to refresh kubectl's credentials:

$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

That did the job for me. Thanks for the hints :)
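As a quick sanity check (my addition, not part of the original answer), confirm that kubectl can reach the API server again and that the regenerated certificate carries a fresh expiry date:

$ kubectl get nodes
$ openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate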

Solution 2:[2]

This topic is also discussed in the Kubernetes v1.15 docs for "Certificate Management with kubeadm":

  • Checking certificate expiration:
    • kubeadm alpha certs check-expiration shows when each certificate expires.
  • Automatic certificate renewal:
    • kubeadm renews all the certificates during a control plane upgrade.
  • Manual certificate renewal:
    • You can renew your certificates manually at any time with the kubeadm alpha certs renew command.
    • This command performs the renewal using the CA (or front-proxy CA) certificate and key stored in /etc/kubernetes/pki.
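For example, to renew everything at once or a single certificate by name (v1.15 syntax; the same renew subcommands appear in Solutions 5 and 12 below):

kubeadm alpha certs renew all
kubeadm alpha certs renew apiserver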


Solution 3:[3]

Try renewing the certificates via the kubeadm init phase certs command.

You can check certificate expiration via the following commands:

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text

openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt -noout -text
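If you only want the validity window rather than the full text dump, the standard -dates option prints just the notBefore/notAfter lines:

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -dates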

First, ensure that you have a recent backup of the certificate inventory under /etc/kubernetes/pki/.

Then delete the apiserver.* and apiserver-kubelet-client.* certificate files in the /etc/kubernetes/pki/ directory.

Generate new certificates via the kubeadm init phase certs command:

sudo kubeadm init phase certs apiserver

sudo kubeadm init phase certs apiserver-kubelet-client

Restart kubelet and docker daemons:

sudo systemctl restart docker; sudo systemctl restart kubelet
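To confirm the kubelet came back up healthy (my addition, not part of the original answer):

sudo systemctl --no-pager status kubelet
kubectl get nodes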

You can find more related information in the official K8s documentation.

Solution 4:[4]

I was using Kubernetes v1.15.1 and updated my certificates as explained above, but I still got the same error. /etc/kubernetes/kubelet.conf was still referring to the expired/old "client-certificate-data".

After some research I found out that kubeadm does not update the /etc/kubernetes/kubelet.conf file if certificate renewal was not set to true. So please be aware of this kubeadm bug in versions below 1.17 (https://github.com/kubernetes/kubeadm/issues/1753).

kubeadm only updates the file if the cluster upgrade was done with certificate-renewal=true. So I manually had to delete /etc/kubernetes/kubelet.conf and regenerate it with kubeadm init phase kubeconfig kubelet, which finally fixed my problem; a sketch follows below.
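A minimal sketch of that manual fix, assuming the default kubeadm paths (moving the file instead of deleting it, so you keep a backup):

sudo mv /etc/kubernetes/kubelet.conf ~/kubelet.conf.bak
sudo kubeadm init phase kubeconfig kubelet
sudo systemctl restart kubelet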

Solution 5:[5]

This will renew all the certs under /etc/kubernetes/ssl:

kubeadm alpha certs renew all --config=/etc/kubernetes/kubeadm-config.yaml

and do this to restart the server components:

kill -s SIGHUP $(pidof kube-apiserver)
kill -s SIGHUP $(pidof kube-controller-manager)
kill -s SIGHUP $(pidof kube-scheduler)
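If the control plane runs as static pods (the kubeadm default) and SIGHUP is not enough, a common alternative (my addition, not from the original answer) is to move the manifest out of /etc/kubernetes/manifests and back, which makes the kubelet stop and recreate the pod:

mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sleep 20   # give the kubelet time to stop the pod
mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/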

Solution 6:[6]

After the renewal, a few quick checks that the control plane is answering again (this answer shows the expected healthy output):

[root@nrchbs-slp4115 ~]# kubectl get apiservices |egrep metrics
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        125m


[root@nrchbs-slp4115 ~]# kubectl get svc -n kube-system
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns         ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   20d
metrics-server   ClusterIP   10.99.2.11   <none>        443/TCP                  125m


[root@nrchbs-slp4115 ~]# kubectl get ep -n kube-system
NAME                      ENDPOINTS                                               AGE
kube-controller-manager   <none>                                                  20d
kube-dns                  10.244.0.5:53,10.244.0.6:53,10.244.0.5:53 + 3 more...   20d
kube-scheduler            <none>                                                  20d
metrics-server            10.244.2.97:443                                         125m
[root@nrchbs-slp4115 ~]#

Solution 7:[7]

To help anyone else with a multi-master setup (I was searching for the answer after the first master had already been updated): on the second master I did the following, which I found in another question:

kubeadm only updates the file if the cluster upgrade was done with certificate-renewal=true. So I manually had to delete /etc/kubernetes/kubelet.conf and regenerate it with kubeadm init phase kubeconfig kubelet, which finally fixed my problem (see the sketch in Solution 4).

Solution 8:[8]

I use a config.yaml to configure the masters, so for me the answer was:

sudo -i
mkdir -p ~/k8s_backup/etcd
cd /etc/kubernetes/pki/
mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/k8s_backup
cd /etc/kubernetes/pki/etcd
mv {healthcheck-client.crt,healthcheck-client.key,peer.crt,peer.key,server.crt,server.key} ~/k8s_backup/etcd/
kubeadm init phase certs all --ignore-preflight-errors=all --config /etc/kubernetes/config.yaml

cd /etc/kubernetes
mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/k8s_backup
kubeadm init phase kubeconfig all --config /etc/kubernetes/config.yaml --ignore-preflight-errors=all

For good measure, I reboot:

shutdown -r now

Solution 9:[9]

I am able to solve this issue with below steps:

openssl req -new -key <existing.key> -subj "/CN=system:node:<HOST_NAME>/O=system:nodes" -out new.csr
  • <existing.key> - the kubelet's existing client key, taken from kubelet.conf
  • <HOST_NAME> - the hostname in lower case (if unsure, read it from the expired certificate, e.g. openssl x509 -in old.pem -text -noout)
openssl x509 -req -in new.csr -CA <ca.crt> -CAkey <ca.key> -CAcreateserial -out new.crt -days 365
  • <ca.crt> - from the master node
  • <ca.key> - also from the master node
  • new.crt - the renewed certificate; replace the expired certificate with it

Once replaced, restart the kubelet and docker (or whatever container service you use); a worked sketch follows below.
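A worked sketch of the steps above, with illustrative names only: it assumes the host is node1, the default kubeadm CA location, and a kubelet.conf that embeds the key inline as client-key-data (if it references a key file instead, use that file directly):

# decode the kubelet's existing client key out of kubelet.conf
grep client-key-data /etc/kubernetes/kubelet.conf | awk '{print $2}' | base64 -d > kubelet.key
# request a certificate with the node identity Kubernetes expects
openssl req -new -key kubelet.key -subj "/CN=system:node:node1/O=system:nodes" -out new.csr
# sign it with the cluster CA, valid for one year
openssl x509 -req -in new.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out new.crt -days 365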

Solution 10:[10]

kubeadm alpha kubeconfig user --org system:nodes --client-name system:node:$(hostname) >/etc/kubernetes/kubelet.conf

Renewing kubelet.conf this way solved the issue for me (the kubelet reads this file at startup, so restart it afterwards).

Solution 11:[11]

The best solution is the one Kim Nielsen wrote (the same procedure as in Solution 1):

$ cd /etc/kubernetes/pki/
$ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/
$ kubeadm init phase certs all --apiserver-advertise-address <IP>
$ cd /etc/kubernetes/
$ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/
$ kubeadm init phase kubeconfig all
$ reboot

With the following command, you can check when the new certificates will expire:

$ kubeadm alpha certs check-expiration --config=/etc/kubernetes/kubeadm-config.yaml 

or

$ kubeadm certs check-expiration --config=/etc/kubernetes/kubeadm-config.yaml

However, if you have more than one master, you need to copy the new files to the others.

Log in to the second master and move the old files aside (this is your backup):

$ cd /etc/kubernetes/pki/
$ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/
$ cd /etc/kubernetes/
$ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/

Then log in to the first master (where you created the new certificates) and run the following commands (replace node2 with the IP address of the second master machine):

$ rsync /etc/kubernetes/pki/*.crt -e ssh root@node2:/etc/kubernetes/pki/
$ rsync /etc/kubernetes/pki/*.key -e ssh root@node2:/etc/kubernetes/pki/
$ rsync /etc/kubernetes/*.conf -e ssh root@node2:/etc/kubernetes/
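Afterwards, restart the kubelet on the second master so the components there pick up the new certificates (my addition, not part of the original answer):

$ ssh root@node2 'systemctl restart kubelet'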

Solution 12:[12]

In my case, I got the same issue after certificate renewal. My cluster was built using Kubespray. My kubelet stopped working, saying that the /etc/kubernetes/bootstrap-kubelet.conf file did not exist, so I looked up what that config does:

--bootstrap-kubeconfig string
    Path to a kubeconfig file that will be used to get the client certificate for the kubelet. If the file specified by --kubeconfig does not exist, the bootstrap kubeconfig is used to request a client certificate from the API server. On success, a kubeconfig file referencing the generated client certificate and key is written to the path specified by --kubeconfig. The client certificate and key file will be stored in the directory pointed to by --cert-dir.

I understood that this file might not be needed.


Note that I renewed k8s 1.19 certificates with:

kubeadm alpha certs renew apiserver-kubelet-client
kubeadm alpha certs renew apiserver
kubeadm alpha certs renew front-proxy-client

... and that was not sufficient.


The solution was:

cp -r /etc/kubernetes /etc/kubernetes.backup

kubeadm alpha kubeconfig user --client-name system:kube-controller-manager > /etc/kubernetes/controller-manager.conf
kubeadm alpha kubeconfig user --client-name system:kube-scheduler > /etc/kubernetes/scheduler.conf
kubeadm alpha kubeconfig user --client-name system:node:YOUR_MASTER_HOSTNAME_IS_HERE --org system:nodes > /etc/kubernetes/kubelet.conf

kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 yee379
Solution 2
Solution 3 Nick_Kh
Solution 4 gho
Solution 5 NOZUONOHIGH
Solution 6 sanjay singh
Solution 7 cjm888
Solution 8 Max
Solution 9 Narann
Solution 10 KeithTt
Solution 11
Solution 12 laimison