Container runtime network not ready: cni config uninitialized

I'm installing Kubernetes (with kubeadm) on a CentOS VM running inside VirtualBox, so I installed kubeadm, kubelet, and docker with yum.

Now, while trying to set up the cluster with kubeadm init --pod-network-cidr=192.168.56.0/24 --apiserver-advertise-address=192.168.56.33/32, I run into the following errors:

Unable to update cni config: No networks found in /etc/cni/net.d

Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

So I checked: there is no cni folder in /etc, even though kubernetes-cni-0.6.0-0.x86_64 is installed. I tried commenting out KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, but it didn't work.

PS:

  • I'm installing behind a proxy.

  • I have multiple network adapters:

    • NAT: 10.0.2.15/24 (for Internet)

    • Host-only: 192.168.56.33/32

    • Docker interface: 172.17.0.1/16

Docker version: 17.12.1-ce
kubectl version: Major:"1", Minor:"9", GitVersion:"v1.9.3"
CentOS 7



Solution 1:[1]

Add a pod network add-on:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

If flannel doesn't work, then try Calico:

curl https://docs.projectcalico.org/manifests/calico-typha.yaml -o calico.yaml

kubectl apply -f calico.yaml
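
Either way, once the add-on's pods come up, the missing config should appear under /etc/cni/net.d and the node should flip to Ready. A quick check, assuming the labels from the manifests above (flannel pods carry app=flannel, Calico's carry k8s-app=calico-node):

kubectl get pods -n kube-system -l app=flannel
ls /etc/cni/net.d
kubectl get nodes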

Solution 2:[2]

There are several points to remember when setting up the cluster with "kubeadm init", and they are clearly documented on the Kubernetes site (kubeadm cluster create):

  • "kubeadm reset" if you have already created a previous cluster
  • Remove the ".kube" folder from the home or root directory
  • (Also stopping the kubelet with systemctl will allow for a smooth setup)
  • Disable swap permanently on the machine, especially if you are rebooting your linux system
  • And not to forget, install a pod network add-on according to the instructions provided on the add on site (not Kubernetes site)
  • Follow the post initialization steps given on the command window by kubeadm.

If all these steps are followed correctly, your cluster will run properly.
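
To make the swap change permanent, a minimal sketch (assuming swap is declared in /etc/fstab, as is typical on CentOS):

# turn swap off for the running system
sudo swapoff -a
# comment out any swap entries so swap stays off after a reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab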

And don't forget to run the following command to enable scheduling on the created cluster:

kubectl taint nodes --all node-role.kubernetes.io/master-
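
To confirm the taint was removed (the node name here is just an example):

kubectl describe node kubernetes-master | grep Taints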

For how to install from behind a proxy, you may find this useful:

install using proxy
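
A common pitfall behind a proxy is that traffic to the API server or the cluster CIDRs gets routed through it. Here is a sketch of the environment to export before running kubeadm; the proxy address is a placeholder, and the no_proxy list mirrors this question's adapters plus an example service CIDR:

# placeholder proxy address; replace with your own
export http_proxy=http://proxy.example.com:3128
export https_proxy=$http_proxy
# keep local, host-only, and cluster traffic off the proxy
export no_proxy=localhost,127.0.0.1,10.0.2.15,192.168.56.33,192.168.56.0/24,10.96.0.0/12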

Solution 3:[3]

Check this answer.

Use this PR (until it is approved):

kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

It's a known issue: coreos/flannel#1044

Solution 4:[4]

I could not see the helm server version:

$ helm version --tiller-namespace digital-ocean-namespace
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Error: could not find a ready tiller pod

Running kubectl describe node kubernetes-master --namespace digital-ocean-namespace showed the message:

NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

The nodes were not ready:

$ kubectl get node --namespace digital-ocean-namespace
NAME                  STATUS     ROLES    AGE   VERSION
kubernetes-master     NotReady   master   82m   v1.14.1
kubernetes-worker-1   NotReady   <none>   81m   v1.14.1

I had a version compatibility issue between Kubernetes and the flannel network.

My k8s version was 1.14, as seen in the output of:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

After re-installing the flannel network with the command:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

I could then see the helm server version:

$ helm version --tiller-namespace digital-ocean-namespace
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
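
If you suspect a similar mismatch, you can first check which flannel image the DaemonSet is actually running; this sketch relies on the app=flannel label used by the standard manifest:

kubectl -n kube-system get daemonset -l app=flannel -o wide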

Solution 5:[5]

I resolved this issue by installing the Calico CNI plugin with the following commands:

curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml

Solution 6:[6]

It was a proxy error, as mentioned on GitHub: https://github.com/kubernetes/kubernetes/issues/34695

They suggested using kubeadm init --use-kubernetes-version v1.4.1, but I changed my network entirely (no proxy) and managed to set up my cluster.

After that, we can set up the pod network with kubectl apply -f ...; see https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network

Solution 7:[7]

I solved this by installing a Pod network add-on. I used the Flannel pod network, which is a very simple overlay network that satisfies the Kubernetes requirements.

You can do it with this command:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

You can read more about this in the Kubernetes documentation:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

Solution 8:[8]

I faced the same errors, and it seems that systemd had a problem. I don't remember which systemd version I had before, but updating it solved the problem for me.
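
On CentOS 7 that amounts to something like the following sketch (the exact systemd version that fixes it isn't known):

# update systemd, then restart the kubelet so it runs under the updated systemd
sudo yum update -y systemd
sudo systemctl restart kubelet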

Solution 9:[9]

I faced the same errors; I was seeing the issue after the slave node joined the cluster. The slave node was showing status 'NotReady' after joining.

I checked kubectl describe node ksalve and observed the mentioned issue. After digging deeper, I found that the cgroup driver differed between the master and the slave node: on the master I had configured the systemd driver, while the slave only had the default cgroupfs.

Once I removed the systemd driver from the master node, the slave's status immediately changed to Ready.
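
To check for this kind of mismatch yourself, compare the driver Docker reports on each node and make it consistent. The daemon.json approach below is a sketch that assumes Docker is the container runtime (the kubelet's cgroup driver must be set to match as well):

# print the cgroup driver Docker is using on this node
docker info -f '{{.CgroupDriver}}'

# /etc/docker/daemon.json - force the systemd driver on every node
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# apply the change
sudo systemctl restart docker kubelet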

Solution 10:[10]

My problem was that I was updating the hostname after the cluster was created. By doing that, it was as if the master didn't know it was the master.

I am still running:

sudo hostname $(curl 169.254.169.254/latest/meta-data/hostname)

but now I run it before initializing the cluster.

The error that led me here, from running sudo journalctl -u kubelet:

Unable to register node "ip-10-126-121-125.ec2.internal" with API server: nodes "ip-10-126-121-125.ec2.internal" is forbidden: node "ip-10-126-121-125" cannot modify node "ip-10-126-121-125.ec2.internal"
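
In other words, the fix is purely about ordering: set the hostname first, then initialize. The metadata URL is AWS-specific, and the init flags are just an example (the CIDR shown is flannel's default):

sudo hostname "$(curl -s 169.254.169.254/latest/meta-data/hostname)"
sudo kubeadm init --pod-network-cidr=10.244.0.0/16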

Solution 11:[11]

This is for the AWS VPC CNI:

  1. Step 1: kubectl get mutatingwebhookconfigurations -oyaml > mutating.txt

  2. Step 2: kubectl delete -f mutating.txt

  3. Step 3: Restart the node

  4. Step 4: You should see that the node is ready

  5. Step 5: Install the mutatingwebhookconfiguration back
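
Since step 1 saved the configuration to a file, reinstalling it in step 5 is presumably just a matter of re-applying that same dump:

kubectl apply -f mutating.txt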

Solution 12:[12]

In my case, it was because I forgot to open port 8285. Port 8285 is used by flannel's udp backend, and you need to open it in the firewall.

E.g., if you use the flannel add-on and your OS is CentOS:

firewall-cmd --permanent --add-port=8285/udp
firewall-cmd --reload
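
Note that flannel's default vxlan backend uses a different port, 8472/udp, so with the default backend the rule would instead be:

firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --reload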