Kubernetes Container runtime network not ready

I installed a three-node Kubernetes cluster. The control node looks OK, but after I joined the other two nodes, the status for both of them is NotReady.

On control node:

[root@alva-anstawx01 /]# kubectl get nodes     
NAME                             STATUS     ROLES                  AGE     VERSION
alva-anstawx01.<domain name>   Ready      control-plane,master   7d20h   v1.21.1
alva-anstawx02.<domain name>   NotReady   <none>                 22h     v1.21.1
alva-anstawx03.<domain name>   NotReady   <none>                 22h     v1.21.1

The pods look OK and are all running:

NAME                                                     READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-c8p97                                 1/1     Running   0          7d20h
coredns-558bd4d5db-hjb8g                                 1/1     Running   0          7d20h
etcd-alva-anstawx01.alv.autoliv.int                      1/1     Running   2          7d20h
kube-apiserver-alva-anstawx01.alv.autoliv.int            1/1     Running   2          7d20h
kube-controller-manager-alva-anstawx01.alv.autoliv.int   1/1     Running   2          7d20h
kube-proxy-b8ft2                                         1/1     Running   0          7d20h
kube-proxy-frr7c                                         1/1     Running   0          23h
kube-proxy-ztxbf                                         1/1     Running   0          23h
kube-scheduler-alva-anstawx01.alv.autoliv.int            1/1     Running   2          7d20h

Checking further, it looks like something is missing for the CNI plugin to start on those nodes, and I'm not sure how to proceed.
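The conditions below come from kubectl describe node run against one of the NotReady workers (which worker I ran it on is not important, e.g.):

kubectl describe node alva-anstawx02.<Domain Name>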

Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 09 Jun 2021 12:24:34 +0200   Tue, 08 Jun 2021 14:00:45 +0200   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 09 Jun 2021 12:24:34 +0200   Tue, 08 Jun 2021 14:00:45 +0200   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 09 Jun 2021 12:24:34 +0200   Tue, 08 Jun 2021 14:00:45 +0200   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Wed, 09 Jun 2021 12:24:34 +0200   Tue, 08 Jun 2021 14:00:45 +0200   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized

I have a single network interface on each node.

On affected node:

Jun 09 12:34:19 alva-anstawx02.alv.<Domain Name> kubelet[1026]: E0609 12:34:19.209657    1026 kubelet.go:2211] "Container runtime network not ready" networkReady="N
Jun 09 12:34:19 alva-anstawx02.alv.<Domain Name> kubelet[1026]: E0609 12:34:19.698034    1026 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"Sta
Jun 09 12:34:21 alva-anstawx02.alv.<Domain Name> kubelet[1026]: E0609 12:34:21.817375    1026 remote_image.go:114] "PullImage from image service failed" err="rpc er
Jun 09 12:34:21 alva-anstawx02.alv.<Domain Name> kubelet[1026]: E0609 12:34:21.817429    1026 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code =
Jun 09 12:34:21 alva-anstawx02.alv.<Domain Name> kubelet[1026]: E0609 12:34:21.817627    1026 kuberuntime_manager.go:864] container &Container{Name:calico-typha,Ima
Jun 09 12:34:21 alva-anstawx02.alv.<Domain Name> kubelet[1026]: E0609 12:34:21.817706    1026 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"Sta
Jun 09 12:34:24 alva-anstawx02.alv.<Domain Name> kubelet[1026]: E0609 12:34:24.211195    1026 kubelet.go:2211] "Container runtime network not ready" networkReady="N
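(These kubelet journal lines are cut off at the terminal width; the full messages can be read on the affected node with something like:)

journalctl -u kubelet --no-pager --since "1 hour ago"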

I used the default Calico configuration, and each node has a single interface:

Control node:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:ac:64:8c brd ff:ff:ff:ff:ff:ff
    inet 10.4.9.73/21 brd 10.4.15.255 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:feac:648c/64 scope link 
       valid_lft forever preferred_lft forever
3: vxlan.calico: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 66:c5:7d:06:e5:fe brd ff:ff:ff:ff:ff:ff
    inet 192.168.228.192/32 scope global vxlan.calico
       valid_lft forever preferred_lft forever
    inet6 fe80::64c5:7dff:fe06:e5fe/64 scope link 
       valid_lft forever preferred_lft forever
4: cali5441eeb56bd@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
5: cali389c5f98ecc@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
6: calicc306a285eb@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever

Other nodes:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:ac:67:61 brd ff:ff:ff:ff:ff:ff
    inet 10.4.9.80/21 brd 10.4.15.255 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:feac:6761/64 scope link 
       valid_lft forever preferred_lft forever

Can anyone please help me with how to initialize Calico on the other two nodes?

Edit:

I have resolved an issue with /var space, but that didn't help:

[root@alva-anstawx03 ~]# df -kh
Filesystem                     Size  Used Avail Use% Mounted on
devtmpfs                       1.9G     0  1.9G   0% /dev
tmpfs                          1.9G     0  1.9G   0% /dev/shm
tmpfs                          1.9G   60M  1.8G   4% /run
tmpfs                          1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/vg_system-lv_root  9.8G  2.1G  7.2G  23% /
/dev/sda1                      976M  206M  704M  23% /boot
/dev/mapper/vg_system-lv_var    19G  1.1G   17G   6% /var
/dev/mapper/vg_system-lv_opt   3.9G   72M  3.6G   2% /opt
tmpfs                          1.9G   12K  1.9G   1% /var/lib/kubelet/pods/77b1f7f0-8598-4640-af2f-e960c4c76120/volumes/kubernetes.io~projected/kube-api-access-7xnp8
tmpfs                          1.9G   12K  1.9G   1% /var/lib/kubelet/pods/4398eeeb-0f74-477c-a066-403ecab4abe1/volumes/kubernetes.io~projected/kube-api-access-9bh4m
shm                             64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/56cbc698b06f57937128eadc74cc098c4dfb9f5566e941d7a93baab9695ec22e/shm
shm                             64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/3cb246ac67ca0826ca86f8adb5c5c1b8802c4f96ca330456aea67aec02231f9c/shm
overlay                         19G  1.1G   17G   6% /run/containerd/io.containerd.runtime.v2.task/k8s.io/56cbc698b06f57937128eadc74cc098c4dfb9f5566e941d7a93baab9695ec22e/rootfs
overlay                         19G  1.1G   17G   6% /run/containerd/io.containerd.runtime.v2.task/k8s.io/3cb246ac67ca0826ca86f8adb5c5c1b8802c4f96ca330456aea67aec02231f9c/rootfs
overlay                         19G  1.1G   17G   6% /run/containerd/io.containerd.runtime.v2.task/k8s.io/f3e2bedafb61411951557c6e66d037536240bf25a185e6b3e6da0b6ad0b91a38/rootfs
tmpfs                          378M     0  378M   0% /run/user/0
[root@alva-anstawx03 ~]#

Same on the other node:

[root@alva-anstawx02 ~]# df -kh
Filesystem                     Size  Used Avail Use% Mounted on
devtmpfs                       1.9G     0  1.9G   0% /dev
tmpfs                          1.9G     0  1.9G   0% /dev/shm
tmpfs                          1.9G   68M  1.8G   4% /run
tmpfs                          1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/vg_system-lv_root  9.8G  2.1G  7.2G  23% /
/dev/sda1                      976M  206M  704M  23% /boot
/dev/mapper/vg_system-lv_opt   3.9G   72M  3.6G   2% /opt
/dev/mapper/vg_system-lv_var    19G  1.1G   17G   6% /var
tmpfs                          1.9G   12K  1.9G   1% /var/lib/kubelet/pods/31e01070-282a-453b-8e7f-fe0d93e359ec/volumes/kubernetes.io~projected/kube-api-access-4qhqs
tmpfs                          1.9G   12K  1.9G   1% /var/lib/kubelet/pods/4208e857-28e7-4005-bbe1-8bed0b08548b/volumes/kubernetes.io~projected/kube-api-access-bvjhg
shm                             64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/f3b43b5c1e46236e3e01536cff2089c788e0b39e34e43165608dbb2ea9906cb5/shm
shm                             64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/285018acde46e519f9ae74551d06028304ca19ab76813ed1ca43a4b6e617e4f4/shm
overlay                         19G  1.1G   17G   6% /run/containerd/io.containerd.runtime.v2.task/k8s.io/285018acde46e519f9ae74551d06028304ca19ab76813ed1ca43a4b6e617e4f4/rootfs
overlay                         19G  1.1G   17G   6% /run/containerd/io.containerd.runtime.v2.task/k8s.io/f3b43b5c1e46236e3e01536cff2089c788e0b39e34e43165608dbb2ea9906cb5/rootfs
overlay                         19G  1.1G   17G   6% /run/containerd/io.containerd.runtime.v2.task/k8s.io/42aaae5f8c681ffa5fd0bf6ed3fcf4d8447962131459d4592d1bbb73a320edca/rootfs
tmpfs                          378M     0  378M   0% /run/user/0
[root@alva-anstawx02 ~]# 

Below is the output of kubectl get node and kubectl describe pod for one of the calico-node pods:

[root@alva-anstawx01 ~]#  kubectl get node
NAME                             STATUS     ROLES                  AGE   VERSION
alva-anstawx01.<Domain Name>   Ready      control-plane,master   8d    v1.21.1
alva-anstawx02.<Domain Name>   NotReady   <none>                 43h   v1.21.1
alva-anstawx03.<Domain Name>   NotReady   <none>                 43h   v1.21.1
[root@alva-anstawx01 ~]# kubectl describe pod calico-node-dshv9 -n kube-system
Name:                 calico-node-dshv9
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 alva-anstawx03.<Domain Name>/10.4.9.96
Start Time:           Wed, 09 Jun 2021 20:39:52 +0200
Labels:               controller-revision-hash=c54f47b5c
                      k8s-app=calico-node
                      pod-template-generation=1
Annotations:          <none>
Status:               Pending
IP:                   10.4.9.96
IPs:
  IP:           10.4.9.96
Controlled By:  DaemonSet/calico-node
Init Containers:
  upgrade-ipam:
    Container ID:  
    Image:         docker.io/calico/cni:v3.19.1
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/calico-ipam
      -upgrade
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      KUBERNETES_NODE_NAME:        (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:  <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/lib/cni/networks from host-local-net-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9bh4m (ro)
  install-cni:
    Container ID:  
    Image:         docker.io/calico/cni:v3.19.1
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/install
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      CNI_CONF_NAME:         10-calico.conflist
      CNI_NETWORK_CONFIG:    <set to the key 'cni_network_config' of config map 'calico-config'>  Optional: false
      KUBERNETES_NODE_NAME:   (v1:spec.nodeName)
      CNI_MTU:               <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      SLEEP:                 false
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9bh4m (ro)
  flexvol-driver:
    Container ID:   
    Image:          docker.io/calico/pod2daemon-flexvol:v3.19.1
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /host/driver from flexvol-driver-host (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9bh4m (ro)
Containers:
  calico-node:
    Container ID:   
    Image:          docker.io/calico/node:v3.19.1
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      250m
    Liveness:   exec [/bin/calico-node -felix-live -bird-live] delay=10s timeout=1s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/calico-node -felix-ready -bird-ready] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      DATASTORE_TYPE:                     kubernetes
      WAIT_FOR_DATASTORE:                 true
      NODENAME:                            (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:          <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
      CLUSTER_TYPE:                       k8s,bgp
      IP:                                 autodetect
      CALICO_IPV4POOL_IPIP:               Always
      CALICO_IPV4POOL_VXLAN:              Never
      FELIX_IPINIPMTU:                    <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_VXLANMTU:                     <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_WIREGUARDMTU:                 <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      CALICO_DISABLE_FILE_LOGGING:        true
      FELIX_DEFAULTENDPOINTTOHOSTACTION:  ACCEPT
      FELIX_IPV6SUPPORT:                  false
      FELIX_HEALTHENABLED:                true
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /sys/fs/ from sysfs (rw)
      /var/lib/calico from var-lib-calico (rw)
      /var/log/calico/cni from cni-log-dir (ro)
      /var/run/calico from var-run-calico (rw)
      /var/run/nodeagent from policysync (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9bh4m (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  var-run-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/calico
    HostPathType:  
  var-lib-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/calico
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  sysfs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/
    HostPathType:  DirectoryOrCreate
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  cni-log-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/calico/cni
    HostPathType:  
  host-local-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/cni/networks
    HostPathType:  
  policysync:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/nodeagent
    HostPathType:  DirectoryOrCreate
  flexvol-driver-host:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
    HostPathType:  DirectoryOrCreate
  kube-api-access-9bh4m:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  13h                     default-scheduler  Successfully assigned kube-system/calico-node-dshv9 to alva-anstawx03.<Domain Name>
  Warning  Failed     13h (x2 over 13h)       kubelet            Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:51f294c56842e731fa8d7bdf6b9ba39771f69ba4eda28e186461be2662e599df: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    13h (x4 over 13h)       kubelet            Pulling image "docker.io/calico/cni:v3.19.1"
  Warning  Failed     13h (x4 over 13h)       kubelet            Error: ErrImagePull
  Warning  Failed     13h (x2 over 13h)       kubelet            Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:f301171be0add870152483fcce71b28cafb8e910f61ff003032e9b1053b062c4: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     13h (x6 over 13h)       kubelet            Error: ImagePullBackOff
  Normal   BackOff    12h (x61 over 13h)      kubelet            Back-off pulling image "docker.io/calico/cni:v3.19.1"
  Warning  Failed     12h (x2 over 12h)       kubelet            Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:f301171be0add870152483fcce71b28cafb8e910f61ff003032e9b1053b062c4: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    12h (x4 over 12h)       kubelet            Pulling image "docker.io/calico/cni:v3.19.1"
  Warning  Failed     12h (x4 over 12h)       kubelet            Error: ErrImagePull
  Warning  Failed     12h (x2 over 12h)       kubelet            Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:51f294c56842e731fa8d7bdf6b9ba39771f69ba4eda28e186461be2662e599df: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     12h (x6 over 12h)       kubelet            Error: ImagePullBackOff
  Normal   BackOff    12h (x18 over 12h)      kubelet            Back-off pulling image "docker.io/calico/cni:v3.19.1"
  Normal   Pulling    12h (x4 over 12h)       kubelet            Pulling image "docker.io/calico/cni:v3.19.1"
  Warning  Failed     12h (x4 over 12h)       kubelet            Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:f301171be0add870152483fcce71b28cafb8e910f61ff003032e9b1053b062c4: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     12h (x4 over 12h)       kubelet            Error: ErrImagePull
  Warning  Failed     12h (x6 over 12h)       kubelet            Error: ImagePullBackOff
  Normal   BackOff    12h (x81 over 12h)      kubelet            Back-off pulling image "docker.io/calico/cni:v3.19.1"
  Warning  Failed     12h (x2 over 12h)       kubelet            Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:f301171be0add870152483fcce71b28cafb8e910f61ff003032e9b1053b062c4: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    12h (x4 over 12h)       kubelet            Pulling image "docker.io/calico/cni:v3.19.1"
  Warning  Failed     12h (x2 over 12h)       kubelet            Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:51f294c56842e731fa8d7bdf6b9ba39771f69ba4eda28e186461be2662e599df: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     12h (x4 over 12h)       kubelet            Error: ErrImagePull
  Warning  Failed     12h (x6 over 12h)       kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m45s (x3192 over 12h)  kubelet            Back-off pulling image "docker.io/calico/cni:v3.19.1"
[root@alva-anstawx01 ~]# kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                                     READY   STATUS                  RESTARTS   AGE   IP                NODE                             NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-78d6f96c7b-wb96g                 1/1     Running                 1          13h   192.168.228.198   alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   calico-node-dshv9                                        0/1     Init:ImagePullBackOff   0          13h   10.4.9.96         alva-anstawx03.<Domain Name>   <none>           <none>
kube-system   calico-node-rfrnq                                        0/1     Init:ImagePullBackOff   0          13h   10.4.9.80         alva-anstawx02.<Domain Name>   <none>           <none>
kube-system   calico-node-sl864                                        1/1     Running                 1          13h   10.4.9.73         alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   coredns-558bd4d5db-c8p97                                 1/1     Running                 2          8d    192.168.228.200   alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   coredns-558bd4d5db-hjb8g                                 1/1     Running                 2          8d    192.168.228.199   alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   etcd-alva-anstawx01.<Domain Name>                      1/1     Running                 4          8d    10.4.9.73         alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   kube-apiserver-alva-anstawx01.<Domain Name>            1/1     Running                 4          8d    10.4.9.73         alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   kube-controller-manager-alva-anstawx01.<Domain Name>   1/1     Running                 4          8d    10.4.9.73         alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   kube-proxy-b8ft2                                         1/1     Running                 2          8d    10.4.9.73         alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   kube-proxy-frr7c                                         1/1     Running                 4          43h   10.4.9.80         alva-anstawx02.<Domain Name>   <none>           <none>
kube-system   kube-proxy-ztxbf                                         1/1     Running                 4          43h   10.4.9.96         alva-anstawx03.<Domain Name>   <none>           <none>
kube-system   kube-scheduler-alva-anstawx01.<Domain Name>            1/1     Running                 4          8d    10.4.9.73         alva-anstawx01.<Domain Name>   <none>           <none>


Solution 1:[1]

After seeing the whole log line entry:

Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:f301171be0add870152483fcce71b28cafb8e910f61ff003032e9b1053b062c4: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

We're interested in this part:

429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit.

It appears the issue is not with Calico or the Kubernetes cluster itself, but with pulling the Docker image for it.

As stated at the link above:

The rate limits of 100 container image requests per six hours for anonymous usage, and 200 container image requests per six hours for free Docker accounts are now in effect. Image requests exceeding these limits will be denied until the six hour window elapses.

The images should download again once the six-hour window has elapsed and no further pulls from this IP count against the limit. To trigger new pulls, delete the pods responsible for the Calico network on the affected nodes; since they are managed by a DaemonSet, they will be re-created automatically. To do this, run:

kubectl delete pod calico-node-rfrnq -n kube-system
kubectl delete pod calico-node-dshv9 -n kube-system
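If the pulls keep hitting the rate limit, another option (not from the original answer; a sketch that assumes you have Docker Hub credentials and that the calico-node ServiceAccount from the default manifest exists in kube-system) is to create an image pull secret, attach it to that ServiceAccount, and then delete the calico-node pods again so the re-created pods pull with credentials:

kubectl create secret docker-registry dockerhub-creds \
  --docker-username=<your-docker-id> --docker-password=<your-password> \
  -n kube-system
kubectl patch serviceaccount calico-node -n kube-system \
  -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'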

If for any reason your nodes still can't pull images from Docker Hub, note that you already have a Calico pod running on the control plane. That means you can transfer the images from the control plane to the worker nodes manually.

Docker

On the control plane, run the following command:

docker save -o ~/calico-cni.tar calico/cni:v3.19.1

Copy the resulting file to the worker nodes using sftp, scp, or any other method.
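For example, with scp (adjust the host and destination path to your environment):

scp ~/calico-cni.tar root@alva-anstawx02.<Domain Name>:/root/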

On each worker node, run:

docker load -i calico-cni.tar

If the pods still don't start after the images are loaded, consider deleting the calico-node pods on the worker nodes so they are re-created and pick up the local images.
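You can confirm they come back up Running using the label shown in the pod description earlier:

kubectl get pods -n kube-system -l k8s-app=calico-node -o wide --watch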

Please note that if you go with the manual approach, there may be other images to transfer. For example, on my worker node there are four Calico-related images:

docker images | grep calico

calico/node                                  v3.19.1       c4d75af7e098   3 weeks ago     168MB
calico/pod2daemon-flexvol                    v3.19.1       5660150975fb   3 weeks ago     21.7MB
calico/cni                                   v3.19.1       5749e8b276f9   3 weeks ago     146MB
calico/kube-controllers                      v3.19.1       5d3d5ddc8605   3 weeks ago     60.6MB
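If you need to copy all four, a small shell loop saves one archive per image (a sketch; adjust the tag if your versions differ):

for img in node pod2daemon-flexvol cni kube-controllers; do
  docker save -o ~/calico-$img.tar calico/$img:v3.19.1
done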

containerd

Export the image using containerd:

ctr image export <output-filename> <image-name>

Example:

ctr image export calico-node-v3.11.2.tar \
docker.io/calico/node:v3.11.2

Copy the resulting file to the worker nodes, then import it on each one:

ctr image import <filename-from-previous-step>

See the containerd documentation for the full ctr syntax.
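One caveat (not from the original answer): when containerd is the Kubernetes runtime, the kubelet's images live in the k8s.io containerd namespace, so you will likely need to pass -n k8s.io to ctr for the imported image to be visible to Kubernetes. A sketch:

# on the control plane:
ctr -n k8s.io image export calico-cni-v3.19.1.tar docker.io/calico/cni:v3.19.1
# on the worker, after copying the tar over:
ctr -n k8s.io image import calico-cni-v3.19.1.tar
ctr -n k8s.io image ls | grep calico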

Solution 2:[2]

I got the same problem, but with a slightly different message: "no networks found in /etc/cni/net.d".

I fixed it with the following steps (a command sketch follows the list):

  1. Create the folder /etc/cni/net.d on the failed node.
  2. Copy /etc/cni/net.d/10-flannel.conflist from a healthy node to the failed node.
  3. Run systemctl restart kubelet on the failed node.
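A minimal sketch of those steps, assuming a Flannel-based cluster and SSH access to a healthy node (the host name is a placeholder):

mkdir -p /etc/cni/net.d
scp root@<healthy-node>:/etc/cni/net.d/10-flannel.conflist /etc/cni/net.d/
systemctl restart kubelet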

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution sources:
Solution 1:
Solution 2: xiaojueguan