kubelet does not have ClusterDNS IP configured in MicroK8s
I'm using MicroK8s on Ubuntu. I'm trying to run a simple hello-world program, but I get the following error when the pod is created:
kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy
Here is my deployment.yaml file, which I'm trying to apply:
apiVersion: v1
kind: Service
metadata:
  name: grpc-hello
spec:
  ports:
  - port: 80
    targetPort: 9000
    protocol: TCP
    name: http
  selector:
    app: grpc-hello
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-hello
  template:
    metadata:
      labels:
        app: grpc-hello
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http2_port=9000",
          "--backend=grpc://127.0.0.1:50051",
          "--service=hellogrpc.endpoints.octa-test-123.cloud.goog",
          "--rollout_strategy=managed",
        ]
        ports:
        - containerPort: 9000
      - name: python-grpc-hello
        image: gcr.io/octa-test-123/python-grpc-hello:1.0
        ports:
        - containerPort: 50051
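For context, the manifest above is applied and the pod inspected with commands along these lines (a sketch assuming MicroK8s's bundled kubectl; the pod name is taken from the events below):

```sh
# Apply the Service and Deployment (MicroK8s ships its own kubectl)
microk8s kubectl apply -f deployment.yaml

# Watch the pod come up, then inspect its events
microk8s kubectl get pods -l app=grpc-hello
microk8s kubectl describe pod grpc-hello-66869cf9fb-kpr69
```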
Here is what I get when I describe the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 31s default-scheduler Successfully assigned default/grpc-hello-66869cf9fb-kpr69 to azeem-ubuntu
Normal Started 30s kubelet, azeem-ubuntu Started container python-grpc-hello
Normal Pulled 30s kubelet, azeem-ubuntu Container image "gcr.io/octa-test-123/python-grpc-hello:1.0" already present on machine
Normal Created 30s kubelet, azeem-ubuntu Created container python-grpc-hello
Normal Pulled 12s (x3 over 31s) kubelet, azeem-ubuntu Container image "gcr.io/endpoints-release/endpoints-runtime:1" already present on machine
Normal Created 12s (x3 over 31s) kubelet, azeem-ubuntu Created container esp
Normal Started 12s (x3 over 30s) kubelet, azeem-ubuntu Started container esp
Warning MissingClusterDNS 8s (x10 over 31s) kubelet, azeem-ubuntu pod: "grpc-hello-66869cf9fb-kpr69_default(19c5a870-fcf5-415c-bcb6-dedfc11f936c)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
Warning BackOff 8s (x2 over 23s) kubelet, azeem-ubuntu Back-off restarting failed container
I searched a lot about this and found some answers, but none of them worked for me. I also created kube-dns myself, but I don't know why it still isn't working. The kube-dns pods are running in the kube-system namespace:
NAME READY STATUS RESTARTS AGE
kube-dns-6dbd676f7-dfbjq 3/3 Running 0 22m
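That listing can be reproduced with something like the following (a sketch; the label selector assumes the k8s-app: kube-dns label from the manifest below):

```sh
# List the kube-dns pods in the kube-system namespace
microk8s kubectl -n kube-system get pods -l k8s-app=kube-dns
```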
And here is what I applied to create kube-dns:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.152.183.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  upstreamNameservers: |-
    ["8.8.8.8", "8.8.4.4"]
  # Why set upstream ns: https://github.com/kubernetes/minikube/issues/2027
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: gcr.io/google-containers/k8s-dns-kube-dns:1.15.8
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google-containers/k8s-dns-dnsmasq-nanny:1.15.8
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google-containers/k8s-dns-sidecar:1.15.8
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns
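A quick way to check whether cluster DNS is actually usable from inside the cluster is to resolve a service name from a throwaway pod (a sketch; busybox:1.28 is an assumed test image, and while kubelet has no --cluster-dns setting the lookup will fail, which is exactly the symptom here):

```sh
# Confirm the kube-dns service got the expected clusterIP (10.152.183.10)
microk8s kubectl -n kube-system get svc kube-dns

# Try to resolve a built-in service name from a throwaway pod
microk8s kubectl run dnstest --rm -it --restart=Never \
  --image=busybox:1.28 -- nslookup kubernetes.default
```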
Please let me know what I'm missing.
Solution 1:[1]
You have not specified how you deployed kube-dns, but with MicroK8s it is recommended to use CoreDNS. You should not deploy kube-dns or CoreDNS yourself; instead, enable DNS with the command microk8s enable dns, which deploys CoreDNS and sets up DNS for the cluster.
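In practice that looks like this (the verification step is a sketch; the addon should also restart kubelet with the matching --cluster-dns flag):

```sh
# Enable the DNS addon (deploys CoreDNS and points kubelet at it)
microk8s enable dns

# Verify the CoreDNS pod came up in kube-system
microk8s kubectl -n kube-system get pods
```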
Solution 2:[2]
I had the same problem with a MicroK8s cluster. Even though I had already enabled the dns add-on, it didn't work. I looked up the kube-dns service's cluster IP address with
kubectl -n kube-system get svc/kube-dns
I stopped the MicroK8s cluster and edited the kubelet configuration file /var/snap/microk8s/current/args/kubelet, adding the following lines (in my case):
--resolv-conf=""
--cluster-dns=A.B.C.D
--cluster-domain=cluster.local
Afterwards I started the cluster, and the problem did not occur again.
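Spelled out end to end, that procedure looks roughly like this (a sketch; 10.152.183.10 stands in for A.B.C.D, since that is the clusterIP the kube-dns Service in the question uses):

```sh
# Find the cluster IP of the kube-dns service
microk8s kubectl -n kube-system get svc/kube-dns

# Stop the cluster before touching kubelet's arguments
microk8s stop

# Append the DNS flags to the kubelet args file (requires root)
sudo tee -a /var/snap/microk8s/current/args/kubelet <<'EOF'
--resolv-conf=""
--cluster-dns=10.152.183.10
--cluster-domain=cluster.local
EOF

# Start the cluster again; new pods now get ClusterFirst DNS
microk8s start
```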
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Jonathan Lin |
| Solution 2 | Leonel |