no endpoints available for service "kubernetes-dashboard"

I'm trying to follow the kubernetes/dashboard project on GitHub (general-purpose web UI for Kubernetes clusters).

deploy/access:

# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
# kubectl proxy
Starting to serve on 127.0.0.1:8001

curl:

# curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}# 

Please advise.

per @VKR

$ kubectl get pods --all-namespaces 
NAMESPACE     NAME                                              READY   STATUS              RESTARTS   AGE
kube-system   coredns-576cbf47c7-56vg7                          0/1     ContainerCreating   0          57m
kube-system   coredns-576cbf47c7-sn2fk                          0/1     ContainerCreating   0          57m
kube-system   etcd-wcmisdlin02.uftwf.local                      1/1     Running             0          56m
kube-system   kube-apiserver-wcmisdlin02.uftwf.local            1/1     Running             0          56m
kube-system   kube-controller-manager-wcmisdlin02.uftwf.local   1/1     Running             0          56m
kube-system   kube-proxy-2hhf7                                  1/1     Running             0          6m57s
kube-system   kube-proxy-lzfcx                                  1/1     Running             0          7m35s
kube-system   kube-proxy-rndhm                                  1/1     Running             0          57m
kube-system   kube-scheduler-wcmisdlin02.uftwf.local            1/1     Running             0          56m
kube-system   kubernetes-dashboard-77fd78f978-g2hts             0/1     Pending             0          2m38s
$ 

logs:

$ kubectl logs kubernetes-dashboard-77fd78f978-g2hts -n kube-system
$ 

describe:

$ kubectl describe pod kubernetes-dashboard-77fd78f978-g2hts -n kube-system
Name:               kubernetes-dashboard-77fd78f978-g2hts
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             k8s-app=kubernetes-dashboard
                    pod-template-hash=77fd78f978
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/kubernetes-dashboard-77fd78f978
Containers:
  kubernetes-dashboard:
    Image:      k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
    Port:       8443/TCP
    Host Port:  0/TCP
    Args:
      --auto-generate-certificates
    Liveness:     http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-gp4l7 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  
  kubernetes-dashboard-token-gp4l7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-gp4l7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                      From               Message
  ----     ------            ----                     ----               -------
  Warning  FailedScheduling  4m39s (x21689 over 20h)  default-scheduler  0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
$ 


Solution 1:[1]

It would appear that you are attempting to deploy Kubernetes using kubeadm but have skipped the step of installing a pod network add-on (CNI). Notice the warning:

The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed. kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).

Once you do this, the CoreDNS pods should come up healthy. This can be verified with: kubectl get pods -n kube-system -l k8s-app=kube-dns

Then the kubernetes-dashboard pod should come up healthy as well.
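
As a minimal sketch (assuming you pick Calico; any supported CNI works, and the manifest URL below is the v3.8 one also used in Solution 3), installing the add-on and re-checking the pods looks roughly like this:

# install a pod network add-on, e.g. Calico (pick the CNI and version that match your cluster)
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
# CoreDNS pods should go from ContainerCreating to Running
kubectl get pods -n kube-system -l k8s-app=kube-dns
# the dashboard pod should then be scheduled and start
kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard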

Solution 2:[2]

You could refer to https://github.com/kubernetes/dashboard#getting-started

Also, note the namespace in your link: newer dashboard releases are deployed into the kubernetes-dashboard namespace rather than kube-system, so please try this link instead: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
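
If you are not sure which namespace the dashboard service actually lives in, a quick check (a sketch; the k8s-app label is the one used by the standard dashboard manifests) is:

kubectl get svc --all-namespaces -l k8s-app=kubernetes-dashboard

The namespace shown there is the one to use in the proxy URL.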

Solution 3:[3]

I had the same problem. In the end it turned out to be a Calico network configuration problem. But step by step...

First I checked if the Dashboard Pod was running:

kubectl get pods --all-namespaces

The result for me was:

NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE
kube-system            calico-kube-controllers-bcc6f659f-j57l9      1/1     Running            2          19h
kube-system            calico-node-hdxp6                            0/1     CrashLoopBackOff   13         15h
kube-system            calico-node-z6l56                            0/1     Running            68         19h
kube-system            coredns-74ff55c5b-8l6m6                      1/1     Running            2          19h
kube-system            coredns-74ff55c5b-v7pkc                      1/1     Running            2          19h
kube-system            etcd-got-virtualbox                          1/1     Running            3          19h
kube-system            kube-apiserver-got-virtualbox                1/1     Running            3          19h
kube-system            kube-controller-manager-got-virtualbox       1/1     Running            3          19h
kube-system            kube-proxy-q99s5                             1/1     Running            2          19h
kube-system            kube-proxy-vrpcd                             1/1     Running            1          15h
kube-system            kube-scheduler-got-virtualbox                1/1     Running            2          19h
kubernetes-dashboard   dashboard-metrics-scraper-7b59f7d4df-qc9ms   1/1     Running            0          28m
kubernetes-dashboard   kubernetes-dashboard-74d688b6bc-zrdk4        0/1     CrashLoopBackOff   9          28m

The last line indicates that the dashboard pod could not be started (status=CrashLoopBackOff), and the second line shows that the Calico node has problems. Most likely the root cause is Calico.

The next step is to have a look at the pod log (change the namespace/name to match YOUR pod list):

kubectl logs kubernetes-dashboard-74d688b6bc-zrdk4 -n kubernetes-dashboard

The result for me was:

2021/03/05 13:01:12 Starting overwatch
2021/03/05 13:01:12 Using namespace: kubernetes-dashboard
2021/03/05 13:01:12 Using in-cluster config to connect to apiserver
2021/03/05 13:01:12 Using secret token for csrf signing
2021/03/05 13:01:12 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

Hm - not really helpful. After searching for "dial tcp 10.96.0.1:443: i/o timeout" I found this information, where it says ...

If you follow the kubeadm instructions to the letter ... Which means install docker, kubernetes (kubeadm, kubectl, & kubelet), and calico with the Kubeadm hosted instructions ... and your computer nodes have a physical ip address in the range of 192.168.X.X then you will end up with the above mentioned non-working dashboard. This is because the node ip addresses clash with the internal calico ip addresses.

https://github.com/kubernetes/dashboard/issues/1578#issuecomment-329904648

Yes, indeed I do have a physical IP in the range of 192.168.x.x, as many others might have as well. I wish Calico would check this during setup.
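
To confirm the overlap yourself (a rough check, not part of the original instructions; the addresses below are from this environment), compare the host address with the pool Calico is configured with:

# the host's physical addresses (here in the 192.168.x.x range)
ip -4 addr show
# the pod CIDR the calico-node DaemonSet is using
kubectl get ds calico-node -n kube-system -o yaml | grep -A 1 CALICO_IPV4POOL_CIDR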

So let's move the pod network to a different IP range:

You should use one of the reserved private network ranges: 10.0.0.0/8 (16,777,216 addresses), 172.16.0.0/12 (1,048,576 addresses), or 192.168.0.0/16 (65,536 addresses). Otherwise Calico will terminate with an error saying "Invalid CIDR specified in CALICO_IPV4POOL_CIDR" ...

# reset the previous cluster state and remove the old Calico CNI config
sudo kubeadm reset
sudo rm /etc/cni/net.d/10-calico.conflist
sudo rm /etc/cni/net.d/calico-kubeconfig

# re-initialize with a pod network CIDR that does not clash with the host's 192.168.x.x network
export CALICO_IPV4POOL_CIDR=172.16.0.0
export MASTER_IP=192.168.100.122
sudo kubeadm init --pod-network-cidr=$CALICO_IPV4POOL_CIDR/12 --apiserver-advertise-address=$MASTER_IP --apiserver-cert-extra-sans=$MASTER_IP

# set up kubectl access for the current user
mkdir -p $HOME/.kube
sudo rm -f $HOME/.kube/config
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo chown $(id -u):$(id -g) /etc/kubernetes/kubelet.conf

# download the Calico manifest, swap the default pod CIDR for ours, and apply it
wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml -O calico.yaml
sudo sed -i "s/192.168.0.0\/16/$CALICO_IPV4POOL_CIDR\/12/g" calico.yaml
sudo sed -i "s/192.168.0.0/$CALICO_IPV4POOL_CIDR/g" calico.yaml
kubectl apply -f calico.yaml

Now we test if all calico pods are running:

kubectl get pods --all-namespaces

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-bcc6f659f-ns7kz   1/1     Running   0          15m
kube-system   calico-node-htvdv                         1/1     Running   6          15m
kube-system   coredns-74ff55c5b-lqwpd                   1/1     Running   0          17m
kube-system   coredns-74ff55c5b-qzc87                   1/1     Running   0          17m
kube-system   etcd-got-virtualbox                       1/1     Running   0          17m
kube-system   kube-apiserver-got-virtualbox             1/1     Running   0          17m
kube-system   kube-controller-manager-got-virtualbox    1/1     Running   0          18m
kube-system   kube-proxy-6xr5j                          1/1     Running   0          17m
kube-system   kube-scheduler-got-virtualbox             1/1     Running   0          17m

Looks good. If not, check CALICO_IPV4POOL_CIDR by editing the node config: KUBE_EDITOR="nano" kubectl edit -n kube-system ds calico-node

Let's apply the kubernetes-dashboard manifest and start the proxy:

export KUBECONFIG=$HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
kubectl proxy

Now I can load http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
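
If the page still reports ServiceUnavailable, it is worth confirming first that both dashboard pods are Running (the v2.0.0 manifest deploys them into the kubernetes-dashboard namespace):

kubectl get pods -n kubernetes-dashboard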

Solution 4:[4]

I was facing the same issue, so I followed the official docs and then went to the https://github.com/kubernetes/dashboard URL. There is another way using Helm, described at https://artifacthub.io/packages/helm/k8s-dashboard/kubernetes-dashboard

After installing Helm, run these two commands:

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/

helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard

It worked, but in the default namespace, at this link: http://localhost:8001/api/v1/namespaces/default/services/https:kubernetes-dashboard:https/proxy/#/workloads?namespace=default
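
To double-check what the release created (a sketch assuming the default release name kubernetes-dashboard in the default namespace; the instance label is the standard one set by Helm charts), you can run:

helm status kubernetes-dashboard
kubectl get deployments,services -n default -l app.kubernetes.io/instance=kubernetes-dashboard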

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 user10520276
Solution 2 ayman.mostafa
Solution 3
Solution 4 ali ahmed