kubectl: Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
I recently installed Kubernetes on VMware and configured a few pods. While configuring those pods, they automatically picked up the IP of the VM, and I was able to access the application at that time. Recently, however, I rebooted the VM and the machine hosting it, and I suspect the IP of the VM changed during this. Now I get the error below when running kubectl get pod -n <namespaceName>:
userX@ubuntu:~$ kubectl get pod -n NameSpaceX
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
userX@ubuntu:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
kubectl cluster-info as well as other related commands give the same output.
In the VMware Workstation settings, the network adapter is set to share the host's IP address (NAT). We are not sure whether this has any impact.
We also tried adding the entries below to /etc/hosts, but it did not help.
127.0.0.1 localhost
192.168.214.136 localhost
127.0.1.1 ubuntu
I expect to get the pods running again so that the application is accessible. Instead of reinstalling all pods, which is time consuming, we are looking for a quick workaround to bring the pods back to a running state.
Solution 1:[1]
If you use minikube, sometimes all you need to do is restart minikube.
Run:
minikube start
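If a plain start does not help, a stop/start cycle usually refreshes the cluster entry in your kubeconfig with the VM's current address; a minimal sketch using only standard minikube/kubectl commands:
minikube status       # check whether the cluster components are reported as Running
minikube stop
minikube start
kubectl cluster-info  # should now reach the api-server at its current address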
Solution 2:[2]
I encountered the same issue - the problem was that the master node didn't expose port 6443 outside.
Below are the steps I took to fix it (a consolidated command sketch follows the firewalld output at the end of this solution).
1) Check the IP of the api-server. This can be verified via the .kube/config file (under the server field) or with: kubectl describe pod/kube-apiserver-<master-node-name> -n kube-system
2) Run curl https://<kube-apiserver-IP>:6443 and see if port 6443 is open.
3) If port 6443 is open, you should get a certificate-related error like:
curl: (60) SSL certificate problem: unable to get local issuer certificate
4) If port 6443 is not open:
4.A) SSH into the master node.
4.B) Run sudo firewall-cmd --add-port=6443/tcp --permanent (I'm assuming firewalld is installed).
4.C) Run sudo firewall-cmd --reload.
4.D) Run sudo firewall-cmd --list-all and you should see that port 6443 is now listed:
public
target: default
icmp-block-inversion: no
interfaces:
sources:
services: dhcpv6-client ssh
ports: 6443/tcp <---- Here
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
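For convenience, here is a sketch of the same checks strung together (run on the master; <kube-apiserver-IP> is the address found in step 1, and firewalld is assumed as in step 4.B):
grep server ~/.kube/config                          # 1) api-server address kubectl is using
curl https://<kube-apiserver-IP>:6443               # 2-3) a certificate error here means the port is open
sudo firewall-cmd --add-port=6443/tcp --permanent   # 4.B) open the port if it is not
sudo firewall-cmd --reload                          # 4.C)
sudo firewall-cmd --list-all                        # 4.D) confirm 6443/tcp is listed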
Solution 3:[3]
The common practice is to copy the config file to your home directory:
sudo cp /etc/kubernetes/admin.conf ~/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
Also, make sure that the api-server address in that file is valid:
server: https://<master-node-ip>:6443
If it is not, you can edit it manually with any text editor.
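In this question's scenario the VM's IP changed after a reboot, so a quick way to spot a mismatch is to compare the VM's current address with the one kubectl is using (a simple sketch, not specific to any particular setup):
ip addr show                  # current IP address of the VM
grep server ~/.kube/config    # address kubectl is trying to reach
If the two differ, update the server: line in ~/.kube/config to the current address.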
Solution 4:[4]
You need to point kubectl at the admin.conf file before running kubectl commands. You can do this via the KUBECONFIG environment variable:
export KUBECONFIG=<path>/admin.conf
After this you should be able to run kubectl commands, assuming your Kubernetes cluster is set up properly.
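A minimal sketch of this on a kubeadm master, assuming you are running as root and that admin.conf sits in the default kubeadm location also used in Solution 3 (as a regular user, prefer the copy-and-chown approach from Solution 3, since admin.conf is normally readable only by root):
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl cluster-info                                                 # should now reach the api-server
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc     # optional: persist for new shells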
Solution 5:[5]
Last night I had the exact same error installing Kubernetes using this puppet module: https://forge.puppet.com/puppetlabs/kubernetes
It turned out to be an incorrect iptables setting on the master that blocked all non-local requests towards the API.
The way I solved it (brute-force solution) was to:
- completely remove all installed k8s-related software (also all config files, etcd data, docker images, mounted tmpfs filesystems, ...)
- wipe the iptables rules completely: https://serverfault.com/questions/200635/best-way-to-clear-all-iptables-rules
- reinstall
This is what solved the problem in my case.
There is probably a much nicer and cleaner way to do this (i.e. simply change the iptables rules to allow access).
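A rough sketch of that cleaner approach, assuming the offending rule rejects inbound traffic to the api-server port (run on the master; where the ACCEPT rule needs to go depends on your existing chains, so inspect them first):
sudo iptables -L INPUT -n --line-numbers               # look for a REJECT/DROP rule matching port 6443
sudo iptables -I INPUT -p tcp --dport 6443 -j ACCEPT   # insert an ACCEPT rule for the api-server port ahead of it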
Solution 6:[6]
If you are getting the error below, then you should also check the token validity.
Unable to connect to the server: dial tcp 192.168.93.10:6443: connect: no route to host
Check your token validity using the command kubeadm token list. If your token has expired, you have to reset the cluster using kubeadm reset and then initialize it again using kubeadm init --token-ttl 0.
Then check the status of the token again using kubeadm token list. Note that here the TTL value will be <forever> and the EXPIRES value will be <never>.
Example:
[root@master1 ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
nh48tb.d79ysdsaj8bchms9 <forever> <never> authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
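Keep in mind that kubeadm reset followed by kubeadm init produces a fresh cluster with new certificates, so the local kubeconfig must be refreshed as well before kubectl will connect again; the command is the same one shown in Solution 3:
sudo cp /etc/kubernetes/admin.conf ~/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config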
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
Solution | Source
---|---
Solution 1 | vitaliis
Solution 2 | RtmY
Solution 3 | A_Suh
Solution 4 | Navneet Nandan Jha
Solution 5 | Niels Basjes
Solution 6 | desertnaut