Unable to validate Kubernetes cluster using Kops
I am new to Kubernetes. I am using Kops to deploy my Kubernetes application on AWS. I have already registered my domain on AWS and also created a hosted zone and attached it to my default VPC.
Creating my Kubernetes cluster through kops succeeds. However, when I try to validate it with kops validate cluster, it fails with the following error:
unable to resolve Kubernetes cluster API URL dns: lookup api.ucla.dt-api-k8s.com on 149.142.35.46:53: no such host
I have tried debugging this error but failed. Can you please help me out? I am very frustrated now.
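For reference, a typical validation run looks like the following sketch. The state-store bucket is a placeholder, and the cluster name is inferred from the API hostname in the error above:

```sh
# Placeholder state store and cluster name -- substitute your own.
export KOPS_STATE_STORE=s3://your-kops-state-bucket

# --wait keeps retrying for up to 10 minutes before giving up.
kops validate cluster --name ucla.dt-api-k8s.com --wait 10m
```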
Solution 1:[1]
From what you describe, you created a Private Hosted Zone in Route 53. The validation is probably failing because kops is trying to reach the cluster API from your machine, which is outside the VPC, but private hosted zones only answer queries originating from within the VPC. Specifically, the hostname api.ucla.dt-api-k8s.com is where the Kubernetes API lives; it is the means by which you communicate with and issue commands to the cluster from your computer. A Private Hosted Zone won't let you resolve this API endpoint from the outside world (your computer).
One way to resolve this is to make your hosted zone public. Kops will still create a VPC for the cluster (unless configured otherwise), but with a public zone the API record resolves from your computer, so you can reach the API.
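As a quick check, you can confirm from your machine whether the record is publicly resolvable, and, if you rebuild, ask kops for public DNS explicitly. This is a sketch only; the availability zone is a placeholder:

```sh
# If this returns nothing, the record likely exists only in a
# private hosted zone:
dig +short api.ucla.dt-api-k8s.com

# Sketch of recreating the cluster against public DNS
# (--zones value is a placeholder):
kops create cluster \
  --name ucla.dt-api-k8s.com \
  --zones us-east-1a \
  --dns public \
  --yes
```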
Solution 2:[2]
I encountered this last night using a kops-based cluster-creation script that had worked previously. I thought switching regions might help, but it didn't. This morning it is working again; this feels like an intermittent issue on the AWS side.
So the answer I'm suggesting is: when this happens, you may need to give it a few hours to resolve itself. In my case, I rebuilt the cluster from scratch after waiting overnight. I don't know whether starting from scratch was necessary -- I hope not.
Solution 3:[3]
I came across this problem on an Ubuntu box. What I did was add the DNS record from the Route 53 hosted zone to /etc/hosts.
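A minimal sketch of that workaround, assuming the AWS CLI is configured; the hosted-zone ID and the resolved IP are placeholders:

```sh
# Look up the API record inside the private hosted zone
# (Z0123456789EXAMPLE is a placeholder zone ID):
aws route53 list-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --query "ResourceRecordSets[?Name=='api.ucla.dt-api-k8s.com.']"

# Pin the returned IP locally (203.0.113.10 is a placeholder):
echo "203.0.113.10 api.ucla.dt-api-k8s.com" | sudo tee -a /etc/hosts
```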
Solution 4:[4]
This is all I had to run:
kops export kubecfg <cluster-name> --admin
This imports the "new" kubeconfig needed to access the kops cluster.
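For example (the cluster name is a placeholder; on kops 1.19 and later, --admin is required to embed admin credentials in the kubeconfig):

```sh
# Rewrites the cluster context in ~/.kube/config:
kops export kubecfg ucla.dt-api-k8s.com --admin

# Sanity-check that the refreshed credentials reach the API:
kubectl cluster-info
```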
Solution 5:[5]
Here is how I resolved the issue: it looks like there is a bug in kops, because it reports **Validation failed: unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api** when you run kops validate cluster even after waiting 10-15 minutes, yet behind the scenes the Kubernetes cluster is up. You can verify this by SSHing into the master node of your cluster (see the sketch after this list):
- Go to the EC2 console page listing your cluster's instances.
- Copy the "Public IPv4 address" of your master node.
- From your command prompt, log in to the master node: ssh ubuntu@<"Public IPv4 address" of your master node>
- Verify that all nodes of the cluster are visible; the following should list both your master and worker nodes: kubectl get nodes
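Put together, the check looks like this; the IP is a placeholder for the master's "Public IPv4 address" from the EC2 console, and ubuntu is the default SSH user on Ubuntu-based AMIs:

```sh
# SSH to the master node (203.0.113.25 is a placeholder IP):
ssh ubuntu@203.0.113.25

# On the master: master and worker nodes should all report Ready.
kubectl get nodes
```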
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Grant David Bachman |
| Solution 2 | Brent Bradburn |
| Solution 3 | cyprian |
| Solution 4 | vordimous |
| Solution 5 | Ganesh |