"did you specify the right host or port?" error on Kubernetes
I have followed the hello-node tutorial on http://kubernetes.io/docs/hellonode/.
When I run:
kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080
I get:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Why does the command line try to connect to the localhost?
Solution 1:[1]
The issue is that your kubeconfig is not right.
To auto-generate it run:
gcloud container clusters get-credentials "CLUSTER NAME"
This worked for me.
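As a concrete sketch (the cluster name, zone, and project below are placeholders, not values from the tutorial):
gcloud container clusters get-credentials hello-cluster --zone us-central1-a --project my-project
kubectl cluster-info   # verify kubectl can now reach the apiserver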
Solution 2:[2]
- Make sure your config is set to the project:
gcloud config set project [PROJECT_ID]
- List the clusters in the account:
gcloud container clusters list
- Check the output:
NAME           LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE  NODE_VERSION  NUM_NODES  STATUS
alpha-cluster  asia-south1-a  1.9.7-gke.6     35.200.254.78  f1-micro      1.9.7-gke.6   3          RUNNING
- Run the following command to fetch credentials for your running cluster:
gcloud container clusters get-credentials your-cluster-name --zone your-zone --project your-project
- Output like the following appears:
Fetching cluster endpoint and auth data.
kubeconfig entry generated for alpha-cluster.
- Check the details of the running nodes with kubectl, for example listing all nodes with extra details:
kubectl get nodes -o wide
Should be good to go.
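To double-check which cluster kubectl now targets (assuming a GKE cluster configured as above), print the active context; GKE names contexts gke_<project>_<location>_<cluster>:
kubectl config current-context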
Solution 3:[3]
I had the same error; this worked for me. Run:
minikube status
If the response is:
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
run minikube start and check that the status now shows:
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
You can then proceed.
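The same check works in script form, since minikube status exits non-zero when the cluster is not running; a minimal sketch:
minikube status >/dev/null 2>&1 || minikube start   # start only if not already running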
Solution 4:[4]
After running the "kubeadm init" command, Kubernetes asks you to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
But if you run these as a regular user, you will get "The connection to the server localhost:8080 was refused - did you specify the right host or port?" when trying to access the cluster as root, and vice versa. So run kubectl as the same user who executed the commands above.
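If you do need kubectl as the other user anyway, a workaround (a sketch using the kubeadm-generated admin config) is to pass the kubeconfig explicitly:
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes   # run as root against the admin config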
Solution 5:[5]
I reproduced the same error while doing the Udacity tutorial Scalable Microservices with Kubernetes (https://classroom.udacity.com/courses/ud615), at the "Using Kubernetes" point in Part 3 of the lesson.
Launch a Single Instance:
kubectl run nginx --image=nginx:1.10.0
Error:
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
How I resolved the Error:
Log in to Google Cloud Platform
Navigate to Container Engine (Google Cloud Platform > Container Engine)
Click CONNECT on the cluster
Use the login credentials to access Cluster [NAME] in your terminal
Then proceed with your work.
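For reference, the CONNECT button generates a command of roughly this shape (all values are placeholders):
gcloud container clusters get-credentials [CLUSTER_NAME] --zone [ZONE] --project [PROJECT_ID]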
Solution 6:[6]
I was trying to connect with localhost and ended up with the same problem; then I found that I needed to start a proxy to the Kubernetes API server.
kubectl proxy --port=8080
https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/
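With the proxy running, requests to localhost:8080 are forwarded to the real apiserver, so the address kubectl was trying to reach now works; for example:
curl http://localhost:8080/api/   # query the Kubernetes API through the proxy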
Solution 7:[7]
I was getting an error when running
sudo kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Finally, for my environment, this command parameter worked:
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get pods
when executing kubectl as a non-root user.
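Instead of passing --kubeconfig on every call, you could export it once per shell (a sketch; plain sudo may strip the variable, hence -E):
export KUBECONFIG=/etc/kubernetes/admin.conf
sudo -E kubectl get pods   # -E preserves the exported KUBECONFIG under sudo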
Solution 8:[8]
This happens because your kubectl is not able to connect to the Kubernetes API server. Start your cluster:
minikube start
If you want to use a specific kubeconfig file, you can pass it explicitly:
kubectl --kubeconfig ~/.kube/config get jobs
~/.kube/config is the path of the config file; adjust it to your own file path.
Solution 9:[9]
This error means that kubectl is attempting to connect to a Kubernetes apiserver running on your local machine, which is the default if you haven't configured it to talk to a remote apiserver.
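You can verify this: with no kubeconfig present, the effective client configuration is empty and kubectl falls back to the local default of localhost:8080.
kubectl config view   # empty clusters/contexts/users means kubectl has nothing to target but the local default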
Solution 10:[10]
Reinitialising gcloud with the proper account and project worked for me:
gcloud init
After this, retrying the command below was successful and a kubeconfig entry was generated.
gcloud container clusters get-credentials "cluster_name"
Check the cluster info with:
kubectl cluster-info
Solution 11:[11]
I had the same issue after a reboot; I followed the guide described here.
So try the following:
$ sudo -i
# swapoff -a
# exit
$ strace -eopenat kubectl version
After that it works fine.
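Since the kubelet refuses to start with swap enabled by default, you may also want to keep swap off across reboots. A sketch (it bluntly comments out every /etc/fstab line mentioning swap, so review the file afterwards):
sudo sed -i '/swap/ s/^#*/#/' /etc/fstab   # comment out swap entries permanently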
Solution 12:[12]
Regardless of your environment (gcloud or not), you need to point kubectl to a kubeconfig. By default, kubectl expects the path $HOME/.kube/config; alternatively, point to a custom path via an environment variable (for scripting etc.):
export KUBECONFIG=/your_kubeconfig_path
Please refer to: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
If you don't have a kubeconfig file for your cluster, create one by referring to: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
You will need the cluster's ca.crt along with the apiserver-kubelet-client key and certificate.
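The KUBECONFIG variable can also hold several files at once, colon-separated, as the first link above describes; a sketch where dev-cluster.yaml is a hypothetical second kubeconfig:
export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/dev-cluster.yaml
kubectl config get-contexts   # lists the contexts merged from both files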
Solution 13:[13]
I ran into the same trouble with a recent release; it seems you must set KUBECONFIG explicitly:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
Solution 14:[14]
The correct answer, from all of the above, is to run the commands below:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
Solution 15:[15]
As an improvement to Omokahfe's answer:
minikube status
If the response is:
E0623 09:12:24.603405 21127 status.go:396] kubeconfig endpoint: extract IP: "minikube" does not appear in /home/<user>/.kube/config
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Misconfigured
timeToStop: Nonexistent
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
run
minikube update-context
then it will show
* "minikube" context has been updated to point to 10.254.183.66:8443
* Current context is "minikube"
and then
minikube status
will show
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
timeToStop: Nonexistent
Solution 16:[16]
I had this problem using a local Docker. The thing to do is check the logs of the containers it spins up to figure out what went wrong. For me, it transpired that etcd had fallen over:
$ docker logs <etcdContainerId>
<snip>
2016-06-15 09:02:32.868569 C | etcdmain: listen tcp 127.0.0.1:7001: bind: address already in use
Aha! I'd been playing with Cassandra in a Docker container, and I'd forwarded all the ports since I wasn't sure which ones it needed exposed, and 7001 is one of its ports. Stopping Cassandra, cleaning up the mess, and restarting it fixed things.
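If you suspect a port clash like this, a quick way to find the container that already holds the port (a sketch using docker ps output formatting):
docker ps --format '{{.ID}}  {{.Names}}  {{.Ports}}' | grep 7001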
Solution 17:[17]
If you created a cluster on AWS using kops, then kops creates ~/.kube/config for you, which is nice. But if someone else needs to connect to that cluster, then they also need to install kops so that it can create the kubeconfig for them:
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
export CLUSTER_ALIAS=kubernetes-cluster

kubectl config set-context ${CLUSTER_ALIAS} \
    --cluster=${CLUSTER_FULL_NAME} \
    --user=${CLUSTER_FULL_NAME}

kubectl config use-context ${CLUSTER_ALIAS}

kops export cluster --name ${CLUSTER_FULL_NAME} \
    --region=${CLUSTER_REGION} \
    --state=${KOPS_STATE_STORE}
Solution 18:[18]
Try running with sudo permissions, for example:
sudo kubectl ...
Solution 19:[19]
In case someone, like myself, came across this thread because of the underlying error in their Cloud Build step while switching from gcr.io/cloud-builders/kubectl to gcr.io/google.com/cloudsdktool/cloud-sdk: you need to explicitly call get-credentials for kubectl to work.
My pipeline:
steps:
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        gcloud container clusters get-credentials --zone "$$CLOUDSDK_COMPUTE_ZONE" "$$CLOUDSDK_CONTAINER_CLUSTER"
        kubectl call-what-you-need-here
options:
  env:
    - 'CLOUDSDK_COMPUTE_ZONE=europe-west3-a'
    - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
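Note the doubled $$ in the build step: Cloud Build expands single $VAR references itself as substitutions, so $$ escapes them and leaves the expansion to the shell, which picks the values up from the env block.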
Solution 20:[20]
- Activate Docker on your system
- Run the minikube start command in a terminal
Solution 21:[21]
I was also getting the same error below:
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
Then I just executed the command below and found everything working fine.
PS C:\> .\minikube.exe start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
 150.53 MB / 150.53 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

PS C:\> .\minikube.exe start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Solution 22:[22]
I got this issue when using "Bash on Windows" with Azure Kubernetes:
az aks get-credentials -n <myCluster> -g <myResourceGroup>
The config file is auto-generated and placed in the '~/.kube/config' file of the OS (which is Windows in my case).
To solve this -
Run from the Bash command line: cp <yourWindowsPathToConfigPrintedFromAboveCommand> ~/.kube/config
Solution 23:[23]
I resolved this issue by removing the incorrect KUBECONFIG environment variable with the command:
unset KUBECONFIG
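If you want to confirm the variable was indeed pointing at a stale or wrong file, you can print it first:
echo $KUBECONFIG   # inspect the path before (or instead of) unsetting it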
Solution 24:[24]
Fix the Error – The connection to the server localhost:8080 was refused
- Check if the KUBECONFIG environment variable is exported; if not, export it:
export KUBECONFIG=/etc/kubernetes/admin.conf (or point it at $HOME/.kube/config)
- Check for the .kube/config file in your home directory. If you did not find it, copy the admin config to your home directory using the following commands:
cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
Whenever you start the master node you may need to set the environment variable again, which makes it a repetitive task. It can be set permanently using the following command:
echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc
Solution 25:[25]
If you are using Docker Desktop, make sure you have Kubernetes enabled:
Go to Preferences > Kubernetes and make sure 'Enable Kubernetes' is checked.
Solution 26:[26]
For me, it was simply a full disk; kubectl also gives this error if your disk is full. Sadly, clearing space did not immediately help; a reboot was necessary.
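A quick way to check whether a full disk is your problem:
df -h   # any filesystem at 100% use is a likely culprit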
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow