k0s kubectl exec and kubectl port-forward are broken

I have a simple nginx pod and a k0s cluster set up with the k0s binary. Now I want to connect to that pod, but I get this error:

$ kubectl port-forward frontend-deployment-786ddcb47-p5kkv 7000:80

error: error upgrading connection: error dialing backend: rpc error: code = Unavailable 
desc = connection error: desc = "transport: Error while dialing dial unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: connect: connection refused"

I don't understand why this happens, or why it tries to access /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock, which does not exist on my machine.

Do I have to add my local dev machine with k0s to the cluster?

Extract from pod describe:

Containers:
  frontend:
    Container ID:   containerd://897a8911cd31c6d58aef4b22da19dc8166cb7de713a7838bc1e486e497e9f1b2
    Image:          nginx:1.16
    Image ID:       docker.io/library/nginx@sha256:d20aa6d1cae56fd17cd458f4807e0de462caf2336f0b70b5eeb69fcaaf30dd9c
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 28 Jan 2021 14:20:58 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m43s  default-scheduler  Successfully assigned remove-me/frontend-deployment-786ddcb47-p5kkv to k0s-worker-2
  Normal  Pulling    3m42s  kubelet            Pulling image "nginx:1.16"
  Normal  Pulled     3m33s  kubelet            Successfully pulled image "nginx:1.16" in 9.702313183s
  Normal  Created    3m32s  kubelet            Created container frontend
  Normal  Started    3m32s  kubelet            Started container frontend

deployment.yml and service.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    app: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: nginx:1.16
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80


Solution 1:[1]

The workaround is to remove the file /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock and restart the server.

Currently my GitHub issue is still open:

https://github.com/k0sproject/k0s/issues/665
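The workaround above can be sketched as a small script. Note the assumptions: the systemd unit name `k0scontroller` is a guess (adjust it to however your controller is run), and the socket path is the one from the error message.

```shell
# Hedged sketch of the Solution 1 workaround: remove the stale konnectivity
# socket so the controller recreates it on restart.
SOCK=/var/lib/k0s/run/konnectivity-server/konnectivity-server.sock

# Delete the socket only if it actually exists.
if [ -S "$SOCK" ]; then
  sudo rm "$SOCK"
fi

# Restart the controller, but only on an actual k0s controller host.
# "k0scontroller" is an assumed unit name; adjust for your setup.
if [ -d /var/lib/k0s ] && command -v systemctl >/dev/null 2>&1; then
  sudo systemctl restart k0scontroller
fi
```

Run this on the controller node, not on your local dev machine; the socket lives on the server side of the konnectivity tunnel.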

Solution 2:[2]

I had a similar issue where /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock was not being created (in my case the path was /run/k0s/konnectivity-server/konnectivity-server.sock).

I made two changes to my configuration that finally fixed this issue.

I'm still not quite sure what the root cause was, but the points below may help:

  1. Hostname: the hostnames of my nodes were uppercase, but k0s somehow expects them to be lowercase. The hostname can be overridden in the configuration file, but that still did not fix the konnectivity socket issue, so I had to change all the node hostnames to lowercase.

  2. Port numbers for konnectivity: I had overridden the default konnectivity port numbers to values in the 30000 range, as shown below:

k0s:
  version: 1.23.6+k0s.0
  config:
    apiVersion: k0s.k0sproject.io/v1beta1
    kind: Cluster
    metadata:
      name: k0s
    spec:
      konnectivity:
        adminPort: 33573
        agentPort: 33574

These changes finally fixed my issue: on the first attempt after making them, the .sock file was being created, but it still had a permission problem. I then followed the suggestion given above by TecBeast, and that fixed the issue permanently.
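Point 1 above can be checked with a short snippet that compares the node's hostname against its lowercased form. The `hostnamectl` command in the message is only a suggestion for how to rename the node; it is not run automatically.

```shell
# Check whether this node's hostname contains uppercase letters,
# since k0s appeared to expect lowercase hostnames.
HOST=$(hostname)
LOWER=$(printf '%s' "$HOST" | tr '[:upper:]' '[:lower:]')

if [ "$HOST" != "$LOWER" ]; then
  # Renaming is left to the operator; hostnamectl is one way to do it.
  echo "Hostname '$HOST' is not lowercase; consider: sudo hostnamectl set-hostname $LOWER"
else
  echo "Hostname '$HOST' is already lowercase"
fi
```

Run this on every node (controllers and workers) before joining them to the cluster.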

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: TecBeast
Solution 2: Sunil Kpmbl