Can't access postgres via service from the postgres container itself

I'm trying to verify that my postgres pod is accessible via the service that I've just set up. So far, I cannot verify this. What I am able to do is log into the container running postgres itself and attempt to talk to the postgres server via the IP of the service. This does not succeed. However, I'm unsure whether this is a valid test of whether other pods in the cluster could talk to postgres via the service, whether there is a problem with how I'm doing the test, or whether there is a fundamental problem in my service or pod configuration.

I'm doing this all on a minikube cluster.

Set up the pod and service:

$> kubectl create -f postgres-pod.yml
$> kubectl create -f postgres-service.yml

postgres-pod.yml

apiVersion: v1
kind: Pod
metadata:
    name: postgres
    labels:
        env: prod
        creation_method: manual
        domain: infrastructure
spec:
    containers:
        - image: postgres:13-alpine
          name: kubia-postgres
          ports:
              - containerPort: 5432
                protocol: TCP
          env:
              - name: POSTGRES_PASSWORD
                value: dave
              - name: POSTGRES_USER
                value: dave
              - name: POSTGRES_DB
                value: tmp
# TODO:
#    volumes:
#        - name: postgres-db-volume


postgres-service.yml

apiVersion: v1
kind: Service
metadata:
    name: postgres-service
spec:
    ports:
        - port: 5432
          targetPort: 5432
    selector:
        name: postgres

Check that the service is up with kubectl get services:

kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP    35d
postgres-service   ClusterIP   10.110.159.21   <none>        5432/TCP   71m

Then, log in to the postgres container:

$> kubectl exec --stdin --tty postgres -- /bin/bash

From there, attempt to hit the service's IP:

bash-5.1# psql -U dave -h 10.110.159.21 -p 5432 tmp
psql: error: could not connect to server: Connection refused
    Is the server running on host "10.110.159.21" and accepting
    TCP/IP connections on port 5432?

So, using this approach, I am not able to connect to the postgres server via the IP of the service.

I'm unsure of several steps in this process:

  1. Is the selector block (selecting by name) in the service configuration YAML correct?
  2. Can you access the IP of a service from pods that are "behind" the service?
  3. Is this, in fact, a valid way to verify that the DB server is accessible via the service, or is there some other way?


Solution 1:[1]

You cannot, at least with minikube, access the IP of a service from the pod "behind" that service if there is only one (1) replica.
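
As a side note, if you want to test reachability from a pod that is not behind the service, one option (a rough sketch, assuming the image and credentials from the question, and that the service selector actually matches the pod's labels) is a throwaway client pod pointed at the service name:

$> kubectl run psql-client --rm -it --image=postgres:13-alpine --env=PGPASSWORD=dave -- psql -h postgres-service -U dave -d tmp

Relatedly, kubectl get endpoints postgres-service shows whether the service has any endpoints at all; an empty list would mean the selector does not match the pod's labels.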

Solution 2:[2]

Hello, hope you are enjoying your Kubernetes journey!

I wanted to try this on my kind (Kubernetes in Docker) cluster locally. So this is what I've done:

First, I set up a kind cluster locally with this configuration (info here: https://kind.sigs.k8s.io/docs/user/quick-start/):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: so-cluster-1
nodes:
- role: control-plane
  image: kindest/node:v1.23.5
- role: control-plane
  image: kindest/node:v1.23.5
- role: control-plane
  image: kindest/node:v1.23.5
- role: worker
  image: kindest/node:v1.23.5
- role: worker
  image: kindest/node:v1.23.5
- role: worker
  image: kindest/node:v1.23.5

After this, I created my cluster with this command:

kind create cluster --config=config.yaml
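
To quickly check that the six nodes are up, you can run something like the following (kind names the kubeconfig context after the cluster):

kubectl cluster-info --context kind-so-cluster-1
kubectl get nodes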

Next, I created a test namespace (manifest obtained with: kubectl create ns so-tests -o yaml --dry-run):

apiVersion: v1
kind: Namespace
metadata:
  name: so-tests
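
Assuming that manifest is saved as namespace.yaml, you can apply it and make it the current namespace with:

kubectl apply -f namespace.yaml
kubectl config set-context --current --namespace=so-tests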

From there, my environment was set up, so I had to deploy postgres on it, but here is what I changed:

1- Instead of creating a singleton pod, I created a StatefulSet (whose purpose is to deploy stateful applications such as databases).

2- I decided to keep using your Docker image "postgres:13-alpine" and added a security context to run as the native postgres user (not dave, nor root). To find out the id of the postgres user, I first deployed the StatefulSet without the security context and executed these commands:

$ k exec -it postgres-0 -- bash
bash-5.1# whoami
root
bash-5.1# id
uid=0(root) gid=0(root) groups=1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
bash-5.1# id postgres
uid=70(postgres) gid=70(postgres) groups=70(postgres),70(postgres)
bash-5.1# exit

So, once I knew that the id of the postgres user was 70, I just added this to the StatefulSet manifest:

securityContext:
  runAsUser: 70
  fsGroup: 70
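
Once the StatefulSet is redeployed with this block, running id inside the container again should report uid=70(postgres) instead of root:

k exec -it postgres-0 -- id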

3- Instead of adding configuration and secrets as environment variables directly in the pod spec of the StatefulSet, I decided to create a Secret and a ConfigMap:

First, let's create a Kubernetes Secret with your password in it. Here is the manifest (obtained from this command: "k create secret generic --from-literal password=dave postgres-secret -o yaml --dry-run=client"):

apiVersion: v1
data:
  password: ZGF2ZQ==
kind: Secret
metadata:
  name: postgres-secret
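
Note that the password value in a Secret is only base64-encoded, not encrypted; ZGF2ZQ== is simply the encoding of dave (the -n matters, otherwise the trailing newline gets encoded too and you end up with ZGF2ZQo=):

$ echo -n dave | base64
ZGF2ZQ==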

After this, I created a ConfigMap to store our postgres config. Here is the manifest (obtained by running: kubectl create configmap postgres-config --from-literal user=dave --from-literal db=tmp --dry-run=client -o yaml):

apiVersion: v1
data:
  db: tmp
  user: dave
kind: ConfigMap
metadata:
  name: postgres-config
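
Once these are applied, you can double-check what is stored with:

kubectl get secret postgres-secret -o yaml
kubectl get configmap postgres-config -o yaml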

Since this is just for testing purposes, I didn't set up dynamic volume provisioning for the StatefulSet, nor a pre-provisioned volume. Instead I configured a simple emptyDir to store the postgres data (/var/lib/postgresql/data).

N.B.: By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment. However, you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you instead. (This comes from: Create a new volume when pod restart in a statefulset.)
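
For reference, a RAM-backed emptyDir would look like this in the volume definition (not what is used below, where the default medium is kept):

volumes:
- name: postgres-test-volume
  emptyDir:
    medium: Memory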

Since it is a StatefulSet, it has to be exposed by a headless Kubernetes service (https://kubernetes.io/fr/docs/concepts/services-networking/service/#headless-services).

Here are the manifests:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: "postgres"
  replicas: 2
  selector:
    matchLabels:
      env: prod
      domain: infrastructure
  template:
    metadata:
      labels:
        env: prod
        domain: infrastructure
    spec:
      terminationGracePeriodSeconds: 20
      securityContext:
        runAsUser: 70
        fsGroup: 70
      containers:
      - name: kubia-postgres
        image: postgres:13-alpine
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: password
        - name: POSTGRES_USER
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: user
        - name: POSTGRES_DB
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: db
        ports:
        - containerPort: 5432
          protocol: TCP
        volumeMounts:
        - name: postgres-test-volume
          mountPath: /var/lib/postgresql
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
      volumes:
      - name: postgres-test-volume
        emptyDir: {}

---

apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  labels:
    env: prod
    domain: infrastructure
spec:
  ports:
  - port: 5432
    protocol: TCP
    targetPort: 5432
    name: pgsql
  clusterIP: None
  selector:
    env: prod
    domain: infrastructure

---

apiVersion: v1
data:
  password: ZGF2ZQ==
kind: Secret
metadata:
  name: postgres-secret

---

apiVersion: v1
data:
  db: tmp
  user: dave
kind: ConfigMap
metadata:
  name: postgres-config
---

I deployed this using:

kubectl apply -f postgres.yaml
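
You can watch the two replicas come up and check that the headless service picked them up with something like:

kubectl rollout status statefulset/postgres
kubectl get pods -l env=prod,domain=infrastructure
kubectl get endpoints postgres-service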

I exec'd into the postgres-0 pod and connected to my db with the $POSTGRES_USER and $POSTGRES_PASSWORD credentials:

$ k exec -it pod/postgres-0 -- bash
bash-5.1$ psql --username=$POSTGRES_USER -W --host=localhost --port=5432 --dbname=tmp
Password:
psql (13.6)
Type "help" for help.

tmp=#

I listed the databases:

tmp=# \l
                             List of databases
   Name    | Owner | Encoding |  Collate   |   Ctype    | Access privileges
-----------+-------+----------+------------+------------+-------------------
 postgres  | dave  | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | dave  | UTF8     | en_US.utf8 | en_US.utf8 | =c/dave          +
           |       |          |            |            | dave=CTc/dave
 template1 | dave  | UTF8     | en_US.utf8 | en_US.utf8 | =c/dave          +
           |       |          |            |            | dave=CTc/dave
 tmp       | dave  | UTF8     | en_US.utf8 | en_US.utf8 |
(4 rows)

and I connected to the "tmp" db:

tmp=# \c tmp
Password:
You are now connected to database "tmp" as user "dave".

Successful.

I also tried to connect to the database using the IP (here the pod's own IP), as you tried:

bash-5.1$ ip a | grep /24
    inet 10.244.4.8/24 brd 10.244.4.255 scope global eth0
bash-5.1$ psql --username=$POSTGRES_USER -W --host=10.244.4.8 --port=5432 --dbname=tmp
Password:
psql (13.6)
Type "help" for help.

tmp=#

Successful.

I then downloaded DBeaver (from here: https://dbeaver.io/download/) to test access from outside of my cluster:

with a kubectl port-forward:

kubectl port-forward statefulset/postgres 5432:5432

Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
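
If psql happens to be installed on the host, the same forward can also be tested from a terminal (the password comes from the Secret above):

PGPASSWORD=dave psql -h 127.0.0.1 -p 5432 -U dave -d tmp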

I created the connection in DBeaver and could easily access the db "tmp" from localhost:5432 with dave:dave credentials:

kubectl port-forward statefulset/postgres 5432:5432

Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432

Perfect.

[Screenshot: postgresql statefulset port-forward with DBeaver]

Same as before (with DBeaver), I tried to connect to the db using a port-forward, not of the pod, but of the service:

$ kubectl port-forward service/postgres-service 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432

It worked as well!

I also created a standalone pod, based on our config, to access the db that is in another pod (using the service name as hostname). Here is the manifest of the pod:

apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    app: test
spec:
  terminationGracePeriodSeconds: 20
  securityContext:
    runAsUser: 70
    fsGroup: 70
  containers:
  - name: kubia-postgres
    image: postgres:13-alpine
    env:
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: postgres-secret
          key: password
    - name: POSTGRES_USER
      valueFrom:
        configMapKeyRef:
          name: postgres-config
          key: user
    - name: POSTGRES_DB
      valueFrom:
        configMapKeyRef:
          name: postgres-config
          key: db
    ports:
    - containerPort: 5432
      protocol: TCP
    volumeMounts:
    - name: postgres-test-volume
      mountPath: /var/lib/postgresql
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  volumes:
  - name: postgres-test-volume
    emptyDir: {}
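
Assuming this manifest is saved as testpod.yaml, the pod can be created and entered the same way as before:

kubectl apply -f testpod.yaml
kubectl exec -it pod/postgres -- bash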

Here is the result of the connection from inside the test pod:

bash-5.1$ psql --username=$POSTGRES_USER -W --host=postgres-service --port=5432 --dbname=tmp
Password:
psql (13.6)
Type "help" for help.

tmp=#
  • Here is how you can access it from outside the pod/namespace (make sure that there are no network rules blocking the connection):

StatefulSetName-Ordinal.Service.Namespace.svc.cluster.local

i.e.: postgres-0.postgres-service.so-tests.svc.cluster.local

Hope this helped you. Thank you for your question. Bguess

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Dave
Solution 2: bguess