GKE Ingress configuration for HTTPS-enabled applications leads to failed_to_connect_to_backend

I am having trouble configuring Ingress on a Google Kubernetes Engine cluster for an application that expects traffic over TLS. I have configured a FrontendConfig and a BackendConfig, and defined the relevant annotations in the Service and Ingress YAML manifests.

The Google Cloud Console reports the backend as healthy, but when I connect to the given address I get a 502, and a failed_to_connect_to_backend error appears in the Ingress logs.

Here are my configurations:

FrontendConfig.yaml:

apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontendconfig
  namespace: my-namespace
spec:
  redirectToHttps:
    enabled: false
  sslPolicy: my-ssl-policy

BackendConfig.yaml:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
  namespace: my-namespace
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
  logging:
    enable: true
    sampleRate: 1.0
  healthCheck:
    checkIntervalSec: 60
    timeoutSec: 5
    healthyThreshold: 3
    unhealthyThreshold: 5
    type: HTTP
    requestPath: /health
    # The containerPort of the application in Deployment.yaml (also used for the liveness and readiness probes)
    port: 8001

Ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
    # Frontend Configuration Name
    networking.gke.io/v1beta1.FrontendConfig: "my-frontendconfig"
    # Static IP address name (gcloud compute addresses create my-static-ip --global)
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
spec:
  tls:
  - secretName: my-secret
  defaultBackend:
    service:
      name: my-service
      port:
        number: 443

Service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    # Specify the type of traffic accepted
    cloud.google.com/app-protocols: '{"service-port":"HTTPS"}'
    # Specify the BackendConfig to be used for the exposed ports
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
    # Enable container-native load balancing (NEGs)
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: my-application
  ports:
    - protocol: TCP
      name: service-port
      port: 443
      targetPort: app-port # this port expects TLS traffic, no plain-HTTP connections

The Deployment.yaml is omitted for brevity; it defines liveness and readiness probes on a separate port, the one referenced in BackendConfig.yaml.

The interesting thing is: if I also expose this health-check port through Service.yaml (mapped to port 80), point the default backend to port 80, and define a rule with path /* leading to port 443, everything seems to work just fine. However, I don't want to expose the health-check port outside my cluster, since it also serves some diagnostics information.
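For reference, the working variant described above looks roughly like this (a sketch of my setup; the exact rule layout is reconstructed from memory):

```yaml
# Ingress.yaml spec (working workaround, sketch): default backend on the
# plain-HTTP health port, explicit wildcard rule sending traffic to 443.
spec:
  defaultBackend:
    service:
      name: my-service
      port:
        number: 80      # the additionally exposed health-check port
  rules:
    - http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: my-service
                port:
                  number: 443
```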

Question: How can I make sure that when I connect to the Ingress at `https://MY_INGRESS_IP/`, the traffic is routed as-is to the HTTPS port of the service/application, without getting the 502 error? Where is my Ingress configuration wrong?



Solution 1:[1]

There are a few elements to your question; I'll try to answer them here.

"I don't want to expose the healthcheck port outside my cluster"

The health-check endpoint is technically not exposed outside the cluster: it is exposed inside the Google backbone so that the Google load balancers (configured via the Ingress) can reach it. You can verify that by running curl against https://INGRESS_IP/healthz; it will not work.

"The traffic is routed exactly as it is to the HTTPS port of the service/application"

The reason port 443 in your Service definition doesn't work while port 80 does is that when you expose the Service on port 443, the load balancer will fail to connect to a backend that doesn't present a proper certificate: your backend must also be configured to present a certificate to the load balancer to encrypt traffic. The secretName configured on the Ingress is the certificate used by clients to connect to the load balancer. The Google HTTP(S) load balancer terminates TLS there and initiates a new connection to the backend on whatever port you specify in the Ingress. If that port is 443 but the backend is not configured with TLS certificates, that connection will fail.
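In other words, when the Service port is annotated as HTTPS, the container behind it must itself terminate TLS. A minimal sketch of what that could look like in the (omitted) Deployment, assuming the keypair is stored in a TLS Secret named my-tls-secret and the app listens on 8443 (both names are assumptions):

```yaml
# Deployment.yaml fragment (sketch): mount a server certificate so the
# application can present it to the load balancer on the HTTPS port.
spec:
  template:
    spec:
      containers:
        - name: my-application
          ports:
            - name: app-port
              containerPort: 8443   # TLS port; must match the Service targetPort
          volumeMounts:
            - name: tls
              mountPath: /etc/tls   # app reads tls.crt / tls.key from here
              readOnly: true
      volumes:
        - name: tls
          secret:
            secretName: my-tls-secret
```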

Overall, you don't need to encrypt traffic between the load balancer and the backends. It is doable, but not needed, as Google encrypts that traffic at the network level anyway.

Solution 2:[2]

I actually solved it by attaching a managed certificate to the Ingress. It "magically" worked without any other change, using a Service of type ClusterIP.
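For completeness, a Google-managed certificate is declared as a ManagedCertificate resource and referenced from the Ingress via an annotation (the resource name and domain below are placeholders):

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-managed-cert
  namespace: my-namespace
spec:
  domains:
    - example.mydomain.com   # must point at the Ingress static IP
---
# Then reference it on the Ingress with the annotation:
#   networking.gke.io/managed-certificates: "my-managed-cert"
```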

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution sources:

Solution 1: boredabdel
Solution 2: madduci