GitLab CI in GKE private cluster can't connect to master

So far we have been using a GKE public cluster for all our workloads. We have created a second, private cluster (still GKE) with improved security and availability (the old one is single-zone, the new one is regional). We use GitLab.com for our code, but run a self-hosted GitLab CI runner in the clusters.

The runner works fine on the public cluster, and all workloads complete successfully. However, on the private cluster all kubectl commands in the CI fail with "Unable to connect to the server: dial tcp <IP>:443: i/o timeout". The CI configuration has not changed: same base image, still using the gcloud SDK with a CI-specific service account to authenticate to the cluster.

Both clusters have master authorized networks enabled, with only our office IPs set. The master is accessible from a public IP. Authentication succeeds; client certificates and basic auth are disabled on both. Cloud NAT is configured, and the nodes have Internet access (they can pull container images, GitLab CI can connect, etc.).

Am I missing some vital configuration? What else should I be looking at?
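For context, the symptom can be narrowed down with commands like the following (cluster name and region are placeholders; substitute your own):

```shell
# Hypothetical cluster name and region; substitute your own values.
CLUSTER=my-private-cluster
REGION=europe-west1

# A private GKE cluster's control plane has both a public and a private
# endpoint; this prints the two addresses side by side.
gcloud container clusters describe "$CLUSTER" --region "$REGION" \
  --format="value(privateClusterConfig.publicEndpoint,privateClusterConfig.privateEndpoint)"

# From a CI pod inside the cluster, this times out when kubeconfig points
# at the public endpoint (the error described above):
kubectl cluster-info --request-timeout=10s
```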



Solution 1:[1]

I have found the solution to my problem, but I am not fully sure of the cause.

I used gcloud container clusters get-credentials [CLUSTER_NAME], which configured kubectl with the master's public endpoint. That endpoint is inaccessible from within the cluster, presumably because reaching it would require adding the NAT's public IP (which is not static) to the authorized networks.

I added the --internal-ip flag, which gave the cluster's internal IP address. The CI is now able to connect to the master.

Source: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#internal_ip

tl;dr - gcloud container clusters get-credentials --internal-ip [CLUSTER_NAME]

Solution 2:[2]

If it is GitLab.com, you can whitelist its IP range in the master authorized networks on GKE:

https://docs.gitlab.com/ee/user/gitlab_com/#ip-range
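A hedged sketch of updating the authorized networks (the CIDR ranges below are placeholders; take the real GitLab.com ranges from the page linked above). Note that --master-authorized-networks replaces the existing list, so your office IPs must be included again:

```shell
# Placeholder cluster name/region and placeholder (TEST-NET) CIDRs; the flag
# REPLACES the whole list, so re-include your office ranges alongside
# GitLab.com's published ranges.
gcloud container clusters update my-private-cluster --region europe-west1 \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/24,198.51.100.0/24
```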

Solution 3:[3]

You can install a GitLab runner inside your GKE private cluster; whenever a pipeline has to run, a pod is spun up by the runner you configured and deleted once the job is done.
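One common way to do this is the official GitLab Runner Helm chart; the sketch below assumes Helm is installed and the registration token comes from your project's CI/CD settings (all values shown are illustrative):

```shell
# Install the GitLab runner into the private cluster via the official chart.
helm repo add gitlab https://charts.gitlab.io
helm repo update

# <REGISTRATION_TOKEN> is a placeholder for your project's runner token.
helm install gitlab-runner gitlab/gitlab-runner \
  --namespace gitlab-runner --create-namespace \
  --set gitlabUrl=https://gitlab.com/ \
  --set runnerRegistrationToken=<REGISTRATION_TOKEN> \
  --set rbac.create=true
```

Because the runner itself lives inside the cluster, its job pods talk to the control plane over the internal network and no external whitelisting is needed.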

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: jj9987
Solution 2: krish
Solution 3: Aditya