Why in GKE (private cluster) does the destination see the Pod IP instead of the node IP?
I'm still learning GKE and I have set up a private cluster.
The problem is that when a Pod sends traffic outside the cluster (usually an HTTP request) to an external system, the external system sees the Pod's IP address. I was expecting it to see the node IP instead, so that the node IP range could be whitelisted.
I have also configured Cloud NAT for my cluster, but I don't understand why the Pod IP is visible to the external system. Could you please explain why this is happening?
Solution 1:[1]
Is your cluster VPC-native? https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips
It's the only thing that comes to mind that can explain this behaviour. In a VPC-native cluster, Pod IPs are alias IP ranges that are natively routable inside the VPC, so traffic to destinations reachable within the VPC (including over peering or VPN) is not SNATed to the node IP, and Cloud NAT only rewrites traffic that actually leaves the VPC for the internet.
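One way to verify this is to ask GKE whether alias IPs are enabled. A minimal sketch, assuming a cluster named `my-cluster` in zone `us-central1-a` (substitute your own names):

```shell
# Check whether the cluster is VPC-native (uses alias IPs).
# Prints "True" for a VPC-native cluster, empty/False for a routes-based one.
gcloud container clusters describe my-cluster \
  --zone us-central1-a \
  --format="value(ipAllocationPolicy.useIpAliases)"
```

If it prints `True`, the Pod ranges are routable in the VPC, which would explain the destination seeing the Pod IP directly.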
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Arnau Senserrich |