Why a Pod can't connect to an on-premises network

I'm setting up Google Kubernetes Engine (cluster version 1.11) on GCP with the Kubeflow installation script, which deploys onto the "default" network, and I have set up Google Cloud VPN to an on-premises network (10.198.96.0/20).

When I try to connect from VMs or Kubernetes nodes on GCP to the on-premises network, everything works, but from Pods I can't connect to the on-premises network:

  • From GKE nodes or other VMs on the "default" network (10.140.0.0/20) I can ping or curl on-premises hosts.
  • From GKE Pods I can't ping or curl on-premises hosts (a minimal test-pod sketch is shown below).
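
A minimal sketch of a throwaway pod that can be used to reproduce this test from inside the cluster (the name net-test and the busybox image are illustrative choices, not from the original setup):

apiVersion: v1
kind: Pod
metadata:
  name: net-test
spec:
  restartPolicy: Never
  containers:
  - name: net-test
    image: busybox:1.36
    command: ["sleep", "3600"]

Once it is running, kubectl exec net-test -- ping -c 3 <on-premises-host> reproduces the failure described above, while the same ping from a node succeeds.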

Looking at the cluster's network configuration, the Pod CIDR is 10.24.0.0/14, which I think does not overlap with the "default" network on GCP (10.140.0.0/20) or the on-premises network (10.198.96.0/20).

Why can't the Pods connect?



Solution 1:[1]

After googling about IP masquerading, I tried the approach from this post and it works.
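
The likely underlying cause: by default, GKE does not masquerade traffic to private (RFC 1918) destinations, so packets from Pods reach the VPN with a source IP from the Pod CIDR (10.24.0.0/14), and unless the on-premises side routes that range back, replies never return. Deploying the ip-masquerade-agent with a configuration that lists only the cluster's own ranges as nonMasqueradeCIDRs makes Pod traffic to 10.198.96.0/20 leave with the node IP instead. The ConfigMap below is a sketch along those lines; the CIDRs are taken from the question and must match your cluster:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
    - 10.24.0.0/14    # Pod CIDR: keep pod-to-pod traffic unmasqueraded
    - 10.140.0.0/20   # "default" VPC subnet the nodes live on
    masqLinkLocal: false
    resyncInterval: 60s

Because 10.198.96.0/20 is deliberately not in the list, traffic from Pods to the on-premises network is SNAT'd to the node IP, which the VPN already routes correctly.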

Solution 2:[2]

Apparently your pods are isolated in terms of egress traffic.

If you want to allow all traffic from all pods in a namespace (even if policies are added that cause some pods to be treated as “isolated”), you can create a policy that explicitly allows all egress traffic in that namespace.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
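
This policy is namespaced, so it has to be created in the namespace whose Pods need egress, for example with kubectl apply -f allow-all-egress.yaml -n <namespace> (the file name is just an example). Note also that on GKE, NetworkPolicy objects are only enforced when network policy enforcement is enabled for the cluster.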

For more details on how to manage network policies, see the Kubernetes documentation on network policies.

Solution 3:[3]

Check your ip-masquerade-agent and its configuration; see https://cloud.google.com/kubernetes-engine/docs/concepts/ip-masquerade-agent?hl=en
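
A quick way to inspect the current configuration (assuming the agent is deployed under its default name) is kubectl get configmap ip-masq-agent -n kube-system -o yaml. If the on-premises range 10.198.96.0/20 ends up being treated as non-masqueraded, Pod traffic to it keeps the Pod source IP, and replies will not come back unless the on-premises network routes the Pod CIDR (10.24.0.0/14) over the VPN.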

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Teerapat KHUNPECH
Solution 2: A_Suh
Solution 3: flosk8