Google Cloud Platform is one of the leading cloud providers, alongside Amazon Web Services and Microsoft Azure. Many people think that AWS is the best choice because it is the most mature. But choosing “the best” cloud provider is like anything else in the IT world: it depends. When we talk about Kubernetes, the leading container orchestration tool, it seems that Google Cloud Platform is the best option. After all, Kubernetes was designed and built by Google, and now it’s maintained by the Cloud Native Computing Foundation with Google as a founding member. So it should be no surprise that many people choose Google Kubernetes Engine as “the best” one. And if you choose Google Kubernetes Engine, you probably use Nginx Ingress in GKE.
The problem I want to describe can become serious when you try to apply whitelisting inside your application. That’s a common practice for API microservices or internal middleware applications used to communicate with third-party systems. Of course, there are many places where you can limit access to your application, for example Ingress Controller annotations or cloud firewalls, but whitelisting on the application level is still in use. And when you want to restrict access to a limited set of IP addresses, your application must receive a header with the proper client IP. Normally, X-Forwarded-For does the trick, but it’s not that simple when you use Nginx Ingress in GKE. Or… it is simple, but you have to know what to do.
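For instance, at the Ingress Controller level such a whitelist usually boils down to a single annotation. A minimal sketch (the Ingress name and the CIDR below are just example values, not objects created in this post):

```
# Example only: limit access on the nginx-ingress level with the
# whitelist-source-range annotation (hypothetical Ingress name and CIDR).
$ kubectl annotate ingress my-ingress \
    nginx.ingress.kubernetes.io/whitelist-source-range="203.0.113.0/24"
```

Note that this approach also depends on the controller seeing the real client IP, so the problem described below affects it just as much as application-level whitelisting.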
TL;DR
If you:
- use Nginx Ingress Controller in Google Kubernetes Engine
- use helm for Nginx Ingress Controller installation
- want to pass the real client IP to the application
you have to install Nginx Ingress with the following command:
```
$ helm install --name hellopapp-nginx-ingress stable/nginx-ingress --set rbac.create=true --set controller.service.externalTrafficPolicy=Local
```
Why? Here’s why. 🙂
Nginx Ingress in GKE – wrong Client IP
What’s the problem – an example.
Maybe it would be better to describe the problem with an example. Let’s create a simple deployment with a service and expose it through Nginx Ingress in GKE. I assume that you have Kubernetes up and ready if you want to go through the instructions below, and also Helm/Tiller installed and configured. In this example, it’s the simplest configuration in the kube-system namespace and without any security features. Just for testing purposes. 🙂
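If you still need to prepare Tiller, a minimal sketch of that kind of quick-and-dirty setup (kube-system namespace, cluster-admin, no hardening whatsoever) could look more or less like this:

```
# Example only: the simplest possible Helm 2 / Tiller bootstrap, no security hardening.
$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
```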
I found a very easy example to run on Qwiklabs, so we can use it for our needs.
So we need:
- deployment with some application
- service for this application
- Ingress Resource
- Ingress Controller (Nginx Ingress in GKE)
Let’s create our deployment:
```
$ kubectl run hello-app --image=gcr.io/google-samples/hello-app:1.0 --port=8080
deployment.apps/hello-app created
$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
hello-app-5788f59987-vmbvd   1/1     Running   0          26s
```
We have our deployment, so now let’s expose it as a service. In our case, we just use the kubectl expose command, which will create a ClusterIP service for our deployment.
```
$ kubectl expose deployment hello-app
service/hello-app exposed
$ kubectl get services
NAME        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
hello-app   ClusterIP   172.20.30.7   <none>        8080/TCP   14s
```
As you can see, the service is up and running; now it’s time for the Nginx Ingress Controller. One of the simplest ways is to use Helm to install Nginx Ingress from the official chart repository. You can find more details about Nginx Ingress in the Charts repository on GitHub. In this case, we are using the command from the Qwiklabs example.
```
$ helm install --name hellopapp-nginx-ingress stable/nginx-ingress --set rbac.create=true
$ kubectl get services
NAME                                      TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
hello-app                                 ClusterIP      172.20.30.7     <none>          8080/TCP                     2m
hellopapp-nginx-ingress-controller        LoadBalancer   172.20.17.105   35.242.200.22   80:32260/TCP,443:32449/TCP   46s
hellopapp-nginx-ingress-default-backend   ClusterIP      172.20.23.243   <none>          80/TCP                       46s
```
As you can see, two additional services have been deployed. The Ingress Controller is created as a LoadBalancer service by default, but the problem described here can also occur when you change the service type to NodePort.
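If you would like to reproduce the NodePort variant, the chart exposes the service type as a parameter too. A sketch (double-check the parameter name against the chart version you use):

```
# Example: install the controller with a NodePort service instead of a LoadBalancer.
$ helm install --name hellopapp-nginx-ingress stable/nginx-ingress \
    --set rbac.create=true \
    --set controller.service.type=NodePort
```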
Ok, we have the Nginx Ingress Controller deployed in GKE; now we need to create an Ingress Resource. Let’s create a simple Ingress configuration, based on the example from Qwiklabs:
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: hello-app
          servicePort: 8080
```
Now we need to apply this configuration to our Kubernetes cluster:
```
$ kubectl apply -f ingress.yml
ingress.extensions/ingress-resource created
```
As you can see, the Ingress Resource is up and ready:
```
$ kubectl get ingress
NAME               HOSTS   ADDRESS   PORTS   AGE
ingress-resource   *                 80      12s
```
Let’s do some curl magic!
Deployment: exists. Service: exists. Nginx Ingress Controller: exists. Ingress Resource: exists. So everything says that our application should work fine, right? Let’s make the call!
```
$ curl http://35.242.200.22/hello
Hello, world!
Version: 1.0.0
Hostname: hello-app-5788f59987-vmbvd
```
Yeah, it seems that everything is ok, from the client side at least. But what can the logs tell us about the client IP? First, we need to find out which pod is our Nginx Ingress Controller. To do that, we can use the following command:
```
$ kubectl get pods
NAME                                                       READY   STATUS    RESTARTS   AGE
hello-app-5788f59987-vmbvd                                 1/1     Running   0          6m
hellopapp-nginx-ingress-controller-597bd9cdf4-4z2hw        1/1     Running   0          3m
hellopapp-nginx-ingress-default-backend-7c6cf98cc9-skm4c   1/1     Running   0          3m
```
Ok, our pod is named hellopapp-nginx-ingress-controller-597bd9cdf4-4z2hw. What can its logs tell us about the client IP?
```
$ kubectl logs hellopapp-nginx-ingress-controller-597bd9cdf4-4z2hw
…
10.3.0.11 - [10.3.0.11] - - [16/Mar/2019:15:57:57 +0000] "GET /hello HTTP/1.1" 200 66 "-" "curl/7.54.0" 82 0.004 [default-hello-app-8080] 172.20.2.24:8080 66 0.004 200 616994ea117bd5186161db268e7b40ac
```
Boom! Yeah, it’s the wrong IP. I mean… not exactly wrong, but it’s not the real client IP address. 10.3.0.11 is the address of one of the Kubernetes nodes. Don’t believe me?
```
$ gcloud compute instances list
NAME                                          ZONE            MACHINE_TYPE    PREEMPTIBLE   INTERNAL_IP   EXTERNAL_IP   STATUS
gke-test-cluster-default-pool-74bc9960-lq2w   europe-west3-a  n1-standard-4                 10.3.0.11                   RUNNING
```
And what is the solution?
Ok, so we know that something is wrong, but what? The answer is very simple: services with Type=LoadBalancer are source NAT’d by default. According to the official documentation:
“As of Kubernetes 1.5, packets sent to Services with Type=LoadBalancer are source NAT’d by default, because all schedulable Kubernetes nodes in the Ready state are eligible for loadbalanced traffic. So if packets arrive at a node without an endpoint, the system proxies it to a node with an endpoint, replacing the source IP on the packet with the IP of the node (as described in the previous section).”
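You can check what your controller service currently uses; a quick query like the one below (using the service name from this example) should return the default value, Cluster:

```
# Check the current policy on the controller service; the default is "Cluster".
$ kubectl get service hellopapp-nginx-ingress-controller \
    -o jsonpath='{.spec.externalTrafficPolicy}'
```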
The solution is also very simple: we just need to set externalTrafficPolicy to Local on our Nginx Ingress service in GKE. Easy. I’ll show you how to do that. But first, we should clean up the old Ingress Resource and the old Helm release.
```
$ kubectl delete ingress ingress-resource
ingress.extensions "ingress-resource" deleted
$ helm delete --purge hellopapp-nginx-ingress
release "hellopapp-nginx-ingress" deleted
```
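As a side note, the cleanup and reinstall are not strictly required: you should also be able to switch the policy directly on a running controller service, for example with kubectl patch (a sketch only, using the service name from this example). In this post I reinstall anyway, so that the Helm release stays the single source of truth.

```
# Alternative to reinstalling: patch externalTrafficPolicy on the existing service.
$ kubectl patch service hellopapp-nginx-ingress-controller \
    -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```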
Ok, now according to the Nginx Ingress Controller Helm Chart, there is a parameter responsible for that behavior. Let’s add it and check what happens.
```
$ helm install --name hellopapp-nginx-ingress stable/nginx-ingress --set rbac.create=true --set controller.service.externalTrafficPolicy=Local
$ kubectl get services
NAME                                      TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
hello-app                                 ClusterIP      172.20.30.7    <none>           8080/TCP                     9m
hellopapp-nginx-ingress-controller        LoadBalancer   172.20.28.95   35.246.244.156   80:32723/TCP,443:30368/TCP   1m
hellopapp-nginx-ingress-default-backend   ClusterIP      172.20.19.27   <none>           80/TCP                       1m
```
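Before re-creating the Ingress, we can quickly confirm that the chart really set the policy on the new controller service (the same kind of jsonpath check as before):

```
# Same check as before; this time it should print "Local".
$ kubectl get service hellopapp-nginx-ingress-controller \
    -o jsonpath='{.spec.externalTrafficPolicy}'
```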
We can use exactly the same Ingress Resource configuration; there is no need to change anything.
```
$ kubectl apply -f ingress.yml
ingress.extensions/ingress-resource created
$ kubectl get ingress
NAME               HOSTS   ADDRESS   PORTS   AGE
ingress-resource   *                 80      14s
```
Some curl magic again!
Aaaand the final test!
```
$ curl http://35.246.244.156/hello
Hello, world!
Version: 1.0.0
Hostname: hello-app-5788f59987-vmbvd
```
Ok, so we are sure that everything works fine from the client side. But what about the client IP in the Nginx Ingress logs?
```
$ kubectl get pods
NAME                                                       READY   STATUS    RESTARTS   AGE
hello-app-5788f59987-vmbvd                                 1/1     Running   0          13m
hellopapp-nginx-ingress-controller-597bd9cdf4-28rnx        1/1     Running   0          3m
hellopapp-nginx-ingress-default-backend-7c6cf98cc9-nnl8z   1/1     Running   0          3m
$ kubectl logs hellopapp-nginx-ingress-controller-597bd9cdf4-28rnx
...
83.26.175.153 - [83.26.175.153] - - [16/Mar/2019:16:03:54 +0000] "GET /hello HTTP/1.1" 200 66 "-" "curl/7.54.0" 83 0.002 [default-hello-app-8080] 172.20.2.24:8080 66 0.003 200 ca5e39066038cac6f069081f605aadb8
```
Yeah, it’s different from the node IPs! But is it the right IP? Curl can help us again; just issue the following command to check your own IP address:
```
$ curl ipinfo.io
{
  "ip": "83.26.175.153",
  …
}
```
Yup, that’s fine. So the problem has been solved!
Summary
It’s a common problem, but as you can see, it’s very easy to solve. So if you use Nginx Ingress in GKE, try to remember that configuration. Please remember that in this post I’ve described only the simplest example, without any SSL implementation or a more production-ready configuration (separate namespaces for Tiller, securing Tiller, separate namespaces for deployments/services, and so on). But it doesn’t matter in this case, because as described in the official Kubernetes documentation, externalTrafficPolicy can be set independently of the rest of your configuration.
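And if you already have a controller installed without that flag, you don’t necessarily have to start from scratch; something along these lines (a sketch, using the release name from this post) should let Helm apply the setting to the existing release:

```
# Sketch: apply externalTrafficPolicy=Local to an already installed release.
$ helm upgrade hellopapp-nginx-ingress stable/nginx-ingress \
    --set rbac.create=true \
    --set controller.service.externalTrafficPolicy=Local
```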