Nginx Ingress in GKE – wrong client IP

Google Cloud Platform is one of the leading cloud providers, alongside Amazon Web Services and Microsoft Azure. Many people think that AWS is the best choice because it is the most mature. But "the best" cloud provider is like anything else in the IT world – it depends. When we talk about Kubernetes, the leading container orchestration tool, it seems that Google Cloud Platform is the best option. After all, Kubernetes was designed and built by Google, and now it's maintained by the Cloud Native Computing Foundation, with Google as a founding member. So it should be no surprise that many people choose Google Kubernetes Engine as "the best" one. And if you choose Google Kubernetes Engine, you probably use Nginx Ingress in GKE.

The problem I want to describe can become serious when you try to apply IP whitelisting inside your application. It's a common practice for API microservices or internal middleware applications used for communication with third-party systems. Of course, there are many places where you can limit access to your application, for example Ingress Controller annotations or cloud firewalls, but whitelisting at the application level is still in use. And when you want to restrict access to a limited set of IP addresses, your application must receive a header with the proper client IP address. Normally, X-Forwarded-For does the trick, but it's not that simple when you use Nginx Ingress in GKE. Or… it is simple, but you have to know what to do.


If you:

  • use Nginx Ingress Controller in Google Kubernetes Engine
  • use helm for Nginx Ingress Controller installation
  • want to pass the real client IP to the application

you have to install Nginx Ingress with the following command:
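The original command didn't survive here; a sketch of what it most likely looked like, assuming Helm 2 with Tiller (Tiller is mentioned later in the post) and the stable/nginx-ingress chart. The release name hellopapp is an assumption taken from the controller pod name shown later in the post:

```shell
# Helm 2 syntax; the release name "hellopapp" is an assumption.
# externalTrafficPolicy=Local preserves the real client source IP.
helm install --name hellopapp stable/nginx-ingress \
  --set controller.service.externalTrafficPolicy=Local
```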

Why? That’s why. 🙂


What's the problem? An example

Maybe it would be better to describe the problem with an example. Let's create a simple deployment with a service and expose it through Nginx Ingress in GKE. I assume that you have a Kubernetes cluster up and ready if you want to follow the instructions below, and also Helm/Tiller installed and configured. In this example, it's the simplest configuration, in the kube-system namespace and without any security features. Just for testing purposes. 🙂

I found a very easy example to run on Qwiklabs, so we can use it for our needs.

So we need:

  • deployment with some application
  • service for this application
  • Ingress Resource
  • Ingress Controller (Nginx Ingress in GKE)

Let’s create our deployment:
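The deployment command isn't preserved here; a minimal sketch following the Qwiklabs hello-app example (the image name and port are assumptions based on that tutorial):

```shell
# Creates a deployment named hello-app listening on port 8080
# (image and port assumed from the Qwiklabs example)
kubectl run hello-app --image=gcr.io/google-samples/hello-app:1.0 --port=8080
```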

We have our deployment, so now let's expose it as a service. In our case, we just use the kubectl expose command, which will create a ClusterIP service for our deployment.
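A sketch of that expose command, assuming the deployment is named hello-app:

```shell
# Creates a ClusterIP service (the default type) for the deployment
kubectl expose deployment hello-app --port=8080
```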

As you can see, the service is up and running, so now it's time for the Nginx Ingress Controller. One of the simplest ways is to use helm to install Nginx Ingress from the official chart repository. You can find more details about Nginx Ingress in the Charts repository on GitHub. In this case, we are using the command from the Qwiklabs example.
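The install command isn't preserved here; a sketch assuming Helm 2 and the stable/nginx-ingress chart, with the release name hellopapp taken from the controller pod name that appears later in this post:

```shell
# Default installation - no externalTrafficPolicy override yet,
# which is exactly what causes the problem described below
helm install --name hellopapp stable/nginx-ingress
```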

As you can see, two additional services have been deployed. The Ingress Controller is created as a LoadBalancer service by default, but the problem described here can also occur when you change the service type to NodePort.
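You can list the services created by the chart like this:

```shell
# The controller service is a LoadBalancer;
# the chart's default backend is a ClusterIP service
kubectl get service
```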

Ok, we have an Nginx Ingress Controller in GKE deployed, now we need to create an Ingress Resource. Let’s create a simple Ingress configuration, based on the example from Qwiklabs:
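The manifest itself isn't preserved here; a sketch based on the Qwiklabs example, assuming the service is named hello-app and the file is saved as ingress-resource.yaml (both names are assumptions):

```shell
# Write a minimal Ingress manifest routing /hello to the hello-app service
cat > ingress-resource.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: hello-app
          servicePort: 8080
EOF
```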

Now we need to apply this configuration to our Kubernetes cluster:
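Assuming the manifest was saved as ingress-resource.yaml (a hypothetical filename):

```shell
kubectl apply -f ingress-resource.yaml
```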

As you can see, the Ingress Resource is up and ready:
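You can verify it with:

```shell
kubectl get ingress
```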

Let’s do some curl magic!

Deployment – exists. Service – exists. Nginx Ingress Controller – exists. Ingress Resource – exists. So everything says that our application should work fine, right? Let's make a call!
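A sketch of that call. The service name hellopapp-nginx-ingress-controller is an assumption derived from the controller pod name shown later, and /hello is the path from the assumed Ingress example:

```shell
# Grab the external IP of the controller's LoadBalancer service
EXTERNAL_IP=$(kubectl get service hellopapp-nginx-ingress-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl http://$EXTERNAL_IP/hello
```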

Yeah, it seems that everything is ok, at least from the client side. But what can the logs tell us about the client IP? First, we need to find which pod is our Nginx Ingress Controller. To do that we can use the following command:
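A sketch of such a command, assuming the labels set by the stable/nginx-ingress chart:

```shell
# The chart labels the controller pod with app=nginx-ingress,component=controller
kubectl get pods -l app=nginx-ingress,component=controller
```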

Ok, our pod is named hellopapp-nginx-ingress-controller-597bd9cdf4-4z2hw. What do its logs say about the client IP?
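Let's check (the --tail flag just limits the output to the most recent entries):

```shell
kubectl logs hellopapp-nginx-ingress-controller-597bd9cdf4-4z2hw --tail=10
```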

Boom! Yeah, it's the wrong IP. I mean… not wrong exactly, but it's not the real client IP address. It's the address of one of the Kubernetes nodes. Don't believe me?

And what is the solution?

Ok, so we know that something is wrong, but what? The answer is very simple – services with Type=LoadBalancer are source NAT’d by default. According to the official documentation:

“As of Kubernetes 1.5, packets sent to Services with Type=LoadBalancer are source NAT’d by default, because all schedulable Kubernetes nodes in the Ready state are eligible for loadbalanced traffic. So if packets arrive at a node without an endpoint, the system proxies it to a node with an endpoint, replacing the source IP on the packet with the IP of the node (as described in the previous section).”

The solution is also very simple: we just need to set externalTrafficPolicy to Local on our Nginx Ingress in GKE. Easy. I'll show you how to do that, but first we should clean up the old installation.
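A sketch of the cleanup, assuming the Helm 2 release name hellopapp and the Ingress Resource name ingress-resource used in the examples above (both are assumptions):

```shell
# Remove the Ingress Resource and the controller release completely
kubectl delete ingress ingress-resource
helm delete --purge hellopapp
```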

Ok, now according to the Nginx Ingress Controller Helm chart, there is a parameter responsible for that behavior. Let's add it and check what happens.
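The chart exposes it as controller.service.externalTrafficPolicy; a sketch of the reinstall, with the same assumed release name as before:

```shell
# Local keeps the original client source IP instead of SNAT-ing it
helm install --name hellopapp stable/nginx-ingress \
  --set controller.service.externalTrafficPolicy=Local
```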

We can use exactly the same Ingress Resource configuration, there is no need to change anything.

Some curl magic again!
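Same call as before (service name assumed as earlier):

```shell
EXTERNAL_IP=$(kubectl get service hellopapp-nginx-ingress-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl http://$EXTERNAL_IP/hello
```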

Aaaand the final test!

Ok, so we are sure that everything works fine from the client side. But what about the client IP in the Nginx Ingress logs?
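The pod name changes after a reinstall, so a label selector is handy here (labels assumed from the chart):

```shell
kubectl logs -l app=nginx-ingress,component=controller --tail=10
```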

Yeah, it's different from the IPs of the nodes! But is it the right IP? Curl can help us again, just issue the following command to check your own IP address:
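One commonly used option (ipify is my assumption here; the original post may have used a different service):

```shell
# Prints your public IP address as seen from the outside
curl -s https://api.ipify.org
```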

Yup, that’s fine. So the problem has been solved!


It's a common problem, but as you can see, it's very easy to solve. So if you use Nginx Ingress in GKE, try to remember this configuration. Please remember that in this post I've described only the simplest example, without any SSL implementation or a more production-ready configuration (separate namespaces for Tiller, securing Tiller, separate namespaces for deployments/services, and so on). But that doesn't matter in this case, because as described in the official Kubernetes documentation, externalTrafficPolicy can be set independently of the rest of your configuration.

