X-Forwarded-For and Proxy-Protocol

14 October, 2020

Achim Ledermüller
Senior Manager Cloud

The Regensburg expat joined NETWAYS in 2012 after completing his business informatics degree there. In the Managed Services department, he is responsible for the operation and further development of our cloud platform.


Want to know how to get the IP addresses of your clients in your Kubernetes cluster? You’ll get an overview in five minutes!


From HTTP client to application

In the nginx ingress controller tutorial, we show you how to make an application publicly accessible. In the case of the NETWAYS Cloud, your Kubernetes cluster uses an OpenStack load balancer, which forwards client requests to an nginx ingress controller inside the Kubernetes cluster. This in turn distributes all requests to the corresponding pods. With all this shifting and forwarding of requests, the clients’ connection details are lost unless you configure things accordingly. Since this problem did not first appear with Kubernetes, the tried-and-tested solutions X-Forwarded-For and the proxy protocol are used. To keep your bearings in the buzzword bingo of service, load balancer, ingress, proxy, client and application, take a look at the following example of the path an HTTP request takes from the client through the components of a Kubernetes cluster to the application.
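As a rough sketch (names and addresses are illustrative), the request passes through these stations:

Client (198.51.100.23)
        |
        v
OpenStack load balancer        <- Service of type LoadBalancer
        |   adds X-Forwarded-For or the proxy protocol header
        v
nginx ingress controller       <- Ingress rules choose the backend
        |
        v
Service -> Pod (application)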

Client IP addresses with X-Forwarded-For

If you use HTTP, the client IP address can be stored in X-Forwarded-For (XFF) and passed along. XFF is an entry in the HTTP header and is supported by most proxy servers. In the example, the load balancer writes the client IP address into the XFF entry and forwards the request. All subsequent proxy servers and the application itself can then read from the XFF entry which address the request was originally sent from. In Kubernetes, the load balancer is configured via annotations in the Service object. If you set loadbalancer.openstack.org/x-forwarded-for: "true" there, the load balancer is configured accordingly. Of course, it is also important that the next proxy does not simply overwrite the X-Forwarded-For header again. For nginx, you can set the option use-forwarded-headers in its ConfigMap.

---
# Service
kind: Service
apiVersion: v1
metadata:
  name: loadbalanced-service
  annotations:
    loadbalancer.openstack.org/x-forwarded-for: "true"
spec:
  selector:
    app: echoserver
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
---
# ConfigMap (must be the one the nginx ingress controller reads, see its --configmap flag)
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
data:
  use-forwarded-headers: "true"
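Every hop that honours XFF appends the address it received the request from, so the header grows along the way. With made-up addresses, this is what it looks like after the load balancer and then after nginx has appended the address of the hop in front of it:

X-Forwarded-For: 198.51.100.23
X-Forwarded-For: 198.51.100.23, 10.0.0.4

The original client is therefore always the left-most entry.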

As the HTTP header is used, it is not possible to enrich HTTPS connections with the client IP address: the load balancer cannot insert a header into an encrypted connection it cannot read. Here you must either terminate TLS/SSL on the load balancer or fall back on the proxy protocol.

Client information with proxy protocol

If you use X-Forwarded-For, you are obviously limited to HTTP. The so-called proxy protocol was invented so that HTTPS and other applications behind load balancers and proxies can still access the client’s connection information. Technically, the load balancer prepends a small header with the client’s connection details to the forwarded connection. The next hop (in this case nginx) must of course also understand the protocol and handle it accordingly. Besides classic proxies, other applications such as MariaDB or Postfix also support the proxy protocol. To activate it, you must add the annotation loadbalancer.openstack.org/proxy-protocol: "true" to the Service object. The protocol must also be activated on the receiving proxy.

---
# Service Loadbalancer
kind: Service
apiVersion: v1
metadata:
  name: loadbalanced-service
  annotations:
    loadbalancer.openstack.org/proxy-protocol: "true"
spec:
  selector:
    app: echoserver
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
---
# NGINX ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
data:
  use-proxy-protocol: "true"
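In version 1 of the proxy protocol, this extra header is a single human-readable line sent before the actual payload. With made-up addresses it looks like this:

PROXY TCP4 198.51.100.23 203.0.113.10 51234 443

The fields are the protocol family, the source (client) address, the destination address, and the source and destination ports. Because the line travels in front of the application data, it also works for TLS connections.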

In most cases, however, you will fall back on the Helm chart of the nginx ingress controller. A corresponding configuration is even easier there.

nginx ingress controller and Helm

If you use Helm to install the nginx-ingress-controller, the configuration is very straightforward. The proxy protocol is activated for both nginx and the load balancer via the Helm values file:

nginx-ingress.values:

---
controller:
  config:
    use-proxy-protocol: "true"
  service:
    annotations:
      loadbalancer.openstack.org/proxy-protocol: "true"
    type: LoadBalancer

helm install my-ingress stable/nginx-ingress -f nginx-ingress.values
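After the installation, OpenStack needs a moment to provision the load balancer. You can watch the controller’s service until its public address appears (the exact service name depends on your release name and chart version):

kubectl get services -w

As soon as the EXTERNAL-IP column switches from <pending> to a public IP address, the load balancer is ready.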

The easiest way to test whether everything works as expected is Google’s echoserver. This is a small application that simply returns the HTTP request back to the client. As described in the nginx ingress controller tutorial, we need a Deployment with a Service and an Ingress. The former starts the echo server, the Service makes it reachable in the cluster, and the Ingress configures nginx so that the requests are forwarded to the Deployment.

---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  selector:
    matchLabels:
      app: echoserver
  replicas: 1
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - name: echoserver
        image: gcr.io/google-containers/echoserver:1.8
        ports:
          - containerPort: 8080
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: echoserver-svc
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoserver
---
# Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: echoserver-ingress
spec:
  rules:
  - host: echoserver.nws.netways.de
    http:
      paths:
        - backend:
            serviceName: echoserver-svc
            servicePort: 80

For testing purposes, it is best to add a fake entry to your /etc/hosts so that echoserver.nws.netways.de points to the public IP address of your nginx ingress controller. curl echoserver.nws.netways.de will then show you everything the echo server knows about your client, including the IP address in the X-Forwarded-For header.
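Abridged and with illustrative values, the echo server’s answer looks roughly like this:

Hostname: echoserver-5d6f685f48-xxxxx

Request Information:
    client_address=10.244.1.1
    method=GET
    real path=/

Request Headers:
    host=echoserver.nws.netways.de
    user-agent=curl/7.68.0
    x-forwarded-for=198.51.100.23

If x-forwarded-for shows your own public address instead of an internal one, the whole chain works.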

Conclusion

In the Kubernetes cluster, the proxy protocol is probably the better choice for most use cases. The well-known ingress controllers support it, and TLS/SSL connections can still be configured and terminated in the K8s cluster. The quickest way to find out what information actually reaches your application is Google’s echoserver.

 
