Getting real client IP (X-FORWARDED-FOR) in k3s multi-server HA setup

Hi everyone,

I've set up a pretty standard K3s cluster with 3 public cloud servers (all running in "master"/server mode) with Traefik. I've installed cert-manager and I'm using a Let's Encrypt wildcard certificate for HTTPS. I've also created a wildcard DNS record that contains all 3 public IPs (my aim is a proper, resilient HA cluster). There's nothing in front of the servers/Traefik, no other load balancer; traffic hits one of the servers directly.

Now I'm having a problem getting the real client IP in my pods (I'm running nginx inside one of them to debug this). I'm looking at the standard X-Forwarded-For and X-Real-IP headers. I only get the client IP when the request happens to hit the server where the Traefik pod actually runs. If it hits either of the other two servers, I get that server's public IP instead of the client's.

I've been through Google there and back and I've tried pretty much everything, including:

externalTrafficPolicy: Local
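
(I set that on the Traefik LoadBalancer service, roughly like this, via the same HelmChartConfig mechanism; I'm assuming the k3s-packaged chart's service values here:)

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    service:
      spec:
        # only send traffic to endpoints on the receiving node,
        # which preserves the client source IP (no SNAT by kube-proxy)
        externalTrafficPolicy: Local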

and reconfigured Traefik with:

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    hostNetwork: true

    additionalArguments:
      - "--entryPoints.web.proxyProtocol.insecure"
      - "--entryPoints.web.proxyProtocol.trustedIPs=127.0.0.1/32,10.0.0.0/8,192.168.0.0/16,172.16.0.0/12,IP1,IP2,IP3"
      - "--entryPoints.web.forwardedHeaders.insecure"
      - "--entrypoints.web.forwardedHeaders.trustedIPs=127.0.0.1/32,10.0.0.0/8,192.168.0.0/16,172.16.0.0/12,IP1,IP2,IP3"
      - "--entryPoints.websecure.proxyProtocol.insecure"
      - "--entryPoints.websecure.proxyProtocol.trustedIPs=127.0.0.1/32,10.0.0.0/8,192.168.0.0/16,172.16.0.0/12,IP1,IP2,IP3"
      - "--entryPoints.websecure.forwardedHeaders.insecure"
      - "--entrypoints.websecure.forwardedHeaders.trustedIPs=127.0.0.1/32,10.0.0.0/8,192.168.0.0/16,172.16.0.0/12,IP1,IP2,IP3"

Truth be told, I'm at the end of my options. I don't understand why Traefik behaves this way. The only solution I could come up with (which I haven't tested, though) is to force Traefik to run on all 3 servers, sketched below, but that feels like an anti-pattern; it shouldn't have to work that way. At this point I'm not even sure whether it's a misconfiguration or a bug. Any advice is much appreciated.
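
For completeness, what I have in mind is roughly this (untested on my side; I'm assuming the bundled Traefik chart's deployment.kind value does what its docs say):

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    # run one Traefik pod on every node, so whichever node receives
    # the request can also terminate it locally
    deployment:
      kind: DaemonSet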

Thank you, Jan.


Hey there,
I'm currently trying to solve the exact same problem.
Did you find a solution for this?

My setup has 4 nodes (it shouldn't matter whether it's 3 or 4)
and a kennethreitz/httpbin to diagnose with (nice Swagger UI).
I make a curl request to one of the nodes that is not running Traefik.

My request arrives with the IP of the ServiceLoadBalancer on that node as its origin.
X-Forwarded-Server and X-Forwarded-Host headers are set (but I think that is done by Traefik itself).

I did a packet capture on the node that receives the request.

To me it looks like the request is rewritten by the ServiceLoadBalancer and therefore loses its real source IP.

Changing externalTrafficPolicy and hostNetwork didn't change this behavior for me.

forwardedHeaders and proxyProtocol kick in too late to affect this: by the time Traefik sees the request it has already been rewritten, so the "real" source is no longer known.
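
For anyone who wants to reproduce this, my debug target is nothing fancier than httpbin behind a plain Ingress, roughly like this (a sketch; the names, namespace, and hostname are placeholders I chose):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
        - name: httpbin
          image: kennethreitz/httpbin   # listens on port 80
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  selector:
    app: httpbin
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
spec:
  rules:
    - host: httpbin.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpbin
                port:
                  number: 80

Hitting /get on that host echoes back the origin plus all request headers, so you can see exactly which source IP and X-Forwarded-For value Traefik passed on.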

Kind regards
MadddinTribleD

Thanks for your interest in Traefik!

@jkotrlik could you explain a bit more what you mean with

If possible, could you run Traefik in debug mode and post the output?
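
With the k3s-packaged chart, something along these lines should do it (a sketch; --log.level=DEBUG is the standard Traefik v2 flag):

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      # verbose logging for troubleshooting
      - "--log.level=DEBUG"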

I went round the houses with this one also...

I landed on

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      web:
        forwardedHeaders:
          trustedIPs:
            - 10.0.0.0/8
        proxyProtocol:
          trustedIPs:
            - 10.0.0.0/8
      websecure:
        forwardedHeaders:
          trustedIPs:
            - 10.0.0.0/8
        proxyProtocol:
          trustedIPs:
            - 10.0.0.0/8
    service:
      spec:
        externalTrafficPolicy: Local
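
You can either kubectl apply the HelmChartConfig or drop the file into /var/lib/rancher/k3s/server/manifests/ on a server node; k3s watches that directory and re-applies changes to the packaged Traefik chart.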

Though this also worked:

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    hostNetwork: true

@EarthlingDavey you saved my f***ing day mate, thx!

No sweat! I appreciate the feedback! :v:

You saved my life!!! Thank you. But do I need to re-apply it after upgrading the k3s version?