Traefik show Real Remote IP Address in Logs [rootless podman]

I've lost approximately four days so far trying to get rootless Podman (similar to Docker) working and showing the real remote IP addresses in the logs.

I've read many posts here as well as other sources, but none of the suggestions appear to work in my case.

Something seems seriously broken in Traefik. I also tried the latest v3.0.1; nothing. It just will NOT work.

It seems to only work in bridged network mode (i.e. WITHOUT specifying network_mode: {anything} in the compose.yml file). Then I get a remote IP of 10.x.y.z in the access log and I can access the dashboard correctly.
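
For context, here is a hypothetical minimal compose.yml for the bridged case described above (image tag, ports, and flags are placeholders, not my exact setup):

```yaml
# Default (bridged) rootless network: no network_mode set.
# Podman's rootlessport forwarder source-NATs connections,
# so Traefik sees 10.x.y.z instead of the real client IP.
services:
  traefik:
    image: docker.io/library/traefik:v3.0
    command:
      - --api.dashboard=true
      - --entryPoints.web.address=:80
      - --accesslog=true
    ports:
      - "80:80"
```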

This is where things stop working:

  • Setting network_mode: host does NOT work (Traefik dashboard yields 404 / Not Found)
  • Setting network_mode: "slirp4netns:port_handler=slirp4netns" does NOT work (Traefik dashboard yields 404 / Not Found)
  • Setting network_mode: pasta does NOT work (Traefik dashboard yields 404 / Not Found)

(but in all of these cases the correct remote IP address is logged :slight_smile:)

So it's a choice between:

  • a usable proxy server with 100% wrong remote IP logging
  • a 100% useless proxy server with correct remote IP logging

Yeah ... it's not fun :roll_eyes:.

I tried:

  • Podman 4.9.4 on Debian 12 Bookworm AMD64, with Podman packages from Debian testing (APT pinning method)
  • Podman 5.0.3 on Fedora 40 AMD64

In a desperate effort, I also attempted to run Caddy as an alternative and ... it works. Well, kind of, because I still haven't managed to convert all of the directives, etc. But using the file_server module I could serve an HTML file without issues, with SSL certificates and the correct IP logged.

This should in theory match the Traefik dashboard configuration, although I think there is NO forwarding going on in Caddy with this basic file_server module (no X-Forwarded-* headers were logged in the access log, but it's not clear to me whether that is a logging issue or a non-forwarding issue).
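
For reference, the Caddy test above can be reproduced with a Caddyfile along these lines (a sketch; the domain and paths are assumptions):

```
# Serve a static page over HTTPS; with no upstream proxying,
# Caddy logs the real client IP of the TCP connection directly.
example.com {
    root * /srv/www
    file_server
    log {
        output stdout
    }
}
```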

In the case of Traefik, there is an error related to failed IP address detection (service "dashboard" error: unable to find the IP address for the container "/traefik": the server is ignored).

In the case of Caddy, podman inspect caddy also yields an empty IP address, but Caddy doesn't seem bothered by it :slightly_smiling_face:.

Any idea what is going on here? I can post the compose file if anyone is willing to help, but I'd like more productive replies than "Podman is not supported" (as unfortunately often happens here).


Try the Traefik group on Reddit. Maybe someone there has experience with Podman.

Given the "moderators" and content suppression on Reddit, I'm not optimistic, or even sure it's worth trying ...

Anyway, Caddy works and Traefik doesn't when using anything that is NOT bridged networking.
Therefore something must be wrong with Traefik somewhere ...

You can try trusting the forwarded headers.
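
For reference, trusting forwarded headers is configured per entrypoint in the static configuration, roughly like this (the entrypoint name and CIDR here are placeholders):

```
--entryPoints.websecure.forwardedHeaders.trustedIPs=192.168.1.0/24
```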

What would that do, if the error/server log says it cannot find an IP address for the Traefik container (itself), thus disabling the dashboard router/service :no_mouth:?

It seems like a much more fundamental problem.

In the access log I only see error 404, the same as displayed in the browser when I try to visit the page.

Maybe start with a simple Traefik example.

:no_mouth: :no_mouth: :no_mouth:

The issue is that Traefik doesn't work in network_mode: {host, slirp4netns, pasta}.

It works fine in bridged network mode, which your example also uses (the proxy network).

You can set individual ports to host mode; check the simple Traefik Swarm example.
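
In Compose long syntax, publishing a single port in host mode while the container stays on the bridged network looks roughly like this (image and network names are assumptions):

```yaml
services:
  traefik:
    image: docker.io/library/traefik:v3.0
    networks:
      - proxy
    ports:
      # Long syntax: only this published port bypasses the
      # userspace forwarder; the container network is unchanged.
      - target: 443
        published: 443
        protocol: tcp
        mode: host
networks:
  proxy: {}
```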

I know, but I already tried that :frowning:. It makes absolutely no difference.

I can't remember the error log in that case, but the end result was 404 / Not Found when trying to visit the dashboard (in network_mode: {host, slirp4netns, pasta}), or the dashboard working fine but the remote IP address logged being 10.x.y.z (bridged mode).

Essentially it makes no difference.

There could be something related to Podman (by default, rootless Podman uses rootlessport together with slirp4netns, which does source NAT; therefore the remote IP address is shown as 10.x.y.z instead of the real remote IP address).
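
The source-NAT behavior described above can be sidestepped per container by selecting the slirp4netns port handler instead of rootlessport; a sketch (image and port are placeholders, and this handler is slower than rootlessport):

```shell
# Use slirp4netns itself to forward the published port,
# which preserves the original client IP inside the container.
podman run --rm -p 8080:80 \
  --network slirp4netns:port_handler=slirp4netns \
  docker.io/library/nginx
```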

The story is/should be different with pasta, where the real IP address should appear in the logs.

On the other hand, there is definitely something related to Traefik, since with rootless Podman 5.0.3 and pasta, Traefik refuses to do anything ("NO IP address detected") while Caddy could serve the whoami container without issues (in that case the remote IP was 172.16.x.x, which is the NAT gateway of the server, NOT Podman, and X-Forwarded-For shows the real remote IP address, a public IP address, as I would expect).

To be honest, though, I find Traefik easier to use (with labels), the YAML syntax of its dynamic configuration files is better IMHO, and the labels for Caddy using the caddy-docker-proxy plugin are not on par with Traefik's. Anything besides using a Caddyfile seems like some sort of black magic to me. The way to write reverse_proxy upstreams in Caddy seems completely reversed compared to simply saying xxxxx.loadbalancer.port=8080 or so in Traefik.

There are obviously many people struggling with the issue of the real IP in Traefik logs and passing it to the upstreams. Until I read this post, I thought the solution was running Traefik in the host network, so I'll have to start re-reading the dozens of questions on these forums, most without a solution or a clear explanation.
Considering this is a proxy, it should be something pretty well documented and explained in tutorials, and as far as I know it obviously isn't.

What's the issue? When Traefik is listening on the IP directly, you should see the source IP address in the access logs. It is forwarded with HTTP requests in the X-Forwarded-For and X-Real-Ip headers.

When you have another component in front of it (like a load balancer), you need to ensure the already-present "forwarded" headers are trusted (doc). For plain TCP you can enable ProxyProtocol.

Your target service will need to use the "forwarded" headers (or ProxyProtocol), not the TCP connection IP, as the connection always comes from Traefik.


I wrote an example showing how to run Traefik with rootless Podman and get the real remote IP address. The trick is to use socket activation. Support for socket activation was added in Traefik 3.1 (released July 2024).

Example 1 has a Traefik container acting as an HTTP reverse proxy that forwards requests to a whoami container. Both of the containers run in the same custom network. I tried out the example on a Linux computer. Instead of Docker Compose, I used Podman Quadlets to set this up.

For details, see Example 1:
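
As a rough illustration of the socket-activation idea under Quadlet (unit names and addresses here are my assumptions; see the linked example for the authoritative version), systemd binds the socket and Podman passes the listening file descriptor into the container, so Traefik accepts connections directly and sees the real client IP:

```ini
# ~/.config/systemd/user/traefik.socket (hypothetical sketch,
# paired with a traefik.container Quadlet of the same name)
[Unit]
Description=Socket for rootless Traefik

[Socket]
ListenStream=0.0.0.0:80

[Install]
WantedBy=sockets.target
```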


Running Traefik in bridge mode should work and report the correct client IP addresses in the logs. The only problems that may prevent this are:

  1. you run Traefik behind another reverse proxy (Cloudflare Proxy, Tailscale, Cloudflare Tunnels, pfSense/OPNsense, others). Traefik will not automatically trust the headers set by the reverse proxy in front, which typically hold the real IP address you are interested in. The headers are called X-Forwarded-For or CF-Connecting-IP and are ignored if not trusted.
  2. you resolve your domain names (internally via split-brain DNS, for example) to both IPv4 and IPv6, but your Docker server/host does not support both. In this case, logs will show the Docker gateway IP, as the host does not support IPv6 and falls back to IPv4 using the gateway IP.

The solution for 1. is quite easy: define the IPs of the reverse proxy in front as trusted. This can be done at the entrypoint level or by using a plugin in Traefik.

Here an example for CF IPs:

entryPoints:
  # Redirect everything from HTTP to HTTPS
  http:
    address: :80
    forwardedHeaders:
      trustedIPs: &trustedIps
        # start of Cloudflare public IP list for HTTP requests, remove this if you don't use it; https://www.cloudflare.com/de-de/ips/
        - 103.21.244.0/22
        - 103.22.200.0/22
        - 103.31.4.0/22
        - 104.16.0.0/13
        - 104.24.0.0/14
        - 108.162.192.0/18
        - 131.0.72.0/22
        - 141.101.64.0/18
        - 162.158.0.0/15
        - 172.64.0.0/13
        - 173.245.48.0/20
        - 188.114.96.0/20
        - 190.93.240.0/20
        - 197.234.240.0/22
        - 198.41.128.0/17
        - 2400:cb00::/32
        - 2606:4700::/32
        - 2803:f800::/32
        - 2405:b500::/32
        - 2405:8100::/32
        - 2a06:98c0::/29
        - 2c0f:f248::/32
        # end of Cloudflare public IP list
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https

  # HTTPS endpoint, with domain wildcard
  https:
    address: :443
    forwardedHeaders:
      # reuse the list of Cloudflare trusted IPs above for HTTPS requests
      trustedIPs: *trustedIps
    # enable HTTP3 QUIC via UDP/443
    #http3:
    #  advertisedPort: '443'      
    http:
      tls:
        # Generate a wildcard domain certificate
        certResolver: myresolver
        domains:
          - main: example.com # change this to your proxy domain
            sans:
              - '*.example.com' # change this to your proxy domain
      middlewares:
        - security-headers@file # reference to a dynamic middleware for setting http security headers per default

The solution for 2. is also quite easy: either disable IPv6 DNS resolution or support both IPv4 and IPv6 on your Docker server.

@luckylinux I ran into the same pain. What I ended up doing was adding an HAProxy container in front of my Traefik instances (in network_mode: host):

  haproxy:
    image: haproxy:alpine
    container_name: haproxy
    network_mode: host
    volumes:
      - ./resources/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro,Z
    restart: always

with haproxy.cfg:

global
    log stdout format raw local0
    maxconn 4096

defaults
    log global
    mode tcp
    option tcplog
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms


# Frontend for traefik-public (port 443)
frontend https_public_front
    bind *:443
    mode tcp
    option tcplog
    default_backend traefik_public_back

# Frontend for traefik-private (port 10443)
frontend https_private_front
    bind *:10443
    mode tcp
    option tcplog
    default_backend traefik_private_back

# Backend for traefik-public (exposed on port 7676)
backend traefik_public_back
    mode tcp
    option tcp-check
    server traefik-public 127.0.0.1:7676 send-proxy-v2 check

# Backend for traefik-private (exposed on port 7575)
backend traefik_private_back
    mode tcp
    option tcp-check
    server traefik-private 127.0.0.1:7575 send-proxy-v2 check

And adding the IP range that HAProxy connects from to the trustedIPs of each Traefik instance (ideally it should be the actual IP, not the whole IP range, but close enough for me):

- "--entrypoints.websecure.forwardedHeaders.trustedIPs=10.0.0.0/8,172.16.0.0/12"
- "--entrypoints.websecure.proxyProtocol.trustedIPs=10.0.0.0/8,172.16.0.0/12"

With that setup, I can finally see the real client IP with minimal changes to my current setup. HAProxy is lightweight anyway, and I really could not find any other solution.

I kinda stepped out of this topic :sweat_smile:.

I'm personally not so fond of having to add yet another layer to the "stack" just to fix a bug (or "feature") in Traefik, to be honest.

For 99% of my use cases an HTTP reverse proxy without any load balancing is sufficient, so for those cases I definitely prefer Caddy :smiley:.

For the other cases (TCP and/or UDP proxying), I have to use Traefik (couldn't really get caddy-layer4 to work so far), and I kinda forgot about the IP address logging part there.


I'd have to dig into it (I didn't notice your post until now, or I forgot about it, sorry).

I still don't get why socket activation is required. What is so special about Traefik that it requires this?

What is the advantage of socket activation, though?

To me it looks like an unnecessary complication, at least for my use case.

From what I know, pasta / passt (in Podman 5.x) will happily preserve the source IP address, and Caddy is perfectly happy with it. It's just Traefik refusing to start in that mode. There might be something related to missing privileges (I'm running Podman rootless, after all) and possibly some kind of "binding", but again, this seems to be a Traefik issue rather than a pasta / Podman one to me.

Caddy just works (which is fine for an HTTP-only proxy without load balancing).

I feel that Traefik does some unnecessary checks at startup and, not finding any IP, just gives up.


Late reply, but here I go.

There might be something related to rootless Podman (a missing "binding" or permission) that caused Traefik to give up (no IP addresses detected), although, as I said, that doesn't bother Caddy at all. So to me, that was an unnecessary check.

Regarding point 1, that's NOT the case, since I am testing LAN-only. Forget about any upstream router, gateway, reverse proxy, etc. :wink:.

Regarding point 2, I have the feeling that Traefik indeed just uses IPv4 by default.

If you use rootless Podman together with the network driver pasta, run the container in a custom network, and publish the port with --publish (-p), then the source IP address of incoming connections is not preserved. There are plans to fix this. For details, see

To work around the problem, you can use socket activation. Fortunately, socket activation is supported by Traefik.


Yeah, and unfortunately we cannot "mix" the pasta network approach (pod) with the bridge network approach, where we basically put all the containers in one network and then just use Traefik labels with them.

I think with pasta it only makes sense to use a separate Traefik proxy for each application, the same as I do with Caddy. But that of course requires different IP bindings on the host for the same port (not an issue with IPv6, but IPv4 is a different thing :sweat_smile:).

I gave it a try now and, while it feels quite complex in terms of files (I'll probably have to write a Python wrapper around the whole thing, just to avoid having to manually create 10 files or so :rofl:), it works.

The only thing to keep in mind is that we need multiple sockets for IPv4 and IPv6 (or for any additional address, for that matter), besides also requiring multiple files per port. Hence my remark above :smiley:.

It works fine, but this was the issue that shed some light on it:

I'm NOT sure whether FileDescriptorName needs to be unique for a given user, or whether 2 different sockets (NOT used by the same Traefik instance or another container's instance) can have the same FileDescriptorName.

The systemd manual isn't fully clear about it:

Assigns a name to all file descriptors this socket unit encapsulates.

This is useful to help activated services identify specific file descriptors, if multiple fds are passed. Services may use the sd_listen_fds_with_names(3) call to acquire the names configured for the received file descriptors. Names may contain any ASCII character, but must exclude control characters and ":", and must be at most 255 characters in length.

If this setting is not used, the file descriptor name defaults to the name of the socket unit (including its .socket suffix) when Accept=no, "connection" otherwise.

Added in version 227.

Basically, you'll have to create multiple entryPoints entries, one for each socket.

From:

     # WITHOUT Systemd Socket Activation
     --entryPoints.web.address=:80 \
     --entryPoints.web.http.redirections.entrypoint.to=websecure \
     --entryPoints.web.http.redirections.entrypoint.scheme=https \
     --entryPoints.web.http.redirections.entrypoint.permanent=true \

     # WITHOUT Systemd Socket Activation
     --entryPoints.websecure.address=:443 \
     --entryPoints.websecure.http.tls=true \
     --entryPoints.websecure.transport.respondingTimeouts.readTimeout=420 \
     --entryPoints.websecure.transport.respondingTimeouts.writeTimeout=420 \
     --entryPoints.websecure.transport.respondingTimeouts.idleTimeout=420 \

To:

     # WITH Systemd Socket Activation
     # The name "traefik-web-ipv4" matches
     # the setting "FileDescriptorName=traefik-web-ipv4" in the corresponding .socket file.
     # Traefik inherits the sockets from systemd (the parent process).
     --entryPoints.traefik-web-ipv4.address=:80 \
     --entryPoints.traefik-web-ipv4.http.redirections.entrypoint.to=websecure \
     --entryPoints.traefik-web-ipv4.http.redirections.entrypoint.scheme=https \
     --entryPoints.traefik-web-ipv4.http.redirections.entrypoint.permanent=true \

     # WITH Systemd Socket Activation
     # The name "traefik-web-ipv6" matches
     # the setting "FileDescriptorName=traefik-web-ipv6" in the corresponding .socket file.
     # Traefik inherits the sockets from systemd (the parent process).
     --entryPoints.traefik-web-ipv6.address=:80 \
     --entryPoints.traefik-web-ipv6.http.redirections.entrypoint.to=websecure \
     --entryPoints.traefik-web-ipv6.http.redirections.entrypoint.scheme=https \
     --entryPoints.traefik-web-ipv6.http.redirections.entrypoint.permanent=true \

     # WITH Systemd Socket Activation
     # The name "traefik-websecure-ipv4" matches
     # the setting "FileDescriptorName=traefik-websecure-ipv4" in the file https.socket
     # Traefik inherits the sockets from systemd (the parent process).
     --entryPoints.traefik-websecure-ipv4.address=:443 \
     --entryPoints.traefik-websecure-ipv4.http.tls=true \
     --entryPoints.traefik-websecure-ipv4.transport.respondingTimeouts.readTimeout=420 \
     --entryPoints.traefik-websecure-ipv4.transport.respondingTimeouts.writeTimeout=420 \
     --entryPoints.traefik-websecure-ipv4.transport.respondingTimeouts.idleTimeout=420 \

     # WITH Systemd Socket Activation
     # The name "traefik-websecure-ipv6" matches
     # the setting "FileDescriptorName=traefik-websecure-ipv6" in the file https.socket
     # Traefik inherits the sockets from systemd (the parent process).
     --entryPoints.traefik-websecure-ipv6.address=:443 \
     --entryPoints.traefik-websecure-ipv6.http.tls=true \
     --entryPoints.traefik-websecure-ipv6.transport.respondingTimeouts.readTimeout=420 \
     --entryPoints.traefik-websecure-ipv6.transport.respondingTimeouts.writeTimeout=420 \
     --entryPoints.traefik-websecure-ipv6.transport.respondingTimeouts.idleTimeout=420 \

Furthermore, in every container / application, you'll have to specify multiple entryPoints for e.g. HTTPS, whereas before we only had one.

e.g. from:

# WITHOUT Systemd Socket Activation
Label=traefik.http.routers.myapp-router.entryPoints=web,websecure

to:

# WITH Systemd Socket Activation
Label=traefik.http.routers.myapp-router.entryPoints=traefik-web-ipv4,traefik-web-ipv6,traefik-websecure-ipv4,traefik-websecure-ipv6

These have to match e.g. the FileDescriptorName=traefik-websecure-ipv4 item in the traefik-https-ipv4.socket file.
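
For completeness, a hypothetical sketch of what such a traefik-https-ipv4.socket unit could look like (addresses and description are my assumptions; the FileDescriptorName is the part that must match the entrypoint name above):

```ini
# traefik-https-ipv4.socket (hypothetical sketch)
[Unit]
Description=Traefik HTTPS socket (IPv4)

[Socket]
# IPv4-only bind; the IPv6 counterpart would use ListenStream=[::]:443
ListenStream=0.0.0.0:443
# Must match the entryPoints name passed to Traefik
FileDescriptorName=traefik-websecure-ipv4

[Install]
WantedBy=sockets.target
```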