High latency in Traefik for large responses

I am using Traefik with tracing turned on, and I have noticed that for large responses (in this case ~500 kB) the latency added inside Traefik is very large - larger than that of my actual application (the backend).

In this example, the response spent ~500 ms inside Traefik - almost twice as long as the backend took to process the request (300 ms). What could be the reason for this? The backend application and Traefik run on the same machine. With small responses this does not happen.

I am using the Docker provider; here are the labels of the backend service:

      - "traefik.enable=true"
      - "traefik.http.middlewares.stripprefix.stripprefix.prefixes=/api"
      - "traefik.http.routers.backend.rule=Host(`api.${DOMAIN}`)"
      - "traefik.http.routers.backend.entrypoints=websecure"
      - "traefik.http.routers.backend.tls.certresolver=lets-encrypt"
      - "traefik.http.services.backend.loadbalancer.server.port=8080"
      - "traefik.http.routers.backend.middlewares=cors-backend@docker,stripprefix@docker"
      - "traefik.http.middlewares.cors-backend.headers.addVaryHeader=true"
      - "traefik.http.middlewares.cors-backend.headers.accessControlAllowHeaders=Authorization,Content-Type"
      - "traefik.http.middlewares.cors-backend.headers.accessControlAllowMethods=*"
      - "traefik.http.middlewares.cors-backend.headers.accessControlAllowCredentials=true"
      - "traefik.http.middlewares.cors-backend.headers.accessControlAllowOriginListRegex=.*"

and here is the static configuration:

certificatesResolvers:
  lets-encrypt:
    acme:
      caServer: https://acme-v02.api.letsencrypt.org/directory
      email: ***
      storage: /etc/traefik/acme/acme.json
      tlsChallenge: {}


entryPoints:
  web:
    address: :80
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: :443


providers:
  file:
    directory: /etc/traefik/config


  docker:
    exposedByDefault: false


api:
  insecure: true
log:
  level: DEBUG

tracing:
  otlp:
    grpc:
      endpoint: {{ .CollectorEndpoint }}
      insecure: true

ping: {}

Here's an example of a request with a small response payload that does not suffer from this issue:

If you think this is a bug, you can create a GitHub issue. The challenge is to provide a reproducible example for the developers.

I needed a tool that creates random HTTP responses with a fixed size and duration, so I created node-byte-server. You can use it to fetch files of a defined size (and with a defined overall duration):

https://bytes.example.com/whatever?size=512000&time=1
https://bytes.example.com/whatever?size=1024000&time=1