Creepy slow response time

Response times for simple HTTP/HTTPS redirects using the redirectRegex and redirectScheme middlewares are extremely slow. Multiple timeouts are reported in the browser while visiting the configured hosts, as well as when curling the links.

P.S. I have configured around 480 routers with Host/Path values to test a bulk-redirection scenario.

Sample dynamic 'file' provider configuration:

http:
  routers:
    pratheesh-router0:
      rule: "Host(`pratheesh-chacko.ga`)"
      middlewares:
      - pratheesh-middleware1
      service: "pratheesh-service1"
  middlewares:
    pratheesh-middleware1:
      redirectRegex:
        regex: "^http://pratheesh-chacko.ga/(.*)"
        replacement: "https://google.com/"
        permanent: true
  services:
    pratheesh-service1:
      loadBalancer:
        servers:
        - url: https://google.com/
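
Note that with this regex the captured (.*) group is never used in the replacement, so every path lands on the Google root, and the unescaped dots match any character. A hypothetical path-preserving variant would look like:

```yaml
# Sketch: dots escaped, capture group reused in the replacement.
http:
  middlewares:
    pratheesh-middleware1:
      redirectRegex:
        regex: '^http://pratheesh-chacko\.ga/(.*)'
        replacement: 'https://google.com/${1}'
        permanent: true
```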

You can try browsing this host and observe the turnaround time (TAT) for the request.
Env: Azure Kubernetes Service, running Traefik in a Docker container with 2 pods under a single deployment.
Traefik dynamic and static configurations are hosted as YAMLs ('file' provider).
These YAMLs, containing around 480 router scenarios, are mounted into the container as a volume via an Azure File Share.
The deployment is exposed through a Kubernetes service pointing to an Azure Load Balancer with a public IP.

@vibhujain This is really weird. Suspecting it could possibly be a bug, I gave it a go in a more controlled local environment; here are my results:

Timings in seconds:

With no middleware:
Connect: 0.004274 TTFB: 0.005965 Total time: 0.006092

With redirect-scheme, redirected from http:
Connect: 0.004199 TTFB: 0.014645 Total time: 0.020215

With redirect-regex, redirected from http:
Connect: 0.004205 TTFB: 0.012758 Total time: 0.018608

Measurements were made with this command:

curl -v -L -k -w "Connect: %{time_connect} TTFB: %{time_starttransfer} Total time: %{time_total} \n" http://whoami.docker.localhost

And this is my configuration file:

http:
  routers:
    my-router-http:
      rule: Host(`whoami.docker.localhost`)
      service: whoami-svc
      entryPoints:
        - web
      #middlewares:
        #- regex-redirect
        #- http-to-https

    my-router-https:
      rule: Host(`whoami.docker.localhost`)
      service: whoami-svc
      entryPoints:
        - websecure
      tls: true      

  services:
    whoami-svc:
      loadBalancer:
        servers:
        - url: http://whoami

  middlewares:
    http-to-https:
      redirectScheme:
        scheme: https
        permanent: true

    regex-redirect:
      redirectRegex:
        regex: "^http://whoami.docker.localhost/(.*)"
        replacement: "https://whoami.docker.localhost/"
        permanent: true

The runs just alternated which middleware was commented out in the router. In this case there is a negligible difference between doing a direct scheme redirect and applying the regex redirect. Even going from no middleware to adding one doesn't add much, and in every case it was very far from the 1 s mark.

We don't have your timings, only that it feels "creepy slow". Could you provide some data on how you measured it?
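
By the way, if the goal is just bulk HTTP-to-HTTPS redirection, it may be simpler to configure it once at the entry-point level in the static configuration instead of attaching a redirect middleware to each of the ~480 routers. A sketch, assuming entry points named web and websecure:

```yaml
# Static configuration sketch: redirect every request on :80 to :443 globally.
entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
          permanent: true
  websecure:
    address: ":443"
```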

Hi, I just registered exactly for this topic.

@douglasdtm any chances you can look into this?

I have run:

curl -v -L -k -w "Connect: %{time_connect} TTFB: %{time_starttransfer} Total time: %{time_total} \n" https://sub.domain.tld

[...]

Connect: 0.031274 TTFB: 4.132768 Total time: 4.132832

After some loads, it comes back to normal:

curl -v -L -k -w "Connect: %{time_connect} TTFB: %{time_starttransfer} Total time: %{time_total} \n" https://sub.domain.tld

[...]

Connect: 0.019639 TTFB: 0.030817 Total time: 0.030897

But 4.1 s for a cold connection is still "creepy slow" :wink:
Also I want to add:

  1. both servers are on dedicated 2.5 Gb/s networks and close to each other.
  2. with Nginx there was never such an issue.
  3. the config does not handle the request (it is a path that is not handled and therefore returns 404 page not found).
  4. I use Traefik v3.2.1 via docker-compose, with a middleware for whitelisting.
  5. after waiting some minutes and checking again: same situation, ca. 4 s wait.

I would really appreciate any qualified response that could help figure out what causes this.

Thanks in advance! :slight_smile:

This btw is my docker-compose.yml:

  traefik:
    image: traefik:v3
    container_name: traefik
    hostname: traefik
    network_mode: host
    read_only: true
    volumes:
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "/var/docker/data/traefik/traefik.yaml:/etc/traefik/traefik.yaml:ro"
      - "/var/docker/data/traefik/conf/:/etc/traefik/conf/:ro"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "/var/docker/data/traefik/certs/:/etc/traefik/certs/:rw"
    environment:
      TZ: "Europe/Berlin"
      LANG: "de_DE.UTF-8"
    security_opt:
      - no-new-privileges:true
    deploy:
      resources:
        limits:
          memory: 500M
    restart: unless-stopped

Share your full Traefik static and dynamic config.

Have you tested with the memory limit removed?


What do you mean by this?

the config does not handle the request

Your routers are all set up with PathPrefix and you request another path? Do the TLS certs for HTTPS already exist?

No, of course not!
The limit is 500M and the container uses 40M.
This is a test container, so no traffic other than mine is going through for these tests.

No router rule matches the requested URL; this was a test of how Traefik handles early-exit requests.

My traefik instance handles:

https://sub.domain.tld/shop*

I send a request to:

https://sub.domain.tld/

So the config would not give you more insight either, but let me post it here (censored):

http:

    routers:

        to-test-backend:
            rule: "Host(`sub.domain.tld`) && PathPrefix(`/shop`)"
            entryPoints:
                - "HTTPS"
            middlewares:
                - "test-whitelist"
            tls:
                certResolver: letsencrypt-tls-prod
            service: test-backend

        to-test-www-backend:
            rule: "Host(`www-sub.domain.tld`)"
            entryPoints:
                - "HTTPS"
            middlewares:
                - "test-whitelist"
                - "redirect-from-www"
            tls:
                certResolver: letsencrypt-tls-prod
            service: test-backend


    middlewares:

        test-whitelist:
            ipAllowList:
                sourceRange:
                    - "127.0.0.1/32"
                    - "192.168.1.7"
                    - "123.123.123.123"

        redirect-from-www:
            redirectRegex:
                regex: "^https://www-sub.domain.tld/(.*)"
                replacement: "https://sub.domain.tld/${1}"


    services:

        test-backend:
            loadBalancer:
                servers:
                - url: https://backend.domain.tld

    serversTransports:
        mytransport:
            maxIdleConnsPerHost: 100


tcp:

    serversTransports:
        mytransport:
            dialKeepAlive: 60s
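
For completeness: the mytransport serversTransport above is defined but not referenced anywhere, so it currently has no effect. If it were meant to apply to the service, it would have to be attached explicitly, e.g.:

```yaml
# Hypothetical: referencing the transport from the load balancer so it takes effect.
http:
    services:
        test-backend:
            loadBalancer:
                serversTransport: mytransport
                servers:
                - url: https://backend.domain.tld
```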

So to make it very clear:

  1. the route I requested does not exist!
  2. the request therefore does not need to wait for the service/backend to reply

Why does it still take 4 s to respond to the first request after a 5 min cooldown?

Also:

  • the service I am routing to is not on the same server; as mentioned above, it goes to my other server and serves a URL there. So no local Docker forwards/resolves, etc.
  • the domain/subdomain uses IPv4 and IPv6, while the Traefik Docker container uses network_mode: host.

I probably figured out what it is.
At least after several rounds of testing it has now disappeared.

My DNS Setup was like this:

sub.domain.tld =(A)=>    123.123.123.123
sub.domain.tld =(AAAA)=> 2a32:4df5:005f:19be:592c:bea1:dfc9:5cdb

(the IPs shown are not my actual IPs :wink: )
Both records point to my server.

In the firewall I allowed:

tcp :80  (HTTP)
tcp :443 (HTTPS)
udp :443 (HTTP/3)

since Traefik is using network_mode: host and serves IPv4 as well as IPv6.

As soon as I removed the IPv6 from my DNS:

sub.domain.tld =(AAAA)=> 2a32:4df5:005f:19be:592c:bea1:dfc9:5cdb

and made sure that it propagated, everything is fast now! Every time.
I consider this a bug.

Also:

before, curl preferred IPv6 (when both were available); now it has no choice and goes with IPv4, and it is immediately fast from the first request.

I also noticed that:

  • when I use IPv6 I get a HTTP/3 connection.
  • when I use IPv4 I just get a HTTP/2 connection.

Edit: this only seems to hold true in the browser (Chrome for me).
When testing with curl on the CLI:

IPv6:

curl -v -L -k --http3-only -w "Connect: %{time_connect} TTFB: %{time_starttransfer} Total time: %{time_total} \n" --resolve sub.domain.tld:443:2a32:4df5:005f:19be:592c:bea1:dfc9:5cdb https://sub.domain.tld

IPv4:

curl -v -L -k --http3-only -w "Connect: %{time_connect} TTFB: %{time_starttransfer} Total time: %{time_total} \n" --resolve sub.domain.tld:443:123.123.123.123 https://sub.domain.tld

Then both come back as HTTP/3.