Why is pre-compression slower than on-the-fly compression?

TL;DR
I pre‑compressed my Nuxt app’s JS and CSS chunks with gzip and Brotli and serve them from MinIO via Traefik v3.3.5. Benchmarks (browser TTFB/total time and a bash+curl script) show that on‑the‑fly compression is actually faster than serving the pre‑compressed files—even though the latter are smaller. I’ve tweaked Traefik’s responseForwarding.flushInterval without effect. Why would pre‑compression be slower than real‑time compression under this setup?

Hello everyone :slightly_smiling_face:,

Traefik Version: 3.3.5 (as reverse proxy)
Environment: Docker
Backend Storage: MinIO (for serving static assets)

I have a Nuxt application whose CSS and JS chunks live in MinIO. By default I use Traefik’s compress middleware to compress on the fly, but I wondered:

Why compress the same files repeatedly for each request when I could pre‑compress them once?

So I did just that—compressed all JS and CSS files maximally with both gzip and Brotli, and saved the resulting .gz and .br files in MinIO.
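For reference, the pre-compression step looked roughly like this (a sketch; the directory layout and the `precompress_dir` helper name are my own, and the `name.gz.ext` / `name.br.ext` naming matches the `$1.br.$2` replacement used in the middleware below):

```shell
#!/bin/sh
# Sketch: for every .js/.css file under the given directory, write a
# maximally compressed file.gz.ext and file.br.ext next to the original.
precompress_dir() {
  dir="$1"
  find "$dir" -type f \( -name '*.js' -o -name '*.css' \) | while read -r f; do
    base="${f%.*}"   # path without extension
    ext="${f##*.}"   # js or css
    gzip -9 -c "$f" > "$base.gz.$ext"
    # brotli may not be installed everywhere; skip quietly if missing
    command -v brotli >/dev/null 2>&1 && brotli -q 11 -c "$f" > "$base.br.$ext"
  done
}
```

The resulting `.gz.*`/`.br.*` files then go into the same MinIO bucket alongside the originals.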

To let Traefik serve the correct file based on the client’s Accept-Encoding, I created separate routers and middlewares:

Router rule for brotli:

Host(`${STACK_DOMAIN}`) && PathRegexp(`^/_nuxt/`) && PathRegexp(`\.(js|css)$`) && HeaderRegexp(`Accept-Encoding`,`(?i)br`)

Middlewares for brotli:

    replace-brotli:
      replacePathRegex:
        regex: "^(.*)\\.(js|css)$"
        replacement: "$1.br.$2"

    set-brotli-header:
      headers:
        customResponseHeaders:
          Content-Encoding: br
          Vary: Accept-Encoding

    serve-brotli:
      chain:
        middlewares:
          - replace-brotli
          - set-brotli-header

(Side Note: I left the original .js/.css extension in $1.br.$2 so Traefik (and MinIO) can correctly infer and set the Content-Type.)
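For completeness, the gzip counterpart presumably mirrors the brotli config one-to-one (a sketch in the same style as above; the middleware names are my own, and the gzip router would use `HeaderRegexp(`Accept-Encoding`,`(?i)gzip`)` in its rule):

```yaml
    replace-gzip:
      replacePathRegex:
        regex: "^(.*)\\.(js|css)$"
        replacement: "$1.gz.$2"

    set-gzip-header:
      headers:
        customResponseHeaders:
          Content-Encoding: gzip
          Vary: Accept-Encoding

    serve-gzip:
      chain:
        middlewares:
          - replace-gzip
          - set-gzip-header
```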

I ran two parallel test environments:

  1. Pre‑compressed (using the .br/.gz files + custom routers/middlewares)
  2. On‑the‑fly (Traefik’s compress middleware, no pre‑compressed files)

Results:

  • Browser tests (Chrome & Firefox): I wrote a small JS helper that logs TTFB and total download times for each asset. Surprisingly, the on‑the‑fly–compressed files consistently loaded faster.
  • Automated tests (bash + curl): Fetching 20 files, 100 times each, the pre‑compressed setup was faster in pure server response time—yet browsers still experienced slower load times overall.
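The curl side of the benchmark boiled down to something like this (a sketch, not my exact script; the URL is a placeholder and `time_asset` is a name I made up here):

```shell
#!/bin/sh
# Measure server response time for one asset via curl's write-out variables:
# time to first byte, total time, and downloaded size.
time_asset() {
  url="$1"; encoding="$2"
  curl -s -o /dev/null \
       -H "Accept-Encoding: $encoding" \
       -w '%{time_starttransfer} %{time_total} %{size_download}\n' \
       "$url"
}

# Example: 100 runs per file, as in the test described above.
# for i in $(seq 1 100); do time_asset "https://example.com/_nuxt/app.js" br; done
```

Running it once per setup (pre-compressed routers vs. the `compress` middleware) with the same `Accept-Encoding` header is what produced the server-side numbers above.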

I also discovered the Traefik setting:

loadBalancer:
  responseForwarding:
    flushInterval: -1ms

but reducing flushInterval didn’t change client‐side performance.

My current hypothesis is that under light “test‐system” load, the CPU overhead of real‑time compression is negligible, and Traefik may be streaming content more efficiently than serving static compressed files via MinIO. Perhaps under high load the pre‑compression approach would win out, but I haven’t reproduced production‐scale traffic yet.

Questions:

  • Why might pre‑compressed assets load slower in the browser compared to on‑the‑fly compression?
  • Are there Traefik or MinIO configuration tweaks I’m missing that would optimize static delivery of .br/.gz files?
  • Has anyone observed similar behaviors in their own benchmarks?

Thanks in advance for any insights!


Did you share this on Traefik Github?

No, I thought this was the better place rather than a GitHub issue. It seems you have a lot of experience contributing to Traefik topics, so if you think I should open an issue on this, I will do that.

For feedback from the devs, I think GitHub is the better place.

Hm, I tried, but since it is neither a feature request nor a bug report, I will wait another week before opening a GitHub issue.

Posted on GitHub now:
