We are running Traefik 2.6.2 in Docker Swarm. The application behind the Traefik reverse proxy is deployed with the following stack configuration:
version: "3.7"
services:
web:
image: blog:0.0.1
networks:
- traefik-net
deploy:
replicas: 1
labels:
# enable Traefik for this service
- "traefik.enable=false"
- "traefik.docker.network=traefik-net"
# router
- "traefik.http.routers.blog.rule=Host(`blog.company.com`)"
- "traefik.http.routers.blog.entrypoints=web-secure"
- "traefik.http.routers.blog.tls=true"
- "traefik.http.routers.blog.service=blog"
# service
- "traefik.http.services.blog.loadbalancer.server.port=443"
- "traefik.http.services.blog.loadbalancer.server.scheme=https"
- "traefik.http.services.blog.loadbalancer.healthCheck.path=/health"
- "traefik.http.services.blog.loadbalancer.healthCheck.interval=30s"
- "traefik.http.services.blog.loadbalancer.healthCheck.timeout=3s"
networks:
traefik-net:
external: true
name: traefik-net
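These labels assume that the web-secure entrypoint and the Swarm provider are defined in Traefik's static configuration. A minimal sketch of that part (the address and flags here are illustrative, not necessarily our exact setup):

entryPoints:
  web-secure:
    address: ":443"  # TLS entrypoint referenced by the router labels above

providers:
  docker:
    swarmMode: true          # read labels from Swarm services, not standalone containers
    network: traefik-net     # network Traefik uses to reach the service tasks
    exposedByDefault: false  # only services with traefik.enable=true are picked up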
We also have another, low-priority router that listens on the same host and entrypoint as the original blog service. It is meant to catch requests only when the application is not deployed, as a kind of fallback to a static error page:
http:
  routers:
    blog-fallback:
      rule: "Host(`blog.company.com`)"
      priority: 1
      middlewares:
        - "redirect-to-generic-error-page"
      tls: {}
      service: "noop@internal"
  middlewares:
    redirect-to-generic-error-page:
      redirectRegex:
        regex: "(.*)"
        replacement: "https://www.company.com/under-maintenance.html?returnURL=${1}"
        permanent: false
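This fallback router lives in a dynamic configuration file, so it has to be loaded through Traefik's file provider alongside the Swarm provider. A sketch of that part of the static configuration (the directory path is illustrative):

providers:
  file:
    directory: /etc/traefik/dynamic  # contains the blog-fallback dynamic config above
    watch: true                      # reload on changes without restarting Traefik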
Is it possible for the low-priority blog-fallback router to take precedence over the blog router when all of the backends of the blog service are failing Traefik health checks?