Can't get Komodo working via Traefik

I'm trying to get Komodo working via Traefik (both running in Docker), but so far I only get "504 Gateway Timeout".

Can someone provide some support or a hint to get this working? I would appreciate the help.

Komodo works fine if I access it directly on the Docker host IP or the Docker host's FQDN: both http://192.168.x.a:9120 and http://<docker-host-fqdn>:9120 work.

I have Traefik running on a macvlan network and assigned it its own IP address, 192.168.x.b.

In my DNS server I pointed komodo.web.domain.tld to the Traefik IP address 192.168.x.b. I can see that I am routed to Traefik (for example, the certificate served via Traefik is recognized).

That seems to indicate that Traefik is not able to route traffic to the Komodo IP. Is that because Traefik is located on the macvlan and Komodo isn't?

If that (incorrect networking) is the reason, how do I fix it?

  1. Should I add/use the macvlan subnet for Komodo as well (that would be my preference)? If I do so, Komodo stops working (I assume it can no longer reach MongoDB). How could/should I make this work?

or

  2. Let Docker assign the Komodo IP address (172.26.0.4), as it does now, and add a reference to the macvlan subnet that I use for Traefik and other services. If I do so, Komodo is again not working.
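For the second option, my understanding is that Traefik itself must also be attached to the bridge network that Komodo lives on, otherwise it has no route to 172.26.0.4. A minimal sketch of the Traefik side, assuming the Komodo stack's bridge network is named `komodo_backend` (the real name, e.g. `komodo01_default`, depends on your Compose project name, so treat these names as placeholders):

```yaml
services:
  traefik:
    image: traefik:v3
    networks:
      - macvlan01        # keeps Traefik's own 192.168.x.b LAN address
      - komodo_backend   # gives Traefik a route to Komodo's 172.26.x.x bridge IP

networks:
  macvlan01:
    external: true
  komodo_backend:
    external: true       # placeholder name; check `docker network ls` for the real one
```

With both networks attached, the `traefik.docker.network` label on the Komodo service should then point at the shared bridge network rather than at `macvlan01`.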

I'm a little lost about what to do to make this work :frowning:

I've included some logging and the Docker Compose file (with labels) below:

I hope that someone can shine a light on this, so I can make Komodo accessible via Traefik. That would be nice.

Traefik debug log:
{"level":"debug","time":"2025-11-15T14:57:38+01:00","caller":"github.com/traefik/traefik/v3/pkg/server/service/loadbalancer/wrr/wrr.go:176","message":"Service selected by WRR: http://172.26.0.4:9120"}
{"level":"debug","error":"dial tcp 172.26.0.4:9120: i/o timeout","time":"2025-11-15T14:58:08+01:00","caller":"github.com/traefik/traefik/v3/pkg/proxy/httputil/proxy.go:121","message":"504 Gateway Timeout"}
{"level":"debug","time":"2025-11-15T14:58:09+01:00","caller":"github.com/traefik/traefik/v3/pkg/server/service/loadbalancer/wrr/wrr.go:176","message":"Service selected by WRR: http://172.26.0.4:9120"}
{"level":"debug","error":"dial tcp 172.26.0.4:9120: i/o timeout","time":"2025-11-15T14:58:39+01:00","caller":"github.com/traefik/traefik/v3/pkg/proxy/httputil/proxy.go:121","message":"504 Gateway Timeout"}
Komodo Docker Compose file
################################
#    KOMODO COMPOSE - MONGO    #
################################

## This compose file will deploy:
##   1. MongoDB
##   2. Komodo Core
##   3. Komodo Periphery

## Docker hub URLs:
## https://hub.docker.com/r/moghtech/komodo-core/tags
## https://hub.docker.com/r/moghtech/komodo-periphery
## https://hub.docker.com/_/mongo/tags

services:
  mongo:
    image: mongo
    command: --quiet --wiredTigerCacheSizeGB 0.25
    restart: unless-stopped
    # ports:
    #   - 27017:27017
    volumes:
      - /srv/docker/swarm/stacks/mngt/komodo01/db:/data/db
      - /srv/docker/swarm/stacks/mngt/komodo01/config:/data/configdb
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${KOMODO_DB_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${KOMODO_DB_PASSWORD}
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers
  
  core:
    image: ghcr.io/moghtech/komodo-core:${COMPOSE_KOMODO_IMAGE_TAG:-latest}
    restart: unless-stopped
    depends_on:
      - mongo
    ports:
      - 9120:9120
    env_file: /srv/docker/swarm/stacks/mngt/komodo01/komodo/compose.env
    environment:
      KOMODO_DATABASE_ADDRESS: mongo:27017
      KOMODO_DATABASE_USERNAME: ${KOMODO_DB_USERNAME}
      KOMODO_DATABASE_PASSWORD: ${KOMODO_DB_PASSWORD}
    volumes:
      ## Store dated backups of the database - https://komo.do/docs/setup/backup
      - ${COMPOSE_KOMODO_BACKUPS_PATH}:/backups
      ## Store sync files on server
      # - /path/to/syncs:/syncs
      ## Optionally mount a custom core.config.toml
      # - /path/to/core.config.toml:/config/config.toml
    ## Allows for systemd Periphery connection at 
    ## "https://host.docker.internal:8120"
    # extra_hosts:
    #   - host.docker.internal:host-gateway
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers

      traefik.enable: "true"
      traefik.docker.network: "macvlan01"

      # Regular access zone local/blue
      traefik.http.routers.komodo.entrypoints: "http"
      traefik.http.routers.komodo.rule: "Host(`komodo01.web.domain.tld`)"
      traefik.http.routers.komodo.middlewares: "redirect-http-to-https-permanent@file"

      # Protected zone local/blue
      traefik.http.routers.komodo-secure.entrypoints: "https"
      traefik.http.routers.komodo-secure.rule: "Host(`komodo01.web.domain.tld`)"
      traefik.http.routers.komodo-secure.tls: "true"
      traefik.http.routers.komodo-secure.service: "komodo01"

      # Komodo service
      traefik.http.services.komodo01.loadBalancer.server.port: "9120"


  ## Deploy Periphery container using this block,
  ## or deploy the Periphery binary with systemd using 
  ## https://github.com/moghtech/komodo/tree/main/scripts
  periphery:
    image: ghcr.io/moghtech/komodo-periphery:${COMPOSE_KOMODO_IMAGE_TAG:-latest}
    restart: unless-stopped
    env_file: /srv/docker/swarm/stacks/mngt/komodo01/komodo/compose.env
    volumes:
      ## Mount external docker socket
      - /var/run/docker.sock:/var/run/docker.sock
      ## Allow Periphery to see processes outside of container
      - /proc:/proc
      ## Specify the Periphery agent root directory.
      ## Must be the same inside and outside the container,
      ## or docker will get confused. See https://github.com/moghtech/komodo/discussions/180.
      ## Default: /etc/komodo.
      - ${PERIPHERY_ROOT_DIRECTORY:-/etc/komodo}:${PERIPHERY_ROOT_DIRECTORY:-/etc/komodo}
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers

networks:
  macvlan01:
    external: true

When I put all containers in the "macvlan01" network, I can access Komodo via Traefik :slight_smile:

If anyone has a suggestion for how I can put the MongoDB container in a regular Docker network while keeping the Komodo core container in the macvlan01 network, I would be glad to hear it.

You can attach a Docker container to multiple networks.
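A minimal sketch of what that could look like for the question above, assuming a stack-internal network named `backend` (the name is an assumption, not from the original file). The core service joins both networks; MongoDB stays internal, and `KOMODO_DATABASE_ADDRESS: mongo:27017` keeps working because Compose DNS resolves `mongo` on the shared network:

```yaml
services:
  mongo:
    image: mongo
    networks:
      - backend          # internal only; not exposed on the LAN

  core:
    image: ghcr.io/moghtech/komodo-core:latest
    networks:
      - backend          # reaches mongo by its service name
      - macvlan01        # gets its own LAN IP, reachable by Traefik

networks:
  backend:               # assumed name; a normal bridge network for this stack
  macvlan01:
    external: true
```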

I'll have a look at that. It requires some experimenting to get that to work.

Why do you use macvlan anyway? All my services are in Docker, all on Docker networks; only Traefik publishes ports.

bluepuma77, thanks for your continued support.

By attaching a network, do you mean a configuration like this (Docker Compose config)?

networks:
    backend:
    frontend:

I think I use macvlan for multiple reasons. When I started with Docker (more intensively earlier this year) I discovered that I should run my own DNS, and I opted for Pihole. But one Pihole is not sufficient, so to make it highly available I configured a second Pihole on another host. I want the published DNS server IP address to be independent of the host Pihole is running on, and that is where macvlan comes in handy. At the same time I use keepalived so that there is really only one DNS server IP, whichever of the two Piholes is active. This works quite well.

I also like that with macvlan all ports are published: no duplicated ports, since each container has its own IP with all ports available.

I like being able to access, for example, Traefik on its own IP address and not on the IP of the host.

I would also like the various services to remain available when, for example, a container is moved from one host to another. Without macvlan the Traefik IP would change (from host-x to host-y), while with macvlan the assigned IP address travels with Traefik.

On the other hand, the same may be accomplished with Docker Swarm, but I have not configured that. That is still to be investigated and studied by me...

I have an (externally managed) load balancer in front of my Traefik servers. Traefik and all target services run on multiple servers running Docker Swarm.

Traefik is pinned to a few nodes, which the load balancer targets. Only Traefik publishes ports.

Traefik and web services are connected to Docker network "proxy", web services and database cluster are connected to "database".
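The two-network pattern described above could be sketched roughly like this in Swarm mode. All image names, the node label, and the database service are placeholders I made up for illustration, not details from the actual setup:

```yaml
services:
  traefik:
    image: traefik:v3
    ports:
      - 80:80
      - 443:443          # only Traefik publishes ports
    networks:
      - proxy
    deploy:
      placement:
        constraints:
          - node.labels.traefik == true   # pin Traefik to a few nodes (assumed label)

  webapp:
    image: example/webapp  # placeholder image
    networks:
      - proxy            # reachable by Traefik
      - database         # can reach the database cluster

  db:
    image: postgres:16   # placeholder; the post only says "database cluster"
    networks:
      - database

networks:
  proxy:
    driver: overlay      # spans Swarm nodes
  database:
    driver: overlay
```

Services never need to know each other's IPs; they resolve one another by service name on the shared overlay network.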

And I also have an external DNS service.