"The WatchTree Channel is Closed" Error after Implementing Redis Provider Authentication

We have a Traefik proxy pod that connects to our Redis instance pods in the same Kubernetes cluster. However, after we implemented Redis auth, we are getting the following looping error in our Traefik proxy, which causes our configuration to reload:

    2024-11-14T13:50:51Z DBG github.com/traefik/traefik/v3/pkg/provider/kv/storewrapper.go:60 > WatchTree: traefik
    2024-11-14T13:50:51Z DBG github.com/traefik/traefik/v3/pkg/provider/kv/storewrapper.go:78 > List: traefik
    2024-11-14T13:50:51Z ERR github.com/traefik/traefik/v3/pkg/provider/kv/kv.go:127 > Provider error, retrying in 5.062710016s error="the WatchTree channel is closed" providerName=redis
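Our understanding is that Traefik's KV provider watches the store for changes, and for Redis the underlying store library subscribes to keyspace notifications over pub/sub. So our suspicion is that our restricted ACL user is missing a pub/sub or channel permission, which would make the subscription fail and the WatchTree channel close. Would an ACL along these lines be enough to test that theory? (Hypothetical sketch; the user name, password, and key pattern are placeholders for our setup:)

    # users.acl (hypothetical): give the traefik user read access to its
    # keys plus pub/sub access so the provider's watch can subscribe to
    # keyspace notification channels
    user traefik on ><pwd> ~traefik/* allchannels +@read +@pubsub

If that works, we would then tighten `allchannels` down to the specific keyspace notification channels.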

Our traefik.toml looks like the following:

    # enable the api
    [api]
    [accessLog]
    [ping]
    entrypoint = "ping"
    # the public port where traefik accepts http requests
    [entryPoints.http]
    address = ":{{ $.Values.proxy.service.internalPort }}"
    # the port on localhost where the traefik api should be found
    [entryPoints.auth_api]
    address = ":{{ $.Values.proxy.traefik.api.internalPort }}"
    [entryPoints.ping]
    address = ":{{ $.Values.proxy.traefik.probeInternalPort }}"
    [log]
    level = "DEBUG"

    [providers.redis]
    username = "<username>"
    password = "<pwd>"
    # the Redis address
    endpoints = ["nbt-redis-sentinel.{{ $.Release.Namespace }}.local-cluster.local-dc.com:{{ $.Values.proxy.redis.sentinel.internalPort }}"]
    # the prefix to use for the static configuration
    rootKey = "traefik"

    [providers.redis.sentinel]
    masterName = "redis-ha"
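One thing we're unsure about: the sentinels authenticate separately from the Redis master (Sentinel has its own `requirepass`). If Traefik also needs sentinel-level credentials, is something like this the right shape? (Placeholder values; option names as we read them from the Traefik v3 Redis provider docs:)

    [providers.redis.sentinel]
    masterName = "redis-ha"
    # credentials for the sentinel nodes themselves, if sentinel auth
    # is enabled; separate from the master username/password above
    username = "<sentinel-username>"
    password = "<sentinel-pwd>"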

redis version: 7.4.1
traefik version: 3.2.0
We found that Traefik works with Redis if we disable auth on Redis. Does anyone know why this might be?

Is this an issue on the redis or traefik side? Has anyone seen it before?

Maybe try asking at www.reddit.com/r/Traefik/; more people hang out there.
