It works very well with Ingress objects (defined in another namespace “preproduction”).
But if I add the following to values.yaml to balance the load across 2 pods:
deployment:
  replicas: 2
I get the following errors in the Traefik pods' logs:
2025-10-07T13:52:26+02:00 ERR Error while updating ingress status error="failed to update ingress status preproduction/myingress: Operation cannot be fulfilled on ingresses.networking.k8s.io "myingress": the object has been modified; please apply your changes to the latest version and try again" ingress=myingress namespace=preproduction providerName=kubernetes
(it sometimes succeeds, logging Updated Ingress status ingress=myingress namespace=preproduction)
However, the Ingress seems to work fine, and my workloads are accessible through Traefik Ingress.
It’s as if both pods are fighting with each other to update the same Ingress objects.
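That would be consistent with Kubernetes’ optimistic concurrency: each pod reads the Ingress, then writes the status back together with the resourceVersion it read, and the API server rejects the write if the other pod modified the object in between, hence the “the object has been modified” error. What each pod writes is only the published address in the Ingress status, roughly like this (illustrative sketch, the IP is a placeholder):
# fragment of the Ingress object, written back by the controller
status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10  # placeholder for the Traefik service's external IP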
I’ve tried manually scaling the deployment to 1, waiting for the remaining pod to finish its updates, then scaling back to 2, hoping the second pod would start from a clean state. It seems to work for a few seconds, but then the errors come back on both pods, so it does not help.
These errors appear exactly once per minute for each namespace that contains an Ingress. The different namespaces do not log at the same moment.
Example when I scale Traefik down to 1:
2025-10-13T13:42:01+02:00 INF Updated Ingress status ingress=myingress1 namespace=preproduction
2025-10-13T13:42:01+02:00 INF Updated Ingress status ingress=myingress2 namespace=preproduction
2025-10-13T13:42:09+02:00 INF Updated Ingress status ingress=myingress1 namespace=othernamespace
2025-10-13T13:42:09+02:00 INF Updated Ingress status ingress=myingress2 namespace=othernamespace
Why does Traefik want to update my Ingresses every minute?
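For what it’s worth, as far as I understand, Traefik only tries to write Ingress status at all when a published service is configured, which in the official Helm chart looks roughly like this (a sketch based on the chart’s documented values):
providers:
  kubernetesIngress:
    publishedService:
      enabled: true  # Traefik copies its service address into each Ingress status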
Note that there is another Traefik instance in the cluster, but it’s configured to handle only Ingresses inside its own namespace (“monitoring”).
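That restriction looks roughly like this (a sketch, assuming the official Traefik Helm chart values; the second instance’s actual configuration may differ):
providers:
  kubernetesIngress:
    namespaces:
      - monitoring  # only watch Ingress objects in this namespace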
I have another ingress implementation (nginx) in the cluster. When I scale it down to zero, the symptom disappears.
So I suppose this nginx ingress controller updates the status of Ingress objects that are configured with ingressClass traefik, and Traefik then re-applies its own address on each sync, which would explain the once-per-minute pattern.
Why do I have 2 ingress implementations? Because I’m in the middle of a transition from nginx to Traefik for Ingress handling. Keeping both of them temporarily allows a quick rollback to nginx in case of…
So I simply need to get rid of the nginx ingress to solve this.
NB: in another context, it would certainly be possible to configure the nginx ingress controller not to handle Ingress objects that do not carry its ingressClass; for example:
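With the ingress-nginx Helm chart, something like this should restrict it (a sketch; key names taken from the chart’s documented values, verify against your chart version):
controller:
  ingressClassResource:
    name: nginx
    controllerValue: "k8s.io/ingress-nginx"
  watchIngressWithoutClass: false  # ignore Ingress objects with no ingressClassName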