Context
We’re running Traefik v3.5.3 with 3 replicas on our Kubernetes cluster, deployed with the official Traefik Helm chart. We’re seeing continuous error messages in the Traefik logs about updating ingress statuses.
Traefik is the only ingress controller running in the cluster.
Problem
Shortly after deployment, Traefik logs start filling up with messages like the following:
time="2025-10-21T11:23:06Z" level=error msg="Error while updating ingress status" error="failed to update ingress status chartmuseum/chartmuseum: Operation cannot be fulfilled on ingresses.networking.k8s.io \"chartmuseum\": the object has been modified; please apply your changes to the latest version and try again" ingress=chartmuseum namespace=chartmuseum providerName=kubernetes
time="2025-10-21T11:23:06Z" level=error msg="Error while updating ingress status" error="failed to update ingress status octopus-deploy/octopus-deploy: Operation cannot be fulfilled on ingresses.networking.k8s.io \"octopus-deploy\": the object has been modified; please apply your changes to the latest version and try again" ingress=octopus-deploy namespace=octopus-deploy providerName=kubernetes
time="2025-10-21T11:23:06Z" level=error msg="Error while updating ingress status" error="failed to update ingress status cattle-system/rancher: Operation cannot be fulfilled on ingresses.networking.k8s.io \"rancher\": the object has been modified; please apply your changes to the latest version and try again" ingress=rancher namespace=cattle-system providerName=kubernetes
This pattern repeats frequently across multiple ingresses and namespaces, flooding the logs with these messages.
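As far as we can tell, the message itself is the standard Kubernetes optimistic-concurrency conflict (HTTP 409): an update submitted with a stale resourceVersion is rejected, which is what we would expect if all three replicas try to write the same Ingress status at roughly the same time. To illustrate what the API server is doing (this is our own client-go sketch, not Traefik’s code; the annotation name is a placeholder), the same error can be produced like this:

```go
// conflict_sketch.go — our own illustration, NOT Traefik's code.
// Reproduces the "the object has been modified" conflict by updating an
// Ingress with a stale resourceVersion, then shows the usual retry pattern.
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns, name := "chartmuseum", "chartmuseum" // one of the ingresses from the logs above

	// Keep a copy with the current (soon to be stale) resourceVersion.
	stale, err := client.NetworkingV1().Ingresses(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Simulate another replica winning the race: bump the object so its
	// resourceVersion changes (placeholder annotation, illustration only).
	fresh := stale.DeepCopy()
	if fresh.Annotations == nil {
		fresh.Annotations = map[string]string{}
	}
	fresh.Annotations["example.com/touched"] = "yes"
	if _, err := client.NetworkingV1().Ingresses(ns).Update(ctx, fresh, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Writing the stale copy now fails with the exact error from the Traefik logs.
	_, err = client.NetworkingV1().Ingresses(ns).UpdateStatus(ctx, stale, metav1.UpdateOptions{})
	fmt.Println("conflict:", apierrors.IsConflict(err), "|", err)

	// Usual client-side remedy: re-read the latest object and retry the status write.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		latest, getErr := client.NetworkingV1().Ingresses(ns).Get(ctx, name, metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		// ...set latest.Status.LoadBalancer here...
		_, updErr := client.NetworkingV1().Ingresses(ns).UpdateStatus(ctx, latest, metav1.UpdateOptions{})
		return updErr
	})
	fmt.Println("retry result:", err)
}
```

The retry.RetryOnConflict block at the end is the usual client-side remedy: re-read the object and reapply the change against the latest resourceVersion.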
Traefik itself appears to be working, but we see frequent HTTP 504 Gateway Timeout responses from applications behind the Traefik proxy.
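For anyone who wants to check the same thing, here is a minimal sketch that tallies status codes from the access log on one replica (it assumes Traefik’s default CLF access-log format and the accesslog.filepath from the values below):

```go
// status_tally.go — minimal sketch (not part of Traefik) that counts HTTP status
// codes in a Traefik access log written in the default CLF format.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Same path as --accesslog.filepath in the Helm values below.
	f, err := os.Open("/var/log/traefik/access.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// In CLF the status code is the 3-digit field right after the quoted request,
	// e.g. ... "GET /foo HTTP/1.1" 504 21 ...
	statusRe := regexp.MustCompile(`" (\d{3}) `)
	counts := map[string]int{}

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow long log lines
	for sc.Scan() {
		if m := statusRe.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	fmt.Println(counts) // e.g. map[200:12345 404:12 504:87]
}
```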
Our Setup
- Traefik version: v3.5.3
- Kubernetes version: v1.33.5
- Providers enabled: kubernetesCRD and kubernetesIngress
Helm values
image:
  tag: v3.5.3
deployment:
  replicas: 3
  additionalContainers:
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:8.12.2
      args: ["-c", "/etc/filebeat.yml", "-e"]
      securityContext:
        runAsNonRoot: true
        runAsUser: 65532
        runAsGroup: 65532
      resources:
        limits:
          memory: 200Mi
        requests:
          cpu: 100m
          memory: 100Mi
      volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: filebeat-data
          mountPath: /usr/share/filebeat/data
        - name: logs
          mountPath: /var/log/traefik
          readOnly: true
  additionalVolumes:
    - name: filebeat-config
      configMap:
        defaultMode: 0644
        name: filebeat-config
    - name: filebeat-data
      emptyDir: {}
    - name: logs
      emptyDir: {}
additionalVolumeMounts:
  - name: logs
    mountPath: /var/log/traefik
providers:
  kubernetesCRD:
    enabled: true
    allowExternalNameServices: true
ports:
  web:
    nodePort: 30080
  websecure:
    nodePort: 30443
service:
  type: NodePort
additionalArguments:
  - "--accesslog.filepath=/var/log/traefik/access.log"
logs:
  general:
    level: ERROR
  access:
    enabled: true
Has anyone else experienced this behavior with Traefik 3.x and multiple replicas?
Any insights or workarounds would be appreciated.