I'm running two Traefik instances outside a K3s cluster for high availability, using Keepalived to provide a floating VIP.
Architecture:

- Two external Traefik instances (active/active)
- Keepalived providing a shared VIP
- Traefik configured with `providers.kubernetesGateway`
- Gateway API used inside Kubernetes
- ExternalDNS reading Gateway status addresses
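For reference, the relevant part of my Traefik static configuration looks roughly like this (simplified sketch; hostnames and the VIP are placeholders, and the option names are from memory, so please correct me if I've misremembered them):

```yaml
# traefik.yml (static config, currently identical on both instances)
entryPoints:
  websecure:
    address: ":443"

providers:
  kubernetesGateway:
    endpoint: "https://k8s-api.example:6443"  # API server reachable from outside the cluster
    statusAddress:
      ip: "192.0.2.10"  # the Keepalived VIP, published in the Gateway status for ExternalDNS
```

Since both instances run this same provider block, both try to write the Gateway status, which is where the conflicts below come from.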
This works functionally, but when both Traefik instances have the Kubernetes Gateway provider enabled, I see race conditions where both instances attempt to update the Gateway status simultaneously. This results in frequent status updates and configuration churn; the logs are flooded with errors like the following:
```
2026-04-26T19:32:12+02:00 WRN Unable to update Gateway status error="failed to update Gateway traefik-system/traefik-gateway status: Operation cannot be fulfilled on gateways.gateway.networking.k8s.io "traefik-gateway": the object has been modified; please apply your changes to the latest version and try again" gateway=traefik-gateway namespace=traefik-system providerName=kubernetesgateway
```
If I disable the kubernetesGateway provider and only use TCP/TLS passthrough routing, everything becomes stable — but then the Gateway status is never updated, so ExternalDNS cannot determine the Gateway address.
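The stable-but-status-less fallback I mean is a static passthrough via the file provider, roughly like this (a sketch; the backend addresses are hypothetical node IPs):

```yaml
# dynamic config via the file provider: plain TCP/TLS passthrough,
# no Kubernetes watch and therefore no Gateway status writes
tcp:
  routers:
    k8s-passthrough:
      rule: "HostSNI(`*`)"
      service: k8s-backends
      tls:
        passthrough: true  # forward TLS untouched to the cluster
  services:
    k8s-backends:
      loadBalancer:
        servers:
          - address: "10.0.0.11:443"  # hypothetical node addresses
          - address: "10.0.0.12:443"
```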
My goal is to keep:

- HA Traefik outside the cluster (Keepalived VIP)
- Gateway API inside Kubernetes
- ExternalDNS integration
- Active/active edge proxies
Questions:

- Is running multiple external Traefik instances with `providers.kubernetesGateway` supported in active/active mode?
- Is there a recommended pattern for external HA Traefik that still allows Gateway status updates? I wasn't able to find much in the documentation about HA setups.
- Would it be valid to enable the Kubernetes provider on only one Traefik instance (an active controller) while keeping both instances handling traffic?
- Are there any best practices for Gateway status handling when the edge proxy runs outside the cluster?
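To make the "active controller" idea concrete, the asymmetric variant I have in mind would look roughly like this (a sketch only; the open question is whether this is a supported pattern, and how instance B would then receive its dynamic routing):

```yaml
# Instance A ("controller"): watches the cluster and writes Gateway status
providers:
  kubernetesGateway:
    statusAddress:
      ip: "192.0.2.10"  # the shared VIP, so ExternalDNS still resolves an address

# Instance B: identical entrypoints and traffic handling, but with
# providers.kubernetesGateway omitted entirely, so it never races
# against instance A on Gateway status updates
```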
I'm trying to achieve a cloud-like architecture where edge proxies are external to the cluster but still dynamically configured via Gateway API.
Not really Kubernetes-ingress related, but tags are required, and this was the closest tag I could identify