"Uneven" loadbalancing of forwardAuth middleware requests

Hello,

I am running Traefik on a K8s cluster. For my ingress route, I have configured a forwardAuth middleware that points to my authentication service, which also runs inside the K8s cluster.
The configured address is the K8s-internal address (domain ending with svc.cluster.local). The authentication service is exposed as a ClusterIP service. This works fine as configured: every incoming request is properly authenticated.

I decided to do some load tests and configured an autoscaler for the authentication service (not for Traefik itself). It should scale up to three pods at a CPU threshold of 200 percent.
What I saw when putting the system under some load (~500 requests/sec) is that the authentication requests from Traefik are only being routed to one pod, while the other two sit idle.
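For context, the autoscaler is configured along these lines (a simplified sketch, not the exact manifest; names and values reconstructed from the setup described above):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: authentication-service
  namespace: user-management
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: authentication-service
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 200  # the 200 percent CPU threshold mentioned above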

To illustrate, here's the output of kubectl top (the metrics server is enabled on our cluster):

authentication-service-7d6446cd89-7w9g6   4m           103Mi
authentication-service-7d6446cd89-lns5n   4m           101Mi
authentication-service-7d6446cd89-srxx8   2671m        128Mi

At first, I suspected there was maybe something funky going on with the autoscaler. So instead of using an autoscaler, I scaled the authentication service to three pods statically (set replicas to 3 in its deployment config).
Still, the requests were only being routed to just one pod.
I then suspected that maybe K8s was not doing the load balancing (or round robin) properly. So I performed some HTTP requests from within the cluster against the K8s-internal address that is configured in the middleware, and those requests were properly distributed across the three pods (see the test pod sketch below).
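
This is roughly how I tested from inside the cluster (a throwaway pod; the pod name and curl image are just examples, not part of the actual deployment):

apiVersion: v1
kind: Pod
metadata:
  name: curl-test
  namespace: user-management
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl
      command: ["/bin/sh", "-c"]
      args:
        - |
          # hit the same cluster-internal address the middleware uses
          for i in $(seq 1 30); do
            curl -sk -o /dev/null -w "%{http_code}\n" https://authentication-service.user-management.svc.cluster.local/api/v1.0/openid/authenticate
          done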

So my questions are:

  1. Is Traefik maybe doing some service discovery and resolving the address configured in the middleware to the pod IPs itself, instead of just using the cluster IP?
  2. Is there some "session stickiness" (for lack of a better description) going on between Traefik and the authentication service pod that could explain the uneven load balancing?
  3. Is there a configuration option I'm not seeing that drives this behaviour?

Any insight is appreciated.

Version info:
Traefik 2.2.10
Kubernetes 1.19.9

Ingress route config:

kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik-external
  name: service-ingress
  namespace: user-management
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: >-
        Host(`service.mydomain.com`) && PathPrefix(`/app-service/api`)
      middlewares:
        - name: header-middleware-default
          namespace: user-management
        - name: auth-middleware
          namespace: user-management
        - name: app-service-rewrite
          namespace: user-management
      services:
        - kind: Service
          name: app-service
          namespace: user-management
          port: 80
  tls: {}

Auth middleware config:

kind: Middleware
apiVersion: traefik.containo.us/v1alpha1
metadata:
  name: auth-middleware
  namespace: user-management
spec:
  forwardAuth:
    address: https://authentication-service.user-management.svc.cluster.local/api/v1.0/openid/authenticate
    authResponseHeaders:
      - Authorization
      - X-TeamArea
      - X-Team-Area-Version
    tls:
      insecureSkipVerify: true
    trustForwardHeader: true

Service definition of the authentication service:

kind: Service
apiVersion: v1
metadata:
  name: authentication-service
  namespace: user-management
spec:
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
  type: ClusterIP
  selector:
    app.kubernetes.io/instance: user-management
    app.kubernetes.io/name: authentication-service

Hi,
Please share your use case, as I am facing the same issue.
Thanks,
Viral