Backend has too many servers configured, every other request fails

I have an Ingress configured with a single backend service which points to a single pod:

[...]
spec:
  rules:
    - host: [REDACTED]
      http:
        paths:
          - path: /issue-20-alternatives-a/
            backend:
              serviceName: review-app-issue-20-alternatives-a
              servicePort: 80
# kubectl describe ingress review-app-issue-20-alternatives-a
Name:             review-app-issue-20-alternatives-a
Address:          [REDACTED]
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                                          Path  Backends
  ----                                          ----  --------
  [REDACTED]
                                                /issue-20-alternatives-a/   review-app-issue-20-alternatives-a:80 (10.42.0.221:80)
[...]
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: review-app-issue-20-alternatives-a
  clusterIP: 10.43.187.63
  type: ClusterIP
  sessionAffinity: None
[...]
# kubectl describe service review-app-issue-20-alternatives-a
[...]
Selector:          app=review-app-issue-20-alternatives-a
Type:              ClusterIP
IP:                10.43.187.63
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.42.0.221:80
Session Affinity:  None
Events:            <none>

Every other request fails with "bad gateway".
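
For reference, a simple loop against the (redacted) host reproduces it; roughly every other request comes back as 502 (path taken from the Ingress above):

# for i in $(seq 1 10); do curl -s -o /dev/null -w "%{http_code}\n" http://[REDACTED]/issue-20-alternatives-a/; done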

I'm guessing it has to do with the fact that in the Traefik Dashboard there are two Servers listed:

[Screenshot: Traefik dashboard listing two servers for this backend]

10.42.0.221:80 is from the Kubernetes Service, but I don't know where 10.42.0.222:80 comes from or how to get rid of it.
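
To check whether 10.42.0.222 still exists anywhere on the Kubernetes side (the Endpoints shown above only list .221), something like this can be used:

# kubectl get endpoints review-app-issue-20-alternatives-a -o yaml
# kubectl get pods --all-namespaces -o wide | grep 10.42.0.222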

This is the Traefik Helm chart that ships with k3s; the dashboard reports version V1.7.19 / MAROILLES.
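
(Deployment name and namespace below are assumed from a default k3s install; this is how the running image/version can be confirmed:)

# kubectl -n kube-system get deployment traefik -o jsonpath='{.spec.template.spec.containers[0].image}'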

The Traefik log contains nothing but lots of entries like these:

[...]
{"level":"info","msg":"Server configuration reloaded on :8080","time":"2020-07-07T12:55:31Z"}
{"level":"info","msg":"Server configuration reloaded on :9100","time":"2020-07-07T12:55:31Z"}
{"level":"info","msg":"Server configuration reloaded on :80","time":"2020-07-07T12:55:31Z"}
{"level":"info","msg":"Server configuration reloaded on :443","time":"2020-07-07T12:55:31Z"}
{"level":"info","msg":"Skipping same configuration for provider kubernetes","time":"2020-07-07T12:55:31Z"}
{"level":"info","msg":"Server configuration reloaded on :8080","time":"2020-07-07T12:55:33Z"}
{"level":"info","msg":"Server configuration reloaded on :9100","time":"2020-07-07T12:55:33Z"}
{"level":"info","msg":"Server configuration reloaded on :80","time":"2020-07-07T12:55:33Z"}
{"level":"info","msg":"Server configuration reloaded on :443","time":"2020-07-07T12:55:33Z"}

My usual workaround is to update an annotation on the Deployment, which rolls out a new pod and deletes the old one. After that, the backend in Traefik is usually fixed and lists only one server.
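
Concretely, the workaround is something like the following (the annotation key "redeployed-at" is just an example; patching the pod template triggers a rolling replacement of the pod):

# kubectl patch deployment review-app-issue-20-alternatives-a \
    -p '{"spec":{"template":{"metadata":{"annotations":{"redeployed-at":"'"$(date -Iseconds)"'"}}}}}'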