Deploying Traefik via Helm - 404s for Dashboard and API

I'm trying to spin up Traefik on my Kubernetes 1.17 cluster using the Helm chart with the dashboard enabled, but I'm getting consistent 404s when I try to navigate to it in the browser.

I'm running on bare metal with MetalLB. I can confirm the requests are reaching the Traefik pod because I've enabled --accesslog and can see them arrive, but I still get 404s when I navigate to any of the following:

https://hostname/dashboard/
https://hostname/api/
https://hostname

Here is my overridden values.yaml:

additionalArguments:
- "--accesslog"
- "--api.insecure"
- "--api=true"
- "--api.dashboard=true"
- "--providers.file.filename=/config/dynamic.toml"

volumes:
- name: home-tls
  mountPath: "/certs"
  type: secret
- name: configs
  mountPath: "/config"
  type: configMap

persistence:
  enabled: false
  accessMode: ReadWriteOnce
  size: 200Mi
  # storageClass: ""
  path: /data
  annotations: {}
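
For reference, I haven't overridden the chart's ports: section (it isn't in my values above). As far as I can tell it defaults to something along these lines, so only web and websecure end up on the Service; the snippet below is from my reading of the chart, not my actual values, so double-check it against your chart version:

ports:
  traefik:
    port: 9000
    expose: false        # internal entrypoint, not added to the Service
  web:
    port: 8000
    expose: true
    exposedPort: 80
  websecure:
    port: 8443
    expose: true
    exposedPort: 443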

And the ConfigMap that gets mounted (it only manages my cert):

[[tls.certificates]]
  certFile = "/certs/tls.crt"
  keyFile = "/certs/tls.key"
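
For completeness, the ConfigMap wrapping that file is minimal; trimmed down it looks roughly like this (the name matches the configs volume from the values above, and the key matches --providers.file.filename):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configs          # referenced by the configs volume in values.yaml
  namespace: traefik
data:
  dynamic.toml: |
    [[tls.certificates]]
      certFile = "/certs/tls.crt"
      keyFile = "/certs/tls.key"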

Here is my IngressRoute:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  annotations:
    helm.sh/hook: post-install,post-upgrade
  creationTimestamp: "2020-04-02T16:32:36Z"
  generation: 1
  labels:
    app: traefik
    chart: traefik-7.1.0
    heritage: Helm
    release: traefik
  name: traefik-dashboard
  namespace: traefik
  resourceVersion: "139937"
  selfLink: /apis/traefik.containo.us/v1alpha1/namespaces/traefik/ingressroutes/traefik-dashboard
  uid: 81a1e2b2-5564-40b4-b478-6af132ed1b29
spec:
  entryPoints:
  - traefik
  routes:
  - kind: Rule
    match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
    services:
    - kind: TraefikService
      name: api@internal
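
If I understand the chart correctly, traefik in entryPoints above is the chart's internal entrypoint (container port 9000) rather than the web/websecure ones exposed on the LoadBalancer. For comparison, a variant bound to the external HTTPS entrypoint would look roughly like the sketch below; hostname is just a placeholder, and I haven't verified that this is the missing piece:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard-websecure
  namespace: traefik
spec:
  entryPoints:
    - websecure          # the entrypoint exposed on port 443 by the Service
  routes:
    - kind: Rule
      match: Host(`hostname`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
      services:
        - kind: TraefikService
          name: api@internal
  tls: {}                # terminate TLS using the cert loaded via the file provider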

And here is the Service that gets created:

apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/publicEndpoints: '[{"addresses":["192.168.10.240"],"port":80,"protocol":"TCP","serviceName":"traefik:traefik","allNodes":false},{"addresses":["192.168.10.240"],"port":443,"protocol":"TCP","serviceName":"traefik:traefik","allNodes":false}]'
  creationTimestamp: "2020-04-02T13:59:37Z"
  labels:
    app: traefik
    chart: traefik-7.1.0
    heritage: Helm
    release: traefik
  name: traefik
  namespace: traefik
  resourceVersion: "113042"
  selfLink: /api/v1/namespaces/traefik/services/traefik
  uid: f623c46a-fb83-480a-8f1d-569b2e5c32da
spec:
  clusterIP: 10.43.225.238
  externalTrafficPolicy: Cluster
  ports:
  - name: web
    nodePort: 31728
    port: 80
    protocol: TCP
    targetPort: web
  - name: websecure
    nodePort: 30945
    port: 443
    protocol: TCP
    targetPort: websecure
  selector:
    app: traefik
    release: traefik
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.168.10.240

I also noticed something in the Traefik pod logs: there's some process continuously pinging the pod and getting a 200 response:

192.168.10.10 - - [03/Apr/2020:00:23:35 +0000] "GET /ping HTTP/1.1" 200 2 "-" "-" 353 "ping@internal" "-" 0ms

However, when I go to https://hostname/ping in my browser, I don't see a 200 in the logs:

192.168.10.10 - - [03/Apr/2020:00:23:32 +0000] "GET /ping HTTP/2.0" - - "-" "-" 352 "-" "-" 0ms

There's also no 200 for the dashboard or the API:

192.168.10.10 - - [03/Apr/2020:00:27:08 +0000] "GET /dashboard/ HTTP/1.1" - - "-" "-" 397 "-" "-" 0ms
192.168.10.10 - - [03/Apr/2020:00:27:08 +0000] "GET /favicon.ico HTTP/1.1" - - "-" "-" 398 "-" "-" 0ms
192.168.10.10 - - [03/Apr/2020:00:27:52 +0000] "GET /api/ HTTP/1.1" - - "-" "-" 414 "-" "-" 0ms

Still at a loss.

Did you figure this one out? I'm running into the same issue.

For me, a Host rule combined with a PathPrefix fails. If I remove the PathPrefix, it works (non-SSL).
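
Roughly what I mean, with example.com as a placeholder for my real host:

# fails for me
routes:
  - kind: Rule
    match: Host(`example.com`) && PathPrefix(`/dashboard`)
    services:
      - kind: TraefikService
        name: api@internal

# works for me (plain HTTP, no path prefix)
routes:
  - kind: Rule
    match: Host(`example.com`)
    services:
      - kind: TraefikService
        name: api@internal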

Now I'm looking into a persistence issue on my end with the certs.