Installing with Helm - 404 on dashboard

I have installed Traefik 2.1.1 on an AWS EKS cluster using the latest version of the Helm chart. As far as I can tell, things are more or less running: a classic ELB was created on AWS across three AZs, and the pod seems to have started just fine.

This is what the pod looks like:

Name:           traefik-6d7859ff8d-k2v4p
Namespace:      acme
Priority:       0
Start Time:     Thu, 09 Jan 2020 10:51:33 +0100
Labels:         app=traefik
Annotations:    eks.privileged
Status:         Running
Controlled By:  ReplicaSet/traefik-6d7859ff8d
Containers:
  traefik:
    Container ID:  docker://7d5ef0c75b836ea03b090da455bdef33bf84f4cf86b5ba2451b4abe106047281
    Image:         traefik:2.1.1
    Image ID:      docker-pullable://traefik@sha256:a87b61f3254d03c4fcc0b994e2cb7af89abba8178f1fbec3bce3f4bdc080f8a6
    Ports:         9000/TCP, 8000/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    State:          Running
      Started:      Thu, 09 Jan 2020 10:51:34 +0100
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:9000/ping delay=10s timeout=2s period=10s #success=1 #failure=3
    Readiness:      http-get http://:9000/ping delay=10s timeout=2s period=10s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /var/run/secrets/ from traefik-token-zl6kj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  traefik-token-zl6kj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  traefik-token-zl6kj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     for 300s
                 for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  22m   default-scheduler  Successfully assigned mercury/traefik-6d7859ff8d-k2v4p to
  Normal  Pulled     22m   kubelet            Container image "traefik:2.1.1" already present on machine
  Normal  Created    22m   kubelet            Created container
  Normal  Started    22m   kubelet            Started container

This is the service:

Name:                     traefik
Namespace:                acme
Labels:                   app=traefik
Selector:                 app=traefik,release=traefik
Type:                     LoadBalancer
LoadBalancer Ingress:
Port:                     web  80/TCP
TargetPort:               web/TCP
NodePort:                 web  30916/TCP
Port:                     websecure  443/TCP
TargetPort:               websecure/TCP
NodePort:                 websecure  30998/TCP
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

As a first step, I am now trying to view the dashboard. Here is what I have tried so far:

  1. Use kubectl port-forward svc/traefik -n mynamespace 8080:443 to forward traffic to Traefik.
  2. Open https://localhost:8080/dashboard/ in a browser. I confirm that I want to proceed despite certificate issues and then get a 404.

Alternatively, I tried:

  1. kubectl port-forward svc/traefik -n mercury 8080:80
  2. http://localhost:8080/dashboard/
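Looking at the pod output above, ports 80/443 only map to the web/websecure entrypoints; the third port, 9000, is the "traefik" entrypoint that the ping probes hit, and as far as I understand that is where the chart serves the dashboard by default. If the chart defaults are in effect, forwarding that port should reach it; a sketch (deployment name and namespace are assumptions):

```shell
# Assumes the chart's default deployment name "traefik" in namespace "acme";
# port 9000 is the "traefik" entrypoint visible in the pod's port list above.
kubectl port-forward -n acme deploy/traefik 9000:9000

# Then browse to http://localhost:9000/dashboard/ (the trailing slash matters).
```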

I see this in my logs:

time="2020-01-09T09:53:31Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes

I have not made any changes other than changing the log level to DEBUG.
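(For reference, a log-level bump like that can be done entirely in values.yaml; this fragment is a sketch assuming the chart's additionalArguments passthrough, which forwards raw flags to the Traefik binary. --log.level is the Traefik v2 flag.)

```yaml
# values.yaml sketch (assumption: the chart passes additionalArguments
# verbatim to the Traefik binary).
additionalArguments:
  - "--log.level=DEBUG"
```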

How do I get this to work? I found some questions describing similar problems with solutions that contained a lot of configuration passed through labels to the service in docker compose files (e.g. Dashboard just not working). Am I wrong in assuming that the Helm chart should work out of the box? What steps do I need to take to get this up and running?

Any help is greatly appreciated.

Just noticed an error in my post (not in the actual setup): the namespaces are inconsistent. In real life they were not.


The same thing is happening to me. How did you solve it?


I did not. The lack of responses on this community actually caused me to give up on Traefik for the time being.

Hey, I had the same problem, but I managed to solve it. Maybe it's related to your issue?

When creating the dashboard Ingress, the ingress.class was missing. I found this by calling the /api endpoint directly from within the cluster, against the ClusterIP of the dashboard UI.
The response was: {"kubernetes": {}}
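The check itself was something like this, run from another pod in the cluster (the ClusterIP and API port here are placeholders):

```shell
# Placeholder ClusterIP and port; Traefik v2 serves its API under /api,
# e.g. /api/rawdata lists everything the providers have discovered.
curl -s http://10.0.0.123:8080/api/rawdata
```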

The easy fix was to add this annotation in my values.yaml:

    annotations:
      kubernetes.io/ingress.class: traefik-internal

    kubernetes:
      ingressClass: "traefik-internal"

Note: traefik-internal is the default name, so you do not need the kubernetes block; I've just put it there for clarity.
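For anyone not using plain Ingress objects: Traefik v2 can also expose the dashboard through an IngressRoute pointing at the built-in api@internal service. A sketch, where the namespace, entrypoint, and match rule are assumptions:

```yaml
# Sketch: expose the internal dashboard router via an IngressRoute.
# Namespace, entrypoint, and route match are assumptions.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: acme
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
```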

Hope it solves your issue!