Hello,
First of all, thank you for this awesome piece of software, which I have been using for several years.
I have a problem where I cannot reach the Prometheus metrics endpoint, which is enabled by default in the Helm chart.
This is what I see in the logs with kubectl -n traefik-ingress logs -f -l app.kubernetes.io/name=traefik:
2024-08-21T23:43:14Z ERR Error while updating ingress status error="failed to update ingress status monitoring/grafana: Ingress.networking.k8s.io \"grafana\" is invalid: status.loadBalancer.ingress[0].ip: Invalid value: \"--entryPoints.metrics.address=:9100/tcp\": must be a valid IP address" ingress=grafana namespace=monitoring providerName=kubernetes
2024-08-21T23:43:17Z ERR Error while updating ingress status error="failed to update ingress status argocd/argocd-server: Ingress.networking.k8s.io \"argocd-server\" is invalid: status.loadBalancer.ingress[0].ip: Invalid value: \"--entryPoints.metrics.address=:9100/tcp\": must be a valid IP address" ingress=argocd-server namespace=argocd providerName=kubernetes
2024-08-21T23:43:17Z ERR Error while updating ingress status error="failed to update ingress status monitoring/grafana: Ingress.networking.k8s.io \"grafana\" is invalid: status.loadBalancer.ingress[0].ip: Invalid value: \"--entryPoints.metrics.address=:9100/tcp\": must be a valid IP address" ingress=grafana namespace=monitoring providerName=kubernetes
2024-08-21T23:43:17Z ERR Error while updating ingress status error="failed to update ingress status argocd/argocd-server: Ingress.networking.k8s.io \"argocd-server\" is invalid: status.loadBalancer.ingress[0].ip: Invalid value: \"--entryPoints.metrics.address=:9100/tcp\": must be a valid IP address" ingress=argocd-server namespace=argocd providerName=kubernetes
2024-08-21T23:43:17Z ERR Error while updating ingress status error="failed to update ingress status monitoring/grafana: Ingress.networking.k8s.io \"grafana\" is invalid: status.loadBalancer.ingress[0].ip: Invalid value: \"--entryPoints.metrics.address=:9100/tcp\": must be a valid IP address" ingress=grafana namespace=monitoring providerName=kubernetes
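If it helps, I can also check what status those Ingress objects actually ended up with (names and namespaces taken from the log lines above):
-> % kubectl -n monitoring get ingress grafana -o jsonpath='{.status.loadBalancer}'
-> % kubectl -n argocd get ingress argocd-server -o jsonpath='{.status.loadBalancer}'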
These are the args on the deployment (a way to re-check them live is right after the list):
- args:
  - --api.insecure=true
  - --providers.kubernetesingress.ingressendpoint.ip
  - --entryPoints.metrics.address=:9100/tcp
  - --entryPoints.traefik.address=:9000/tcp
  - --entryPoints.web.address=:8000/tcp
  - --entryPoints.websecure.address=:8443/tcp
  - --api.dashboard=true
  - --ping=true
  - --metrics.prometheus=true
  - --metrics.prometheus.entrypoint=metrics
  - --providers.kubernetescrd
  - --providers.kubernetescrd.allowExternalNameServices=true
  - --providers.kubernetesingress
  - --entryPoints.websecure.http.tls=true
  - --log.level=INFO
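To double-check, the same args can be dumped straight from the live Deployment (name as in my install):
-> % kubectl -n traefik-ingress get deploy traefik-ingress -o jsonpath='{.spec.template.spec.containers[0].args}' | tr ',' '\n'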
But when I try to reach the port I get this:
-> % kubectl -n traefik-ingress port-forward deployment/traefik-ingress 9100:9100
Forwarding from 127.0.0.1:9100 -> 9100
Forwarding from [::1]:9100 -> 9100
Handling connection for 9100
E0822 00:49:59.545482 3696122 portforward.go:413] an error occurred forwarding 9100 -> 9100: error forwarding port 9100 to pod a37c926c9625c6a70b417ca48032beda729ebd61460df8d93ecd3b72579d5c72, uid : failed to execute portforward in network namespace "/var/run/netns/cni-e4766662-d438-069d-6d75-36d0ec32f08c": failed to connect to localhost:9100 inside namespace "a37c926c9625c6a70b417ca48032beda729ebd61460df8d93ecd3b72579d5c72", IPv4: dial tcp4 127.0.0.1:9100: connect: connection refused IPv6 dial tcp6: address localhost: no suitable address found
error: lost connection to pod
Even the dashboard complains that the port is unreachable.
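If it is useful, here are the extra checks I can run to narrow it down: confirm the process answers on the traefik entrypoint at all (ping and the API are both enabled in my args), ask the API which entrypoints it actually created, and look for a listener on 9100 inside the container (the exec commands assume the busybox tools of the official Alpine-based image are present):
-> % kubectl -n traefik-ingress port-forward deployment/traefik-ingress 9000:9000 &
-> % curl -s http://127.0.0.1:9000/ping                 # expect OK if the process is healthy
-> % curl -s http://127.0.0.1:9000/api/entrypoints      # should include an entrypoint named "metrics" if it was created
-> % kubectl -n traefik-ingress exec deploy/traefik-ingress -- netstat -tln              # assumes busybox netstat in the image
-> % kubectl -n traefik-ingress exec deploy/traefik-ingress -- wget -qO- http://localhost:9100/metrics   # assumes busybox wget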
I am using Traefik version v3.1.2 and chart version traefik-30.1.0.
Here are my values (a cross-check of the rendered ports is right after the dump):
-> % helm -n traefik-ingress get values traefik-ingress
USER-SUPPLIED VALUES:
deployment:
  autoscaling:
    enabled: true
    maxReplicas: 5
    minReplicas: 2
  replicas: 2
globalArguments:
- --api.insecure=true
- --providers.kubernetesingress.ingressendpoint.ip
ingressClass:
  enabled: true
  fallbackApiVersion: v1
  isDefaultClass: true
ingressRoute:
  dashboard:
    enabled: true
providers:
  kubernetesCRD:
    allowExternalNameServices: true
service:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
serviceAccountAnnotations:
  eks.amazonaws.com/role-arn: arn:aws:iam::0000000000:role/eks-lb-controller
tolerations:
- effect: NoSchedule
  key: dedicated
  operator: Equal
  value: ingress
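And the cross-check mentioned above, to see whether the chart actually rendered a 9100 port on the pod and on the Service (resource names as in my install):
-> % kubectl -n traefik-ingress get deploy traefik-ingress -o jsonpath='{.spec.template.spec.containers[0].ports}'
-> % kubectl -n traefik-ingress get svc traefik-ingress -o jsonpath='{.spec.ports}'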
How can I further diagnose this?