Gateway timeout on some ingresses

So as my title states, I'm getting a gateway timeout on some apps.
I configured ingress for many apps, including the whoami sample app, argocd, jenkins, grafana, the traefik-dashboard itself, etc., and all of them work without any problem. Then I decided to use the kube-prometheus-stack operator in my Kubernetes cluster. The same ingress configuration that works everywhere else doesn't work here, and I get "Gateway Timeout". I looked into the logs and there isn't much information there.

Traefik is installed in the default namespace; the other apps and their ingresses are, of course, each in their own namespace!

K8s version: 1.21
Traefik version: 2.5.1

What part might be different from other ingresses?

Hello @mokhos

Do you use Kubernetes Ingress or the Traefik IngressRoute CRD? What is the default ingress class configured on your cluster?

If you create Ingress rules automatically, e.g. through other Helm charts such as kube-prometheus-stack, they will create standard Kubernetes Ingress resources; you just need to make sure that Traefik is the default ingress controller in your cluster.

See some useful links:

https://doc.traefik.io/traefik/providers/kubernetes-ingress/#ingressclass
https://doc.traefik.io/traefik/providers/kubernetes-ingress/
https://doc.traefik.io/traefik/providers/kubernetes-crd/
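
For reference, a minimal sketch of an IngressClass marked as the cluster default for Traefik's kubernetes-ingress provider (the metadata name here is just an example):

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: traefik
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: traefik.io/ingress-controller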

Hi @jakubhajek

Thanks for your answer.

I use Kubernetes Ingress, and Traefik is my default ingress controller. I configure every ingress like the example below. This is the ingress configuration for Grafana in the kube-prometheus-stack Helm chart.

ingress:
  ## If true, Grafana Ingress will be created
  ##
  enabled: true
  ingressClassName: traefik # This field is required in order for Traefik to pick up the Ingress!
  ## Annotations for Grafana Ingress
  ##
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"

  ## Labels to be added to the Ingress
  ##
  labels: {}

  ## Hostnames.
  ## Must be provided if Ingress is enabled.
  ##
  # hosts:
  #   - grafana.domain.com
  hosts:
    - grafana.example.com

  ## Path for grafana ingress
  path: /

  pathType: Prefix

And this is how they appear in the Traefik dashboard:

But when I try to reach them via the browser, I get "Gateway Timeout"!

P.S.: This is exactly how I configured my other apps, and they work without any problem.
P.S. 2: I have already watched your YouTube video about Traefik configuration.

Hey @mokhos

Thanks for giving more details.

Can you check which Kubernetes services (those generated for Grafana, Alertmanager, and Prometheus by the Kube-Prometheus-Stack Helm chart) are attached to the created Ingress resources?

You can also consider adding the following option to your static configuration file:

https://doc.traefik.io/traefik/providers/kubernetes-crd/#allowcrossnamespace

or pass it as a CLI argument if you use the official Traefik Helm chart.

It seems that the resources from the Prometheus stack are placed in a dedicated namespace while Traefik is deployed in its own, so setting allowCrossNamespace: true should solve the issue.
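
A minimal sketch of the two ways to enable it (static configuration file, or additionalArguments in the official Helm chart values):

# Static configuration (e.g. traefik.yml)
providers:
  kubernetesCRD:
    allowCrossNamespace: true

# Or as a CLI argument via the Helm chart values
additionalArguments:
  - "--providers.kubernetescrd.allowcrossnamespace=true"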

Thanks @jakubhajek

If I understand your question correctly, this would be the answer:
This is the service for the Grafana ingress as it appears in the Traefik dashboard.

Regarding "allowCrossNamespace: true": I'm not using kubernetes-crd but kubernetes-ingress. Anyway, I added that option to make sure I'm not missing anything, but no luck.

As you said, Traefik is installed in the default namespace and the other apps in their own namespaces.

Hi @mokhos

Is there any information in the log file?
The setup seems to be correct, because based on the screenshot you sent, the router and the service have already been created.

Hi @jakubhajek

Yes, the setup is fine. As I said before, I configured other apps with the same config.
The funny part is that the ingress for a standalone Grafana Helm chart (not the Grafana within kube-prometheus-stack) works fine!!

The logs also aren't giving me much information, which is weird! These are the logs for a call to one of the apps in the stack (here Prometheus):

time="2021-09-13T06:43:59Z" level=debug msg="vulcand/oxy/roundrobin/rr: begin ServeHttp on request" Request="{\"Method\":\"GET\",\"URL\":{\"Scheme\":\"\",\"Opaque\":\"\",\"User\":null,\"Host\":\"\",\"Path\":\"/\",\"RawPath\":\"\",\"ForceQuery\":false,\"RawQuery\":\"\",\"Fragment\":\"\",\"RawFragment\":\"\"},\"Proto\":\"HTTP/1.1\",\"ProtoMajor\":1,\"ProtoMinor\":1,\"Header\":{\"Accept\":[\"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\"],\"Accept-Encoding\":[\"gzip, deflate\"],\"Accept-Language\":[\"en-GB,en;q=0.5\"],\"Cache-Control\":[\"max-age=0\"],\"Connection\":[\"keep-alive\"],\"Upgrade-Insecure-Requests\":[\"1\"],\"User-Agent\":[\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:92.0) Gecko/20100101 Firefox/92.0\"],\"X-Forwarded-Host\":[\"prometheus.example.com\"],\"X-Forwarded-Port\":[\"80\"],\"X-Forwarded-Proto\":[\"http\"],\"X-Forwarded-Server\":[\"traefik-785ccc8f95-pvtg9\"],\"X-Real-Ip\":[\"10.0.0.176\"]},\"ContentLength\":0,\"TransferEncoding\":null,\"Host\":\"prometheus.example.com\",\"Form\":null,\"PostForm\":null,\"MultipartForm\":null,\"Trailer\":null,\"RemoteAddr\":\"10.0.0.176:37229\",\"RequestURI\":\"/\",\"TLS\":null}"
time="2021-09-13T06:43:59Z" level=debug msg="vulcand/oxy/roundrobin/rr: Forwarding this request to URL" Request="{\"Method\":\"GET\",\"URL\":{\"Scheme\":\"\",\"Opaque\":\"\",\"User\":null,\"Host\":\"\",\"Path\":\"/\",\"RawPath\":\"\",\"ForceQuery\":false,\"RawQuery\":\"\",\"Fragment\":\"\",\"RawFragment\":\"\"},\"Proto\":\"HTTP/1.1\",\"ProtoMajor\":1,\"ProtoMinor\":1,\"Header\":{\"Accept\":[\"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\"],\"Accept-Encoding\":[\"gzip, deflate\"],\"Accept-Language\":[\"en-GB,en;q=0.5\"],\"Cache-Control\":[\"max-age=0\"],\"Connection\":[\"keep-alive\"],\"Upgrade-Insecure-Requests\":[\"1\"],\"User-Agent\":[\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:92.0) Gecko/20100101 Firefox/92.0\"],\"X-Forwarded-Host\":[\"prometheus.example.com\"],\"X-Forwarded-Port\":[\"80\"],\"X-Forwarded-Proto\":[\"http\"],\"X-Forwarded-Server\":[\"traefik-785ccc8f95-pvtg9\"],\"X-Real-Ip\":[\"10.0.0.176\"]},\"ContentLength\":0,\"TransferEncoding\":null,\"Host\":\"prometheus.example.com\",\"Form\":null,\"PostForm\":null,\"MultipartForm\":null,\"Trailer\":null,\"RemoteAddr\":\"10.0.0.176:37229\",\"RequestURI\":\"/\",\"TLS\":null}" ForwardURL="http://10.42.2.83:9090"

Hello @mokhos

Can you share your static configuration and examples of working and non-working IngressRoute resources?

Thank you,

Hi @jakubhajek
Thanks for getting back to me.

I used the official Traefik Helm chart with these additionalArguments:

  - "--providers.kubernetesingress.ingressclass=traefik"
  - "--providers.kubernetescrd"
  - "--providers.kubernetescrd.allowcrossnamespace"
  - "--log.level=DEBUG"

This is the working IngressRoute for Traefik-dashboard itself:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
spec:
  routes:
  - match: Host(`traefik.example.com`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))
    kind: Rule
    services:
    - name: api@internal
      kind: TraefikService
    middlewares:
      - name: auth
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: auth
spec:
  basicAuth:
    secret: authsecret

---
apiVersion: v1
kind: Secret
metadata:
  name: authsecret
  namespace: default

data:
  users: |2
    xxxxxxxxxxx

This is the working Kubernetes Ingress for Kibana:

ingress:
  enabled: true
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: kibana.example.com
      paths:
        - path: /
  ingressClassName: traefik

And this part is from the kube-prometheus-stack Helm chart values:

ingress:
  ## If true, Grafana Ingress will be created
  ##
  enabled: true
  ingressClassName: traefik # This field is required in order for Traefik to pick up the Ingress!
  ## Annotations for Grafana Ingress
  ##
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"

  ## Labels to be added to the Ingress
  ##
  labels: {}

  ## Hostnames.
  ## Must be provided if Ingress is enabled.
  ##
  # hosts:
  #   - grafana.domain.com
  hosts:
    - grafana.example.com

  ## Path for grafana ingress
  path: /

  pathType: Prefix

I also tried an IngressRoute for grafana based on the official guide like this:


apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: grafana
  namespace: prometheus
spec:
  entryPoints:                      # [1]
    - web
  routes:                           # [2]
  - kind: Rule
    match: Host(`grafana.example.com`) || (Host(`grafana.example.com`) && Path(`/`)) # [3]
    priority: 10                    # [4]
    # middlewares:                    # [5]
    # - name: middleware1             # [6]
    #   namespace: default            # [7]
    services:                       # [8]
    - kind: Service
      name: prometheus-grafana
      namespace: prometheus
      passHostHeader: true
      port: 80                      # [9]
      responseForwarding:
        flushInterval: 1ms

As you can see, the ingress config is the same across all my apps. I believe this is a kube-prometheus-stack Helm chart issue.

I opened an issue in their GitHub repo.

Thanks for sharing your config files.

Can you please check whether the prometheus-grafana service exists in the prometheus namespace? Please note that the service name created by the Helm chart will be prefixed with the release name.
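
For example (assuming the chart was installed into the prometheus namespace):

kubectl get svc -n prometheus | grep grafana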

I use Kube-Prometheus-Stack with Traefik Proxy and IngressRoute with no issues; everything works correctly. I created all IngressRoute routers manually instead of using the Helm chart.

Hi @jakubhajek

Yes, all the related services exist in the "prometheus" namespace. Please see below:

Before replying to you, I tried removing the chart and installing it from scratch without the ingress config. I then configured the ingress separately, as you did, but still no luck!

Can you send some screenshots or your config file so I can see what you did?

Hey @mokhos

Here are the services created by Kube-Prometheus-Stack in the monitoring namespace:

❯ k get svc
NAME                                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                            ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP   103d
kube-prometheus-stack-alertmanager               ClusterIP   10.233.0.169    <none>        9093/TCP                     103d
kube-prometheus-stack-grafana                    ClusterIP   10.233.57.176   <none>        80/TCP                       103d
kube-prometheus-stack-kube-state-metrics         ClusterIP   10.233.21.35    <none>        8080/TCP                     103d
kube-prometheus-stack-operator                   ClusterIP   10.233.33.28    <none>        443/TCP                      103d
kube-prometheus-stack-prometheus                 ClusterIP   10.233.1.156    <none>        9090/TCP                     103d
kube-prometheus-stack-prometheus-node-exporter   ClusterIP   10.233.35.137   <none>        9100/TCP                     103d
prometheus-operated                              ClusterIP   None            <none>        9090/TCP                     103d

Here is the example IngressRoute:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: grafana
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`grafana-prod.example.com`)
      services:
        - kind: Service
          name: kube-prometheus-stack-grafana
          passHostHeader: true
          namespace: monitoring
          port: 80
  tls:
    certResolver: le

I haven't made any special changes to get it working.

Can you check whether the DNS record for Grafana points correctly to your Traefik instance? Maybe the request is not able to reach Traefik because of that.
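
A quick way to compare the two (the hostname is the example from your config, and this assumes the Traefik service is named traefik in the default namespace):

# What the Grafana hostname resolves to
dig +short grafana.example.com

# External IP of the Traefik service it should match
kubectl get svc traefik -n default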

No, I don't have a DNS problem either. :sweat_smile:

All apps point correctly to my Traefik instance.

Hi @mokhos

Just wondering if you have any updates concerning that topic.

Hi @jakubhajek

No, unfortunately; I'm stuck for now.
I decided to work on something else for a while and then come back to it.

I also raised this issue on the kube-prometheus-stack GitHub repo, and they didn't see a problem either!

For some unknown reason, kube-prometheus-stack has to be installed in a namespace other than "prometheus".

I removed it and installed it in the "monitoring" namespace.
Now everything works fine.
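
For reference, the reinstall was roughly along these lines (release name, repo alias, and values file are placeholders for whatever your setup uses):

helm uninstall kube-prometheus-stack -n prometheus
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring --create-namespace -f values.yaml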

Hi Mokhos,

Thanks for your answer.

However, I am not able to replicate it, so I can't confirm that as the solution. The problem is probably somewhere else, strictly related to the environment where Traefik and the Prometheus stack with all its components have been deployed.

I've just created a test environment and deployed Traefik with the allowCrossNamespace flag set to true and Kube-Prometheus-Stack in the prometheus namespace. I created an IngressRoute pointing to the Grafana service and did not experience the issue you described.

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: grafana
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`grafana.waw.demo.traefiklabs.tech`)
      services:
        - kind: Service
          name: promo-grafana
          passHostHeader: true
          namespace: prometheus
          port: 80
  tls: {}

Anyway, I am glad that you solved your problem, but I would really love to know the root cause.

Thank you and have a good day :slight_smile:
Jakub,


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.