Request to API in k8s cluster behind Traefik times out after 30 seconds

I have an API (link) for testing my Traefik config in my k8s cluster: it takes a query and echoes it back after a configurable delay. This is the command I use to test it:

curl -X 'POST' 'https://apps.peregimenez.com/corstest' -H 'accept: application/json' -H 'Content-Type: application/json' -H "Origin: https://peregimenez.com" -d '{ "query": "hello there", "sleep":30 }' -i --max-time 60

If the delay is under 30 seconds, everything works. If it is over 30 seconds, I get a 502 back. The Traefik debug log shows:

 time="2023-11-30T11:42:54Z" level=debug msg="'499 Client Closed Re │
│ quest' caused by: context canceled" 

I've tried setting higher timeouts in the Helm chart, and also setting them to 0 to disable them entirely, but nothing helps. What am I doing wrong?

This is my chart yaml:

ingressClass:
  enabled: true
  isDefaultClass: false

additionalArguments:
  - "--log.level=DEBUG"
  - "--providers.kubernetesingress.ingressclass=traefik"
  - "--ping"
  - "--entryPoints.websecure.transport.respondingTimeouts.readTimeout=300s"
  - "--entryPoints.websecure.transport.respondingTimeouts.writeTimeout=300s"
  - "--entryPoints.websecure.transport.respondingTimeouts.idleTimeout=300s"
  - "--entryPoints.websecure.transport.lifeCycle.graceTimeOut=300s"
  - "--entryPoints.web.transport.respondingTimeouts.readTimeout=300s"
  - "--entryPoints.web.transport.respondingTimeouts.writeTimeout=300s"
  - "--entryPoints.web.transport.respondingTimeouts.idleTimeout=300s"
  - "--entryPoints.web.transport.lifeCycle.graceTimeOut=300s"
  - "--serversTransport.forwardingTimeouts.responseHeaderTimeout=300s"
  - "--serversTransport.forwardingTimeouts.idleConnTimeout=300s"

dashboard:
  enabled: true
rbac:
  enabled: true

providers:
  kubernetesCRD:
    enabled: true

ports:
  web:
    port: 8000
    expose: true
    exposedPort: 80
    protocol: TCP
    nodePort: 32080

  websecure:
    port: 8443
    expose: true
    exposedPort: 443
    protocol: TCP
    nodePort: 32443

service:
  enabled: true
  type: NodePort

requests:
  cpu: "100m"
  memory: "100Mi"
limits:
  cpu: "100m"
  memory: "100Mi"

It turned out to be a GKE issue, not a Traefik one: backend services behind GKE's external HTTP(S) load balancer have a default timeout of 30 seconds. That also explains the logs: after 30 seconds the load balancer gives up, returns a 502 to the client, and closes its connection to Traefik, which Traefik records as a '499 Client Closed Request' / context canceled.
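
For anyone who wants to keep a NodePort Service behind a GKE Ingress instead, that timeout can be raised with a BackendConfig attached to the Service. A minimal sketch (traefik-backendconfig is just an example name):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: traefik-backendconfig
spec:
  timeoutSec: 300   # raise the backend service timeout from the 30 s default

The Service then references it with the annotation cloud.google.com/backend-config: '{"default": "traefik-backendconfig"}'.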

In my case, switching the Traefik service from NodePort to LoadBalancer fixed the timeout issue, presumably because a LoadBalancer Service on GKE provisions a passthrough (L4) network load balancer, which doesn't enforce any HTTP timeout on the backend. This is what my chart config looks like now:

ingressClass:
  enabled: true
  isDefaultClass: false

additionalArguments:
  - "--log.level=DEBUG"
  - "--providers.kubernetesingress.ingressclass=traefik"
  - "--ping"

dashboard:
  enabled: true
rbac:
  enabled: true

providers:
  kubernetesCRD:
    enabled: true

ports:
  web:
    port: 8000
    expose: true
    exposedPort: 80
    protocol: TCP

  websecure:
    port: 8443
    expose: true
    exposedPort: 443
    protocol: TCP

service:
  enabled: true

requests:
  cpu: "100m"
  memory: "100Mi"
