idleTimeout configuration in k8s not working - stream is closed after 60 seconds

We want to be able to close idle streams in order to detect that a client has disconnected. A TCP connection can be closed without a FIN message ever being sent, and TCP's own detection time is very long, so we need to configure a shorter idle timeout to detect such disconnections quickly. All of our streams send a keepalive message every 15 seconds to keep the stream alive.

We tried to do it like so:

apiVersion: traefik.io/v1alpha1
kind: ServersTransport
metadata:
  name: <ServersTransportName>
  namespace: <namespace>
spec:
  forwardingTimeouts:
    idleConnTimeout: 60s
    responseHeaderTimeout: 0s
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: <IngressRouteName>
  namespace: <namespace>
spec:
  entryPoints:
    - websecure
  routes:
    - match: HostRegexp(`<sub-domain>.<domain>.com`) && PathPrefix(`/api.somePath`)
      kind: Rule
      services:
        - name: <name>
          port: 8082
          scheme: h2c
          serversTransport: <ServersTransportName>
  tls:
    options:
      name: option-tls
    store:
      name: store-tls

But the connection is still closed after 60 seconds, even though we send a keepalive every 15 seconds.
Am I missing something?

@bluepuma77 I see that you are very active here, can you help me with this issue? Sorry if it's frowned upon to tag you directly like this, but I've been stuck on this issue for days now.

No, sorry, I'm just using Docker; the k8s config is really different. You can try Reddit.

Hello @daniel-lu,

The configuration you set (forwardingTimeouts on a ServersTransport) customizes the connections between Traefik and your backends.
The configuration you need is applied to the entrypoints instead, to manage incoming connections.

You can find the available options in the entrypoints documentation.
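As a sketch, assuming you keep the websecure entrypoint from your IngressRoute, the static configuration could look something like this (the 300s value is only an example; pick something comfortably above your 15s keepalive interval):

entryPoints:
  websecure:
    address: ":443"
    transport:
      respondingTimeouts:
        # Maximum duration an idle (keep-alive) client connection stays open.
        # The default is 180s; 0s disables the timeout entirely.
        idleTimeout: 300s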

Hi @nicomengin, and thanks for the reply.

I have a follow-up question:

I have a lot of traffic coming into the cluster via the Traefik ingress. Some of it, like the gRPC streams with keepalives, needs an idle timeout, but the rest doesn't.

All of this traffic comes in via the websecure entrypoint on port 443.

I know I can't have two entrypoints listening on the same address and port.

Is there a solution that doesn't involve creating a new entrypoint and forcing some of the traffic through a port other than 443?

Also, if I understand correctly, all of these entrypoint configurations can only be done in my Helm values.yaml (as static configuration) and not directly in the IngressRoute, right?
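If so, I guess it would be something like this in my values.yaml (the flag is my reading of the entrypoints CLI reference, and 300s is just an example):

additionalArguments:
  # Entrypoint options are static configuration, so (if I understand correctly)
  # they have to be passed to Traefik itself, not set on an IngressRoute.
  - "--entryPoints.websecure.transport.respondingTimeouts.idleTimeout=300s"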