Strange 60-second timeout using IngressRouteTCP

Hi folks, I ran into some strange behaviour using a TCP IngressRoute (IngressRouteTCP) to expose a simple MariaDB pod (Kubernetes cluster v1.34, bare metal, installed via kubeadm).

This is my deployment:

apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mysql-ingressroutetcp
  namespace: mysql-staging
spec:
  entryPoints:
    - mysql
  routes:
  - match: HostSNI(`*`)
    services:
    - name: mysql
      port: 3306
---
kind: Service
apiVersion: v1
metadata:
  name: mysql
  namespace: mysql-staging
spec:
  ports:
    - name: mysql
      port: 3306
      targetPort: 3306
  selector:
    app: mysql
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mysql
  namespace: mysql-staging
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mariadb:11.8
          imagePullPolicy: Always
          ports:
            - name: mysql
              containerPort: 3306
              protocol: TCP
          env:
          - name: MARIADB_ROOT_PASSWORD
            value: MySecretPassword
          resources: {}
      restartPolicy: Always
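For context (not shown above), the custom `mysql` entryPoint referenced by the IngressRouteTCP also has to exist in Traefik's static configuration — a rough sketch, assuming the file provider; the entryPoint name and port here must match `spec.entryPoints` and the exposed listener:

```yaml
# Assumed Traefik static configuration (sketch, not the exact setup):
# the "mysql" entryPoint referenced in the IngressRouteTCP must be
# declared here, listening on the port the load balancer forwards to.
entryPoints:
  mysql:
    address: ":3306"
```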

This setup is “almost” working, but it introduces a delay of exactly 60 seconds when connecting to the service from outside the cluster:

$ time mysql -h ip.addres.of.traefik.load.balancer -u root -p << EOF
> quit
> EOF
Enter password:

real    1m2.624s
user    0m0.052s
sys     0m0.010s

(connection is successful!)

By contrast, every web service deployed on that cluster (via a Traefik IngressRoute) works correctly, with no delay.

It smells like a timeout somewhere. Any ideas?

TIA,

Giulio

Check this post.

@bluepuma77 the referenced post talks about a service that takes more than a minute to respond, so the Traefik router times out.

My scenario is different: the service responds immediately (it’s a simple connection to a MariaDB server); the delay is introduced by something in the middle, and in the middle there is only the Traefik TCP router.

To me it's still not clear when this happens and what your setup looks like. With k8s there is a lot of configuration (ingress, gateway, etc.), plus Traefik as a reverse proxy in between, with its own security timeouts. Is the database using shared storage? Maybe the clients use connection pooling and automatically try to re-establish connections.
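One concrete knob that might be worth ruling out (just a guess, not a confirmed diagnosis): Traefik v3 entryPoints carry `transport.respondingTimeouts`, and the 60-second default for `readTimeout` is a suspicious match for the observed delay. Via the Traefik Helm chart these can be tuned per port, roughly like this:

```yaml
# Sketch for the Traefik Helm chart values.yaml — an assumption to
# test, not a recommended production setting. It relaxes the
# entryPoint transport timeouts for the "mysql" port to see whether
# the 60s readTimeout default is involved.
ports:
  mysql:
    port: 3306
    expose:
      default: true
    exposedPort: 3306
    transport:
      respondingTimeouts:
        readTimeout: 0   # 0 disables the timeout; use with care
```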

I have now managed to make it work correctly. My setup is Helm-based, with this snippet in the values.yaml file:

ports:
  mysql:
    port: 3306
    hostPort:
    containerPort:
    expose:
      default: true
    exposedPort: 3306
    targetPort:
    protocol: TCP

Before, my (non-working) setup was:

ports:
  mysql:
    port: 3306
    expose:
      default: true
    exposedPort: 3306

I don’t know whether this change did the magic, because during the helm upgrade my Traefik version was also bumped from 3.5.x to 3.6.x.

Anyway, the mysql service is now responsive, with no delay/timeout (it was only a test deployment, with no PV/PVC or other extras).

Thank you for your time.