Hi folks, I'm seeing some strange behaviour using a TCP IngressRoute to expose a simple MariaDB pod (Kubernetes cluster v1.34, bare metal, installed via kubeadm).
These are my manifests:
```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mysql-ingressroutetcp
  namespace: mysql-staging
spec:
  entryPoints:
    - mysql
  routes:
    - match: HostSNI(`*`)
      services:
        - name: mysql
          port: 3306
---
kind: Service
apiVersion: v1
metadata:
  name: mysql
  namespace: mysql-staging
spec:
  ports:
    - name: mysql
      port: 3306
      targetPort: 3306
  selector:
    app: mysql
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mysql
  namespace: mysql-staging
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mariadb:11.8
          imagePullPolicy: Always
          ports:
            - name: mysql
              containerPort: 3306
              protocol: TCP
          env:
            - name: MARIADB_ROOT_PASSWORD
              value: MySecretPassword
          resources: {}
      restartPolicy: Always
```
This setup mostly works, but it introduces an exactly 60-second delay when connecting to the service from outside the cluster:
```shell
$ time mysql -h ip.addres.of.traefik.load.balancer -u root -p << EOF
> quit
> EOF
Enter password:

real    1m2.624s
user    0m0.052s
sys     0m0.010s
```
(connection is successful!)
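To isolate where the delay comes from, one check I can run (assuming kubectl access to the cluster; the Service name and namespace are the ones from the manifests above) is to bypass Traefik entirely with a port-forward and time the same connection:

```shell
# Forward the mysql Service straight to localhost, skipping Traefik.
kubectl -n mysql-staging port-forward svc/mysql 3306:3306 &

# Time the same connection without the proxy in the path.
time mysql -h 127.0.0.1 -P 3306 -u root -p -e 'SELECT 1'
```

If the delay disappears here, the proxy hop is somehow involved; if it persists, the problem is inside MariaDB itself.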
In contrast, every web service deployed on that cluster (via a Traefik IngressRoute) works correctly, with no delay.
It smells like a timeout somewhere. Any ideas?
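One hypothesis I'd like to rule out (an assumption on my part, not something I've confirmed): by default MariaDB performs a reverse DNS lookup on the source IP of every new connection, and since Traefik proxies the TCP stream, the source IP the server sees is the Traefik pod's, whose PTR lookup may have to time out before the handshake completes. A sketch of how I'd test that, by passing the standard `--skip-name-resolve` server option through the container args in the Deployment above:

```yaml
# Sketch: disable reverse DNS lookups on client connections.
# Same container as in the Deployment above, with args added.
containers:
  - name: mysql
    image: mariadb:11.8
    args:
      - --skip-name-resolve
```
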
TIA,
Giulio