Port-forward works but IngressRouteTCP does not

Hello,

I am trying to expose a gRPC service (the SigNoz OTel collector) through a Traefik IngressRouteTCP. I have attempted the following configuration:

Helm chart values:

  service:
    nodeSelector:
      agentpool: traefik
    spec:
      loadBalancerIP: ${var.traefik_service_ip}
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9100"
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  additionalArguments:
  - --log.level=DEBUG
  - --metrics.prometheus=true
  - --metrics.prometheus.entryPoint=metrics
  - --metrics.prometheus.buckets=0.1,0.3,1.2,5.0
  ports:
    web:
      redirectTo: websecure
    otel:
      port: 4317
      expose: true
      exposedPort: 4317
      protocol: TCP
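
If I understand the chart correctly, the ports block above should produce an "otel" entrypoint listening on 4317 and expose that port on the Traefik LoadBalancer Service. My reconstruction of the roughly equivalent static configuration (not pulled from the running pod, so treat it as an assumption):

  entryPoints:
    otel:
      # TCP entrypoint on 4317; this is what the "otel" key in the ports block should map to
      address: ":4317/tcp"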

IngressRouteTCP:

  apiVersion: traefik.containo.us/v1alpha1
  kind: IngressRouteTCP
  metadata:
    name: signoz-otel
    namespace: telemetry
  spec:
    entryPoints:
    - otel
    routes:
    - match: HostSNI(`*`)
      kind: Rule
      services:
      - name: signoz-otel-collector
        kind: Service
        port: otlp
        nativeLB: false

It should be noted that kubectl -n telemetry port-forward <pod-name> 4317:4317 results in a working SigNoz OTel collector, so the collector itself is reachable and healthy. I have tried both named and unnamed ports, and with and without the nativeLB flag (the numeric-port variant is sketched below).
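
For reference, the numeric-port variant simply swaps the named port for the number (4317 being the collector's OTLP gRPC port); everything else in the manifest above stays the same:

      services:
      - name: signoz-otel-collector
        kind: Service
        # numeric port instead of the named "otlp" port; the result was the same either way
        port: 4317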

Any suggestions on how to achieve what should be a relatively simple setup would be greatly appreciated!

Thanks,
Brian

Some debug logs showing the root cause, but offering no further enlightenment:

Error while handling TCP connection: readfrom tcp 10.32.4.59:59892->10.32.4.51:4317: read tcp 10.32.4.59:4317->10.32.4.179:28136: read: connection reset by peer

These IPs make no sense to me. This is clearly the source of my problem, but there's nothing here to suggest why.

In retrospect, I suppose there's no reason to use an IngressRouteTCP here except to save an IP address. This is most easily solved by exposing the collector through its own LoadBalancer Service and cutting Traefik out altogether (sketched below).
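
Something along these lines is what I mean. The selector labels are a guess and need to match whatever labels the SigNoz chart actually puts on the collector pods; the internal-load-balancer annotation is the same one used on the Traefik Service above:

  apiVersion: v1
  kind: Service
  metadata:
    name: signoz-otel-collector-lb
    namespace: telemetry
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  spec:
    type: LoadBalancer
    selector:
      # assumption: adjust to the labels actually set on the collector pods
      app.kubernetes.io/name: signoz-otel-collector
    ports:
    - name: otlp
      port: 4317
      targetPort: 4317
      protocol: TCP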