Traefik Proxy on native Kubernetes: can you have unique health checks per service?

With Traefik Proxy (v3.6) deployed on native (non-EKS) Kubernetes on AWS, how can I define a unique health check per back-end service that Traefik will use to verify back-end availability (TCP connect/disconnect or HTTP GET)?

I need to support ingress for both Layer 4 (TCP/TLS) and Layer 7 (HTTP/HTTPS) protocols. When I add a healthCheck, the TraefikService complains about an ExternalName requirement ('healthCheck allowed only for ExternalName service: '). Why?
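For context, this is roughly the shape of what I tried (names, namespace, port, and health-check values are placeholders); the healthCheck block is what triggers the error, because my-backend is a plain ClusterIP Service rather than an ExternalName Service:

apiVersion: traefik.io/v1alpha1
kind: TraefikService
metadata:
  name: my-backend-svc
  namespace: backend-ns
spec:
  weighted:
    services:
      - name: my-backend            # plain ClusterIP Service, not ExternalName
        namespace: backend-ns
        port: 8080
        healthCheck:                # <-- this block is rejected with the ExternalName error
          path: /healthz
          interval: 10s
          timeout: 3s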

I am using Helm with an overrides values file that mirrors the traefik/values.yaml file from the traefik-helm-chart repo. This overrides file specifies the static configuration of the spec.ports and the metadata.annotations, as well as the static endpoint configurations. I will also be adding/removing back-end services (listeners) programmatically at run time, which the Traefik service will have to pick up and expose (TBD at this time; just informational).
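For illustration, the ports section of my overrides file looks roughly like this (port numbers and the extra TCP entry point are placeholders, and the exact expose syntax depends on the chart version):

ports:
  web:
    port: 8000
    exposedPort: 80
    protocol: TCP
  websecure:
    port: 8443
    exposedPort: 443
    protocol: TCP
    tls:
      enabled: true
  tcp-backend:                      # placeholder entry point for a raw TCP/TLS listener
    port: 9000
    expose:
      default: true                 # expose syntax differs across chart versions
    exposedPort: 9000
    protocol: TCP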

I have followed most (all?) of the online Traefik docs that I could find. I've added the following load-balancer metadata.annotations to the Traefik service (a sketch of how they sit in the values file follows the list):

service.beta.kubernetes.io/aws-load-balancer-id: <arn-of-nlb-to-use>
service.beta.kubernetes.io/aws-load-balancer-name: <nlb-name>
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
service.beta.kubernetes.io/aws-load-balancer-subnets: <comma separated list of public subnetIds>
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: 'preserve_client_ip.enabled=true,load_balancing.cross_zone.enabled=true'
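In the overrides file these sit under the Traefik Service block, roughly like this (the ARN, name, and subnet IDs are placeholders, and the remaining annotations from the list above follow the same pattern):

service:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-aaa,subnet-bbb
    # ...plus the other annotations listed above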

The predefined NLB ARN is properly associated with a hosted zone in Route 53, and the NLB is reachable from outside the Kubernetes/AWS deployment.

I have Traefik running in its own namespace and the back-end services and pods running in another namespace. The Traefik pods and service know about the other namespace, yet a Helm installation of the above config always fails to verify the back-end services.
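For reference, this is roughly how I point Traefik at the second namespace in the values overrides (namespace names are placeholders):

providers:
  kubernetesCRD:
    allowCrossNamespace: true        # routes in one namespace referencing Services in another
    namespaces:
      - traefik
      - backend-ns                   # placeholder for the back-end namespace
  kubernetesIngress:
    namespaces:
      - traefik
      - backend-ns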

This configuration is based on our NGINX ingress configuration, which is working now; however, we need to validate/authenticate TCP/TLS connectivity to the back-end (and pass the client IP through to the back-end pod), which is why we are trying to move to Traefik.
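For the Layer 4 side, this is roughly the shape of the route I have in mind (host, service name, port, and namespace are placeholders, and I'm not certain the proxyProtocol block is the right way to forward the client IP):

apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: my-tcp-route
  namespace: backend-ns
spec:
  entryPoints:
    - tcp-backend                    # the extra TCP entry point from the ports sketch above
  routes:
    - match: HostSNI(`tcp.example.com`)
      services:
        - name: my-tcp-backend
          port: 9000
          proxyProtocol:
            version: 2               # intended to forward the client IP to the back-end pod
  tls:
    passthrough: true                # let the back-end terminate TLS itself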

Using the Traefik dashboard, I can see that Traefik finds the back-end services and ports. From the dashboard's perspective, everything looks good: all of the ports and services, from service/traefik to the Traefik pods to the back-end services, seem to line up as expected (the dashboard displays the proper pod IPs in the HTTP Services > Servers and TCP Services > Servers views).

Any suggestions or examples of how to get Traefik to work as expected when deployed to Kubernetes would be greatly appreciated.

Thanks in advance.
Greg