Traefik Mesh not resolving in new AWS EKS deployment

AWS: eks.4
CoreDNS: v1.8.4-eksbuild.1
Mesh: helm-4.0.2

Values:
kubedns: false

No logs from the controller. CoreDNS logs:

.:53
maesh.:53
traefik.mesh.:53
[INFO] plugin/reload: Running configuration MD5 = ff9bb101e62cade152ba36d623f72657
CoreDNS-1.8.4
linux/amd64, go1.13.15, 053c4d5c
[INFO] Reloading
[INFO] plugin/reload: Running configuration MD5 = 47d57903c0f0ba4ee0626a17181e5d94
[INFO] Reloading complete

Issue: svc.ns.svc.cluster.local resolves fine, but svc.ns.traefik.mesh does not resolve.
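A quick way to reproduce the failure from inside the cluster (the pod name and busybox image here are just illustrative choices):

```shell
# Throwaway pod: the cluster.local name should always resolve, while the
# traefik.mesh name only resolves if CoreDNS still has the mesh zone patch.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup svc.ns.traefik.mesh
```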

resolv.conf doesn't show any new entries in the search list.

Odd detail: it worked for about 30 seconds with no changes on my end. I got four successful curl -Lv svc.ns.traefik.mesh requests, then it stopped resolving. The controller/proxy contained NO LOGS!

Resolved.

eksctl delete addon --cluster my_cluster --region us-east-1 --name coredns --preserve

The coredns ConfigMap was being reverted by the EKS managed add-on. Deleting the add-on with --preserve removes it from EKS management while leaving the CoreDNS resources running in the cluster, so the Traefik Mesh patch stops being reverted.
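For context, the Traefik Mesh controller patches the coredns ConfigMap with an extra server block for the mesh zone, roughly of this shape (a simplified sketch — the real rewrite rules generated by the controller are more involved):

```
traefik.mesh.:53 {
    errors
    rewrite continue {
        # Rewrites svc.ns.traefik.mesh names to the mesh proxy service
        # in the cluster (regex and target shown here are simplified).
        name regex ([a-zA-Z0-9-_]*)\.([a-zA-Z0-9-_]*)\.traefik\.mesh {1}-{2}.traefik-mesh.svc.cluster.local
    }
    forward . /etc/resolv.conf
}
```

When the managed add-on reconciles the ConfigMap, this block disappears and the traefik.mesh names stop resolving, while cluster.local names keep working.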

Sorry for responding to a very old issue, but I'm curious how you figured out it was the add-on reverting the ConfigMap. I'm having a very similar issue on k3s, where DNS resolves for about a minute after deploying the Traefik Mesh Helm chart and then something seems to break. I'm also not getting any helpful logs from the controller/proxy, so any thoughts on where to look would be much appreciated!

I realized that upstream traffic was breaking because the traefik.mesh names would fail to resolve. I simply checked the CoreDNS config after installing and saw the patch; then I checked again after it broke and saw the patch had been reverted.
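The check itself was just snapshotting the Corefile at both points in time (file names here are illustrative):

```shell
# Snapshot the live Corefile right after installing the mesh...
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
  > corefile-after-install.txt
# ...and again once resolution breaks; a revert shows up as the
# traefik.mesh server block disappearing in the diff.
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
  > corefile-after-break.txt
diff corefile-after-install.txt corefile-after-break.txt
```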

Since coredns is managed, I checked the managed fields:

kubectl get deployment/coredns -n kube-system -o yaml

This led me to realize that the ConfigMap patch was overwriting managed fields, which were then overwritten again when the managed add-on reconciled CoreDNS.
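To see the ownership directly, the managedFields on the ConfigMap itself name the field manager that keeps rewriting it (requires kubectl 1.21+ for the flag):

```shell
# Each managedFields entry names a field manager; an EKS-owned manager
# on .data.Corefile explains why manual patches keep getting reverted.
kubectl -n kube-system get configmap coredns -o yaml --show-managed-fields
```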