Using Traefik 2.2 deployed in k8s with the following args (and the docker socket mounted as a hostPath Socket volume into the container in my Traefik pod), I am able to see both Docker Swarm and kubernetes-crd as providers in Traefik:
- --providers.docker.swarmmode
- --providers.docker=true
- --providers.kubernetescrd
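For reference, the relevant fragments of my Deployment look roughly like this (the volume name is illustrative; the socket path and `type: Socket` are the standard values):

```yaml
# Fragment of the Traefik Deployment pod template (volume name is illustrative)
spec:
  containers:
    - name: traefik
      image: traefik:v2.2
      args:
        - --providers.docker=true
        - --providers.docker.swarmmode
        - --providers.kubernetescrd
      volumeMounts:
        - name: docker-socket
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-socket
      hostPath:
        path: /var/run/docker.sock
        type: Socket
```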
Traefik has visibility of containers created via both orchestrators, and I am able to successfully route to services within Kubernetes with IngressRoutes/rules.
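The Kubernetes side uses ordinary IngressRoutes, something like this (the names and host are placeholders, not my exact config):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-app            # placeholder name
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`my-app.example.com`)   # placeholder host
      kind: Rule
      services:
        - name: my-app    # placeholder Kubernetes Service
          port: 80
```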
However I CANNOT route to swarm containers, although I can see docker swarm containers in the dashboard (with the correct IP listed for the container/service and with the correct router associated).
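The swarm services carry the usual Traefik v2 router/service labels under `deploy`, roughly like this stack fragment (service name, image, host, and network are placeholders, not my exact config):

```yaml
version: "3.7"
services:
  legacy-app:                         # placeholder service
    image: my-org/legacy-app:latest   # placeholder image
    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.legacy-app.rule=Host(`legacy.example.com`)
        - traefik.http.services.legacy-app.loadbalancer.server.port=8080
    networks:
      - traefik-overlay

networks:
  traefik-overlay:                    # placeholder external overlay network
    external: true
```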
I assume this is a routing issue between the k8s CNI and the docker swarm overlay network, as I am not able to ping or connect to any containers on an overlay network from the Traefik pod.
Is there any way of running this in a mixed mode so I can proxy both swarm and kubernetes via a single traefik deployment?
I understand this is probably a fairly "out there" request, but the reasoning behind it is that I am trying to migrate 50-odd swarm services to Kubernetes, and I don't wish to do this in a big-bang approach, so this seemed like a "fun" thing to try!
I'm sorry, but unless someone who has configured it this unusual way happens to read this topic, the chances of seeing much activity in your thread are fairly low. In essence it's a networking problem, not a Traefik problem, which you proved admirably by:
> I am not able to ping or connect to any containers on an overlay network from the traefik pod
So the only advice I can give is: concentrate on the problem quoted above and try to solve it; this is the key. In addition to some networking expertise, which readers of this forum may or may not have, it also requires knowledge of your network infrastructure, which we most definitely do not have.
I'm personally not a networking expert, and some time ago I was faced with a tricky networking problem: I was trying to set up Network Policies in Kubernetes, and some of the pods that needed to be covered by the policy used in-cluster access to the API. It took me a week to figure out how these calls are routed at the networking layer and which IPs I had to whitelist. It also turned out to depend on the particular networking provider for Kubernetes (in my case Calico). It was also very difficult to get community support for that issue.
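Just to illustrate the kind of whitelisting I mean, a sketch (not my exact policy; the labels, CIDR and port are placeholders, and the real values depend on your cluster and CNI):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apiserver-egress   # illustrative name
  namespace: my-namespace        # placeholder
spec:
  podSelector:
    matchLabels:
      app: needs-api-access      # placeholder label
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.96.0.1/32   # placeholder: API server ClusterIP/endpoint IP
      ports:
        - protocol: TCP
          port: 443              # may be 6443 depending on the cluster
```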
I'm writing all of this just to show that I can relate: the problem you are facing is difficult. Good luck!