In this guide, Traefik (and the container it runs in) uses ports 4443 (for HTTPS routing), 8000 (for HTTP routing), and 8080 (for accessing the API and web UI).
So on the host where you deployed that stack, it is up to you to bind the actual host ports to the ports used by the container, similarly to what is explained in this section: https://docs.traefik.io/v2.0/user-guides/crd-acme/#port-forwarding
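For example, a Service can map the standard ports on the outside to the container ports above. This is only a sketch (not taken from the guide); the name, the `app: traefik` selector, and the service type are assumptions that depend on your setup:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer      # or NodePort / ClusterIP with externalIPs, depending on the environment
  selector:
    app: traefik          # assumed label on the Traefik pods
  ports:
    - name: web
      port: 80            # port exposed by the Service
      targetPort: 8000    # HTTP entrypoint of the container
    - name: websecure
      port: 443
      targetPort: 4443    # HTTPS entrypoint
    - name: admin
      port: 8080
      targetPort: 8080    # API / web UI
```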
IngressRoute is just a CRD that traefik reads and uses as a dynamic configuration source. You still need to set up kubernetes with regard to how you want to set up your ports and how you want kubernetes to expose traefik. Traefik is just a container; you can plug it in any way you want as long as kubernetes allows it.
So when you say something like "the single point of entry in the cluster", this would be something that kubernetes manages, not traefik itself. Once you set up kubernetes this way and get the traffic flowing to traefik, traefik will take care of the rest and use IngressRoute resources to derive its routing rules.
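For example, an IngressRoute only describes the routing itself; everything about exposure still happens at the kubernetes level. A sketch (the host, backend service name, and port are made up):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: myapp
  namespace: default
spec:
  entryPoints:
    - web                        # Traefik entrypoint (port 8000 in this guide)
  routes:
    - match: Host(`myapp.example.com`) && PathPrefix(`/api`)
      kind: Rule
      services:
        - name: myapp            # assumed backend Service
          port: 80
```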
I also have this question. In Kubernetes, the preferred way to access the cluster externally seems to be a LoadBalancer Service or an Ingress. However, when deploying Kubernetes on bare metal (which I am doing), LoadBalancer is not an option. Another option is a NodePort, but given that the ports reserved for NodePort are in the 30000-32767 range, this is not ideal if you want to run on the standard HTTP/HTTPS ports. Configuring kubectl to forward ports is also a possibility, as it basically takes the place of a cloud provider's load balancer, but it is not ideal either.
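For example, a NodePort service is forced into that range, which is why it does not work for plain ports 80/443 (a sketch; names and numbers are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: NodePort
  selector:
    app: traefik
  ports:
    - name: web
      port: 8000
      targetPort: 8000
      nodePort: 30080   # must be within 30000-32767, so not 80
```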
So given these constraints, it seems that the best way to expose the cluster to the outside world is using an Ingress. So is my option here to use nginx as an Ingress that forwards all traffic to Traefik, and then let IngressRoutes specify the routing rules from there?
I would prefer to not have to go through nginx at all, if possible.
@zespri Ah in truth, I did not understand what he was saying at first. But yes, that should work also. The downside with that is that you can only run one instance of Traefik per node, which may or may not be an issue.
I did research this a little bit more and it seems like there are a few other options. One is to use a software load balancer such as MetalLB, or to configure nginx as a load balancer. Then you can use the LoadBalancer service type to expose your services. I still need to test these, and if I do I will try to come back and update this with more info.
What I ended up doing for now is specifying an externalIP on a ClusterIP service (more info here: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips). Here is an example of the service yaml I used (1.2.3.4 should be replaced by the ip address of a node that is publicly accessible):
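(A sketch of that manifest; the service name, selector, and ports are placeholders:)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: ClusterIP
  externalIPs:
    - 1.2.3.4           # the publicly reachable IP of a node
  selector:
    app: traefik
  ports:
    - name: web
      port: 80
      targetPort: 8000
    - name: websecure
      port: 443
      targetPort: 4443
```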
How were you planning to run more than one instance of traefik on the same node on the same ports? The only option I'm aware of is to use multiple IP addresses, and it looks like that's something that metallb can help with.
I would be interested in looking at what you read about that. I was not able to find any evidence that you can configure nginx to be a Service of LoadBalancer type in kubernetes. If you could link to where you saw that, I'd appreciate it a lot.
If you have additional external IPs available, it's a great solution, otherwise it's not much different from what was suggested in that post above yours.
I think I tried the externalIP approach some months ago, but since I do not come from a networking background I was not able to allocate a block of IP addresses to a single ubuntu machine without confusing kubernetes networking at the same time. I'm sure this is doable by a person who knows what they are doing, but I did not have the knowledge and time to follow this option through.
@zespri The problem is exactly as you describe. If I configure the container to use the host network, and I then connect to that node on that port, I am connecting directly to that container. But I am limited to one container per node since I am binding to ports on the node directly.
However, if I use the externalIP method, then requests to those ports on the node are first forwarded to the Service, which then forwards each request to one of the pods behind it. Since it is the Service, rather than the pods, that binds the ports, I can run more than one pod on the host.
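For comparison, the host-network setup that limits you to one pod per node looks roughly like this (a sketch; the image tag and entrypoint ports are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      hostNetwork: true               # binds directly to the node's ports
      containers:
        - name: traefik
          image: traefik:v2.2         # assumed version
          args:
            - --providers.kubernetescrd
            - --entrypoints.web.address=:80
            - --entrypoints.websecure.address=:443
```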
I could be wrong about nginx being able to be a LoadBalancer type in Kubernetes. I thought I saw something that said that, but I haven't tried it yet, and I cannot find where I saw it now.
Oh, you mean like two pods on the same node for load-balancing purposes? It did not occur to me because it sounds more practical to spread those out across nodes. I thought you wanted to run an independent traefik pod on the same port, which, come to think of it, makes little sense. But yes, if that's your goal, now I understand what you mean.
well, I need to answer again as I am still confused:
kubectl port-forward
works well. But instead I could (?) use an Ingress-Controller. With the option
--providers.kubernetesingress
I could enable Traefik 2.2 as an Ingress Controller (in addition to the CRD controller). Adding an Ingress "should" then give me external access to my Traefik (on ports 8080, 8000, and 4443 from the example): an Ingress rule at the front (to publish the ports) and then CRDs to define all further routing.
Still I cannot get it running. What am I missing? Can Traefik (installed as a Kubernetes pod) act as an Ingress controller and a CRD controller in parallel?
Kubernetes Ingress and Kubernetes IngressRoute (providers.kubernetesCRD) are two different dynamic configuration providers for traefik. You can use both or either; they are largely independent. Usually you do not have a reason to use both, as one is enough. With the traefik v2 release most options were provided via Kubernetes IngressRoute, but in 2.2 Kubernetes Ingress started to catch up.
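Concretely, each provider is just a switch in traefik's static configuration; roughly like this (a sketch of the container args, with entrypoint ports taken from the example above):

```yaml
# excerpt of the Traefik container args (sketch)
args:
  - --entrypoints.web.address=:8000
  - --entrypoints.websecure.address=:4443
  - --providers.kubernetescrd        # IngressRoute and the other Traefik CRDs
  - --providers.kubernetesingress    # standard Kubernetes Ingress objects
```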
If you are using a cloud provider, then you are better off using the Ingress Controller that they provide, and referring to their documentation (afaik azure uses traefik behind the scenes).
If you are on prem and do not have other ingress controllers, then traefik may help. I refer you to this user guide for an example setup.
Since it's kubernetes that manages the exposed ports, some kubernetes knowledge is required to make that work. I've been using traefik as a kubernetes ingress controller in production successfully for more than 2 years now.