Allow/Deny IPs in middleware

Hi
I'm using Traefik as the ingress controller (through IngressRoutes) on my Kubernetes cluster.
So far, the routes have only been exposed to my local network, but now I want to expose some of the services to the internet.
Before moving to Kubernetes, I had the services exposed through Nginx. On the services I only wanted to expose locally, I simply added an allow/deny list. I'm looking to do the same thing with Traefik: only allow certain IPs or IP ranges, and otherwise return a 403.

I know the ipWhiteList middleware exists, but it only seems to look at the entries in X-Forwarded-For. That header, however, is not present in my case. I think this is because clients hit Traefik directly rather than going through another proxy, though it may also be missing because I'm misunderstanding something.

What would be an alternative way to allow/deny IPs or IP ranges for specific IngressRoutes?

Hi @Cyborgium ,

I'm glad you're giving Traefik a run in your Kubernetes environment. I believe that IP deny/allow lists are not the way to go to separate internal from public traffic in a Kubernetes environment, or even in cloud environments in general; that may be a bias carried over from your familiarity with Nginx, but feel free to disagree of course 🙂

Here is my take on it. First, you need to understand the concept of a Service in Kubernetes and how it handles publishing on your cluster, as that affects Traefik as well. Basically, you have a few options to expose a service on Kubernetes so it can be reached from the outside; the ones most relevant to Traefik as an ingress controller are NodePort and LoadBalancer.

NodePort opens a port on every node of the cluster and forwards traffic from it straight to the pods; here you need to take care of the load balancing in front of the service yourself (see the sketch below). A LoadBalancer service, on the other hand, relies on your Kubernetes cloud provider's ability to provision an external load balancer that round-robins between all available pods for that service.
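For illustration, a minimal NodePort Service could look like this (just a sketch; the name and ports are placeholders):

#Example NodePort service (sketch)
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  ports:
    - port: 80          # service port inside the cluster
      targetPort: 8080  # container port on the pods
      nodePort: 30080   # port opened on every node (default range 30000-32767)
  selector:
    app: my-app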

Assuming you deployed Traefik with the Helm chart and the default values, it will be using a Service of type LoadBalancer.

Now, Traefik has the concept of entrypoints, which are basically the ports it listens on inside the container/pod for incoming connections. You can configure multiple entrypoints with different options.

Given what I pointed out about Kubernetes Services, you can see that each Traefik entrypoint needs a Kubernetes Service port mapped to it; that can be done with separate Service definitions or just by adding another port mapping to the existing Service.

And for me, here is the correct way to achieve isolation: create a separate Service definition that also targets Traefik, but this time with type ClusterIP. That type will not allocate an external load balancer or expose a port on the node; instead it only allocates an IP address valid inside your Kubernetes cluster, which can be reached by any other service running in it.

Here are snippets of an example:

#Traefik static configuration
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
  internal:
    address: ":6000"
#Kubernetes service manifests
apiVersion: v1
kind: Service
metadata:
  name: "proxy-svc"
  namespace: traefik
  labels:
    app: traefik
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
  selector:
    app: traefik
---
apiVersion: v1
kind: Service
metadata:
  name: "proxy-svc"
  namespace: traefik
  labels:
    app: traefik
spec:
  type: ClusterIP
  ports:
    - name: internal
      port: 6000
      targetPort: 6000
  selector:
    app: traefik

Then, on your IngressRoutes, you can always specify the entrypoints they are attached to:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
spec:
  entryPoints:
    - internal
  routes:
  - match: Host(`whoami.docker.localhost`)
    kind: Rule
    services:
    - name: whoami
      port: 80
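
By the way, if you deployed through the Helm chart, the extra entrypoint and the service exposure can also be expressed in your values instead of hand-written manifests. A rough sketch below; the exact keys depend on your chart version, so treat them as assumptions:

#values.yaml (sketch; keys assume a Traefik v2-era chart)
service:
  type: LoadBalancer   # the chart's default
ports:
  internal:
    port: 6000         # declares an "internal" entrypoint on :6000
    expose: false      # keep it off the externally exposed service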

Thank you for your detailed response!

I do indeed have Traefik running behind a LoadBalancer IP.

A couple of hours after creating this topic, I found out that when using ipWhiteList without any strategy, it returns a 403 if the value of the Real-IP header is not in the whitelist. This would definitely be an option for me.
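
In case anyone finds this later, a middleware along these lines is what I mean (a sketch; the name and CIDR ranges are just examples):

#IP whitelist middleware and a route using it (sketch)
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: local-only
spec:
  ipWhiteList:
    sourceRange:
      - 192.168.0.0/16  # example local network range
      - 10.0.0.0/8
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami-local
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`whoami.docker.localhost`)
    kind: Rule
    middlewares:
    - name: local-only
    services:
    - name: whoami
      port: 80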

I do like your suggested way of doing things as well: have one entrypoint that is only accessible from the local network, and give the services that should be reachable both inside and outside the local network two entrypoints. I suppose it is also more robust, since it doesn't rely on the load balancer leaving the Real-IP header untouched (which it updates in some cases).
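
On a related note, setting externalTrafficPolicy to Local on the LoadBalancer Service is supposed to make Kubernetes preserve the original client source IP; a sketch, and how well it works depends on the cloud provider:

#Preserve the client source IP on the LoadBalancer service (sketch)
apiVersion: v1
kind: Service
metadata:
  name: "proxy-svc"
  namespace: traefik
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # note: only nodes running a Traefik pod will receive traffic
  ports:
    - name: http
      port: 80
      targetPort: http
  selector:
    app: traefik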
