Implementing server load balancing strategies

Hello everyone,

I'm looking into adding load balancing algorithms to traefik (like lowest response time, least utilized, etc.) as part of a research project. I want to do it in a way that the result could maybe find its way into the main tree at some point. I was hoping to get a couple of pointers on how to do it properly.

From my (albeit limited) understanding of the way HTTP load balancing is implemented in traefik, there are two layers: service-level and server-level load balancing. Service-level load balancing is implemented in traefik itself (e.g., the WRR and Mirror services), whereas at the server level traefik leverages the load balancing code (currently only round robin) of vulcand/oxy.
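As an illustration of the kind of server-level strategy I have in mind, here is a minimal, self-contained Go sketch of a lowest-response-time picker. It deliberately does not touch oxy's or traefik's actual types; everything in it (serverStats, lowestResponseTimePicker, the EWMA smoothing factor) is hypothetical and only meant to make the idea concrete.

package lbsketch

import (
	"net/url"
	"sync"
	"time"
)

// serverStats is a hypothetical per-server record: the backend URL and an
// exponentially weighted moving average (EWMA) of its response times.
type serverStats struct {
	url  *url.URL
	ewma time.Duration
}

// lowestResponseTimePicker selects the server with the smallest EWMA.
type lowestResponseTimePicker struct {
	mu      sync.Mutex
	servers []*serverStats
}

// pick returns the server with the lowest observed average response time.
func (p *lowestResponseTimePicker) pick() *serverStats {
	p.mu.Lock()
	defer p.mu.Unlock()

	var best *serverStats
	for _, s := range p.servers {
		if best == nil || s.ewma < best.ewma {
			best = s
		}
	}
	return best
}

// observe folds the duration of a finished request into the server's EWMA.
func (p *lowestResponseTimePicker) observe(s *serverStats, d time.Duration) {
	p.mu.Lock()
	defer p.mu.Unlock()

	const alpha = 0.3 // smoothing factor, an arbitrary choice for this sketch
	s.ewma = time.Duration(alpha*float64(d) + (1-alpha)*float64(s.ewma))
}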

My first question is therefore: is oxy or traefik the appropriate project for additional server-level load balancing code? Or would both be fine? Is there a particular reason oxy was used for load balancing?

My second question: are there any existing proposals to extend the configuration to accommodate additional load balancing algorithms? I played around with http_config.go and found several ways the ServersLoadBalancer struct could be extended. One could, for example, add an Algorithm container to hold the load balancer configuration:

[http]
  # ...
  [http.services]
    [http.services.myapp]
      [http.services.myapp.loadBalancer]
        [http.services.myapp.loadBalancer.algorithm.lowestResponseTime]
          aggregate = "10"
          epsilon = "10ms"

        [[http.services.myapp.loadBalancer.servers]]
          # ...

The algorithm container could be extended with algorithm-specific configurations, the default being roundRobin, which requires no configuration.
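On the Go side, a sketch of what this could look like in http_config.go (the Algorithm and LowestResponseTime types below are hypothetical, not part of the current dynamic configuration; existing fields of ServersLoadBalancer are elided):

package dynamic

// ServersLoadBalancer, extended with a hypothetical Algorithm container.
type ServersLoadBalancer struct {
	// ... existing fields (Servers, Sticky, HealthCheck, ...) elided ...

	Algorithm *Algorithm `json:"algorithm,omitempty" toml:"algorithm,omitempty" yaml:"algorithm,omitempty"`
}

// Algorithm holds at most one algorithm-specific configuration; leaving it
// out entirely would mean the default roundRobin.
type Algorithm struct {
	LowestResponseTime *LowestResponseTime `json:"lowestResponseTime,omitempty" toml:"lowestResponseTime,omitempty" yaml:"lowestResponseTime,omitempty"`
}

// LowestResponseTime mirrors the TOML example above.
type LowestResponseTime struct {
	Aggregate string `json:"aggregate,omitempty" toml:"aggregate,omitempty" yaml:"aggregate,omitempty"`
	Epsilon   string `json:"epsilon,omitempty" toml:"epsilon,omitempty" yaml:"epsilon,omitempty"`
}

Each additional algorithm would then get its own optional struct inside Algorithm, and the configuration loader could validate that at most one of them is set.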

Hoping to get some feedback on this!

Cheers


I am pretty shocked nobody was interested in the lowestResponseTime configuration option, which is essential for any reliable proxy.

What are you implying? That Traefik is not "reliable" because it has long been missing a feature which you think is essential?