Docker didn’t invent application containers, but it put them on the map. Docker software helped containers become the most important innovation in application delivery since virtual machines. And with the introduction of Swarm mode, Docker added container orchestration to its toolbox, making it possible to build distributed applications from large collections of containerized microservices.
Docker Swarm has a lot going for it. For starters, it’s built into the Docker Engine, so it’s available to anyone who can launch Docker containers. It’s also easy to get up and running in a variety of cloud and on-premises environments, unlike more complex tools such as Kubernetes and Mesos.
Unfortunately, Swarm alone will only get you partway to production. Its “routing mesh” networking model doesn’t supply the availability, security, observability, and control that you’ll need when exposing your applications to the open internet, particularly as they scale to more complex usage scenarios.
Load Balancing and Reverse Proxies
The smart way to gain these capabilities is to pair Docker Swarm with external reverse proxy and load balancing software. These networking tools act as a concierge for requests arriving from the external network. They not only route requests to where they need to go (also known as ingress routing), but they also help ensure that backend applications and services don’t become overwhelmed by traffic spikes.
At its most basic, this type of routing software handles HTTP requests, which sit at Layer 7 of the OSI networking model, known as the application layer. More advanced load balancers extend their capabilities to Layer 4, the transport layer, by also handling TCP requests.
Traefik is one such offering that’s particularly well-suited for use with Docker Swarm. Like Swarm mode itself, Traefik aims to eliminate much of the drudgery of maintaining containerized environments by automating routine configuration tasks. Traefik automatically discovers information about the network and services available in a Docker Swarm cluster, dynamically updating its configuration as the environment changes. This sets it apart from Interlock, the ingress routing component of the commercial Docker Enterprise product, which can become unreliable when its configuration is updated. At the same time, Traefik offers comprehensive observability into the functioning of the network, so operations teams are never left in the dark.
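As a sketch of how this auto-discovery might look in practice, the stack file below deploys Traefik alongside a sample service. Traefik reads labels from the service’s `deploy` section and builds its routing table automatically. The service name, hostname, and image are illustrative, and the flags assume a Traefik v2 release:

```yaml
# Hypothetical stack file for `docker stack deploy` — names are placeholders.
version: "3.7"

services:
  traefik:
    image: traefik:v2.4
    command:
      - --providers.docker.swarmMode=true         # discover Swarm services
      - --providers.docker.exposedByDefault=false # opt-in per service
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      placement:
        constraints:
          - node.role == manager                  # needs the Swarm API

  whoami:
    image: traefik/whoami
    deploy:
      labels:                                     # in Swarm mode, Traefik
        - traefik.enable=true                     # reads deploy-level labels
        - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
        - traefik.http.services.whoami.loadbalancer.server.port=80
```

Scaling the `whoami` service up or down requires no change to Traefik’s configuration; new replicas are picked up as they appear.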
The ability to update network routes efficiently is particularly important for microservices deployments. By their nature, microservices tend to be stateless and short-lived. New versions of services are typically deployed frequently and instances are scaled dynamically to meet demand. Because of this, routers and load balancers should be able to respond quickly as new container instances appear and disappear.
This capability of external routers also makes it easy to test new fixes and feature upgrades in ways that are not possible with Docker Swarm alone, including blue-green deployments, canary releases, and similar methods. By crafting routing rules that split traffic between old and new versions of services in user-defined proportions, it’s possible to roll out updates gradually and even roll them back when necessary with zero downtime.
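As an illustration of such a traffic split, Traefik’s weighted round-robin service (available through its dynamic configuration, here supplied via the file provider) can divide requests between two versions of a service. The service and host names below are hypothetical:

```yaml
# Dynamic configuration loaded through Traefik's file provider.
# "app-v1" and "app-v2" are hypothetical Swarm services already
# discovered by the Docker provider (hence the "@docker" suffix).
http:
  routers:
    app:
      rule: Host(`app.example.com`)
      service: app-canary

  services:
    app-canary:
      weighted:
        services:
          - name: app-v1@docker
            weight: 90   # stable version keeps 90% of requests
          - name: app-v2@docker
            weight: 10   # canary version receives 10%
```

Shifting the weights gradually toward the new version, or back to the old one, is a one-line change with no downtime.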
Another important feature of a reverse proxy is the ability to terminate encrypted TLS traffic. Users have come to expect the HTTPS URL and padlock icon that indicate secure connections. In this aspect, Traefik not only supports TLS but – in keeping with the Docker Swarm ethos of easy configuration – it also supports automated certificate management via a built-in client for Let's Encrypt.
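Configuring this amounts to declaring a certificate resolver in Traefik’s static configuration and referencing it from a router. The sketch below assumes Traefik v2 and the TLS-ALPN challenge; the resolver name, email address, and storage path are placeholders:

```yaml
# Static configuration, passed as flags on the Traefik service:
command:
  - --entrypoints.websecure.address=:443
  - --certificatesresolvers.le.acme.email=admin@example.com
  - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
  - --certificatesresolvers.le.acme.tlschallenge=true

# A routed service then opts its router into TLS with a label, e.g.:
#   - traefik.http.routers.whoami.tls.certresolver=le
```

Traefik requests certificates on demand and renews them automatically before they expire.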
Future-Proof Your Apps
Separating the functions of networking and container orchestration has benefits for application lifecycles, too. As an application scales and evolves, inevitably its infrastructure needs will also change. Because Traefik works consistently across on-premises and public cloud environments, porting your Docker Swarm clusters when the time comes is simple, without dramatic configuration changes.
The commercial product Traefik Enterprise Edition (TraefikEE) also includes features aimed at enterprise deployments. For example, it supports a variety of identity and authentication protocols, including LDAP, JWT, and OpenID Connect. It runs as a cluster for high availability, including clustered support for Let’s Encrypt. And it’s fully compliant with Docker’s Universal Control Plane, giving operations teams centralized control of networking as part of the underlying infrastructure.
Nor should we ignore the possibility that it may become necessary to move applications away from Docker Swarm to a more feature-rich orchestrator, such as Kubernetes. A Traefik configuration can be ported from Docker Swarm to Kubernetes without significant changes, meaning the choice you make for routing and load balancing today will not negatively affect your future plans. By comparison, Interlock only works with Docker Enterprise.
The Route Forward
Containers are likely to remain the dominant means of application deployment for years to come, and with good reason. And as the microservices model of application development gains traction, container orchestration will increasingly become an essential component of IT infrastructure.
Networking, on the other hand, is essential now and for the future. External reverse proxy and load balancing software offer networking features and control that an orchestration layer like Docker Swarm cannot provide on its own. What’s more, the auto-discovery and configuration capabilities of Traefik make it an ideal partner for Docker Swarm and Kubernetes alike.
To learn more about using Traefik as an ingress proxy and load balancer for Docker Swarm environments, join our next webinar, “Application Routing for Docker Swarm,” airing live on July 22, 2020.
This is a companion discussion topic for the original entry at https://containo.us/blog/traefik-and-docker-swarm-a-dynamic-duo-for-cloud-native-container-networking/