When deployed as a Kubernetes ingress controller, Traefik can process and route many thousands of requests without complaint. And yet, for the operations team, visibility into what's happening behind the scenes is essential. Is the application healthy? Is it working as intended? Monitoring distributed systems is one of the core principles of the set of practices known as site reliability engineering (SRE).
This first in a series of posts on Traefik and SRE techniques explores how Traefik's built-in logging features can help provide needed visibility. When combined with the set of open-source projects known as the Elastic Stack – including Elasticsearch, Kibana, and Filebeat, among others – Traefik becomes part of a rich set of tools for network log analysis and visualization.
Prerequisites
If you'd like to follow along with this tutorial on your own machine, you'll need a few things first:
- A Kubernetes cluster running at localhost. One way to achieve this is to create a local cluster running in Docker containers using K3d (making sure to disable the default Traefik 1.7 ingress controller):
  k3d cluster create --k3s-server-arg "--disable=traefik" -p "80:80@loadbalancer"
- Traefik 2.x installed and running on the cluster. The recommended method is to use the official Helm chart. For this demo, make sure the Traefik pods are in the kube-system namespace:
  helm install traefik traefik/traefik -n kube-system
- The kubectl command-line tool installed and configured to access your cluster. (If you created your cluster using K3d and the instructions above, this will already be done for you.)
You will also need the set of configuration files that accompany this article, which are available on GitHub:
git clone https://github.com/traefik-tech-blog/traefik-sre-logging.git
Set Up Elastic Logging
Traefik can generate detailed access logs, but in raw form they're of limited utility. This tutorial demonstrates how to ingest these logs into Elasticsearch for lookup and aggregation, which in turn will enable you to create visualizations using Kibana charts.
This means you must first deploy Elasticsearch, Kibana, and Filebeat to your cluster, which you can do using their respective Helm charts.
Deploy Elastic
Add and update the Elastic Helm charts repository using the following commands:
$ helm repo add elastic https://helm.elastic.co
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "traefik" chart repository
...Successfully got an update from the "elastic" chart repository
Update Complete. ⎈Happy Helming!⎈
Elasticsearch requires a volume in which to store logs. The default Helm configuration specifies a 30GiB volume with standard as the storageClassName. Unfortunately, while the standard StorageClass is available on Google Cloud Platform, it is not available on K3s by default. To find an alternative, look up which StorageClasses are available on your cluster:
$ kubectl get storageClass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 97m
The following configuration sets up Elasticsearch to use the local-path StorageClass with the following attributes:
- 100MB storage size
- Reduced CPU and memory limits
# elastic-values.yaml
# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "100m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"

# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "local-path"
  resources:
    requests:
      storage: 100M
Deploy Elasticsearch with the above configuration using Helm:
$ helm install elasticsearch elastic/elasticsearch -f ./elastic-values.yaml
NAME: elasticsearch
LAST DEPLOYED: Sun Jan 10 12:23:30 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Watch all cluster members come up.
$ kubectl get pods --namespace=default -l app=elasticsearch-master -w
2. Test cluster health using Helm test.
$ helm test elasticsearch
Note that it may take several minutes for the Elasticsearch pods to become available, so be patient.
Deploy Kibana
The Elastic repository also provides Helm charts for Kibana. As with Elasticsearch, you'll want to configure Kibana with the following values (sketched below the list):
- 100MB storage size on the local-path StorageClass
- Reduced CPU and memory limits
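The kibana-values.yaml file in the accompanying repository carries these settings. As a rough, minimal sketch of the resource portion (the specific requests and limits here are assumptions; refer to the repository file for the exact values):
# kibana-values.yaml (sketch)
# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "100m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"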
Deploy Kibana with the above configuration using Helm:
$ helm install kibana elastic/kibana -f ./kibana-values.yaml
NAME: kibana
LAST DEPLOYED: Sun Jan 10 14:50:50 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
Once all the pods are up and running, you'll need to deploy an IngressRoute to expose the Kibana dashboard on your cluster before you can access it:
$ kubectl apply -f kibana-ingressroute.yaml
ingressroute.traefik.containo.us/kibana created
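For reference, a minimal IngressRoute along the lines of the one in kibana-ingressroute.yaml might look like the following. The service name kibana-kibana and port 5601 follow the Kibana chart defaults and should be treated as assumptions; the exact manifest is in the repository.
# kibana-ingressroute.yaml (sketch)
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kibana
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`kibana.localhost`)
      kind: Rule
      services:
        - name: kibana-kibana
          port: 5601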
To test the configuration, try accessing the dashboard with your web browser at kibana.localhost:
Deploy Filebeat
Next, deploy Filebeat as a DaemonSet to forward all logs to Elasticsearch. As with the other components, you'll configure Filebeat with the following values (sketched below the list):
- 100MB storage size on the local-path StorageClass
- Reduced CPU and memory limits
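The filebeat-values.yaml file in the accompanying repository carries these settings. A rough sketch of the resource portion, with the specific numbers as assumptions:
# filebeat-values.yaml (sketch)
# Keep the Filebeat pods small.
resources:
  requests:
    cpu: "100m"
    memory: "100Mi"
  limits:
    cpu: "1000m"
    memory: "200Mi"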
Deploy Filebeat with the above configuration options using Helm:
$ helm install filebeat elastic/filebeat -f ./filebeat-values.yaml
NAME: filebeat
LAST DEPLOYED: Sun Jan 10 16:23:55 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Watch all containers come up.
$ kubectl get pods --namespace=default -l app=filebeat-filebeat -w
Demo Application
Finally, now that the Elastic Stack components are installed on your cluster, you'll need an application to monitor. The HttpBin service provides many endpoints that you can use to generate various types of traffic, which is useful for populating visualizations. You can deploy the service and the appropriate IngressRoute using a single configuration file:
$ kubectl apply -f httpbin.yaml
deployment.apps/httpbin created
service/httpbin created
ingressroute.traefik.containo.us/httpbin created
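If you're curious what httpbin.yaml contains, a minimal sketch might look like the following. The image, port, and Host rule are assumptions inferred from the resources created above; the exact manifest is in the repository.
# httpbin.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
        - name: httpbin
          image: kennethreitz/httpbin
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  selector:
    app: httpbin
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: httpbin
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`httpbin.localhost`)
      kind: Rule
      services:
        - name: httpbin
          port: 80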
Once the pods are created, you can access the application with your browser at httpbin.localhost and try some requests:
Connect Traefik and Kibana
Now it's time to link Traefik and Kibana so you can interpret Traefik's logs in a meaningful way. In these next steps, you'll configure both applications to extract the information you want from Traefik and get it ready to visualize as Kibana graphs.
Configure Traefik Access Logs
Traefik access logs contain detailed information about every request it handles. By default, these logs are not enabled. When they are enabled, Traefik writes them to stdout, which intermingles the access logs with Traefik-generated application logs.
To address this issue, you should update the deployment to generate logs at /data/access.log and ensure that they are written in JSON format. Here is what that configuration looks like:
# patch-traefik.yaml
- args:
    - --global.checknewversion
    - --global.sendanonymoususage
    - --entryPoints.traefik.address=:9000/tcp
    - --entryPoints.web.address=:8000/tcp
    - --entryPoints.websecure.address=:8443/tcp
    - --api.dashboard=true
    - --accesslog
    - --accesslog.format=json
    - --accesslog.filepath=/data/access.log
    - --ping=true
    - --providers.kubernetescrd
    - --providers.kubernetesingress
  name: traefik
Once the logs are written to a file, they must also be made available to Filebeat. There are many ways to do this; since Filebeat is deployed as a DaemonSet that collects container output, you can add a simple sidecar container that tails access.log to stdout. This is a minimal setup:
# patch-traefik.yaml
- args:
    - /bin/sh
    - -c
    - tail -n+1 -F /data/access.log
  image: busybox
  imagePullPolicy: Always
  name: stream-accesslog
  resources: {}
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
  volumeMounts:
    - mountPath: /data
      name: data
Patch the Traefik deployment to make all of the above changes using the provided configuration file:
$ kubectl patch deployment traefik -n kube-system --patch-file patch-traefik.yaml
deployment.apps/traefik patched
Kibana Dashboard
Next, to begin building your dashboard in Kibana you'll need to configure index patterns. You can do this with the following steps.
First, open the menu (three horizontal lines) in the top left corner of the screen and choose Kibana > Overview. From the Kibana Overview page, select "Add your data" and click "Create index pattern":
Define the index pattern named filebeat-** to match the filebeat indexes:
Click "Next step" and select @timestamp
as the primary time field from the drop-down menu:
When you click "Create index pattern", the index summary page will show the updated fields. You will be able to use these fields in Kibana queries on the dashboard page:
Now, if you click the menu in the top-left corner of the screen and choose Kibana > Discover, you should see a preliminary graph of all of the ingested logs.
To narrow them down to just the useful ones, choose "Add filter", enter kubernetes.pod.name into the Field drop-down, choose "is" from the Operator drop-down, and select the appropriate traefik pod name from the Value drop-down to see just the log entries created by it:
At this stage, however, if you expand any given entry (by clicking the arrow to the left of its timestamp), you'll see that the entire JSON log entry is stored as a single field called message, which makes it impossible to query individual Traefik log fields.
To fix this, you'll need to have Filebeat ingest the complete message as separate JSON fields. There are many ways to accomplish this, but one is to update the Filebeat configuration to use the decode_json_fields processor, like so:
# filebeat-chain-values.yaml
- decode_json_fields:
    fields: ["message"]
    process_array: false
    max_depth: 1
    target: ""
    overwrite_keys: false
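The snippet above shows only the processor entry itself. In the chart's values file it sits inside the processors list of the container input under filebeatConfig. A sketch of how the complete file might look, modeled on the chart's default configuration (treat the input paths and output settings as assumptions):
# filebeat-chain-values.yaml (fuller sketch)
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
          - add_kubernetes_metadata:
              host: ${NODE_NAME}
              matchers:
                - logs_path:
                    logs_path: "/var/log/containers/"
          - decode_json_fields:
              fields: ["message"]
              process_array: false
              max_depth: 1
              target: ""
              overwrite_keys: false
    output.elasticsearch:
      host: '${NODE_NAME}'
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'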
You can update the processor chain with the above configuration options by running helm upgrade with the supplied configuration file:
$ helm upgrade filebeat elastic/filebeat -f ./filebeat-chain-values.yaml
Release "filebeat" has been upgraded. Happy Helming!
NAME: filebeat
LAST DEPLOYED: Sun Jan 10 18:04:54 2021
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
1. Watch all containers come up.
$ kubectl get pods --namespace=default -l app=filebeat-filebeat -w
Now Kibana will expose each JSON field of the log entries as a separate query field. But there's still a problem! You'll notice yellow triangles next to the fields, and when you hover the cursor over them, you'll see a warning that no cached mapping exists for the field:
To fix this, navigate back to the index summary page for your filebeat-** index pattern under Management > Stack Management > Kibana > Index Patterns. Next to the red trash can icon, there is a refresh icon. Click it to refresh the field list.
As you will see when you return to the Kibana > Discover page, all Traefik-generated log fields are now available for querying in Kibana.
Simulate Load
Logs are meaningless without events to record, so go ahead and play with the HttpBin service you installed earlier by accessing it at httpbin.localhost to generate some traffic, or try executing scripts like these to access the service in loops:
for ((i=1;i<=10;i++)); do curl -s -X GET "http://localhost/get" -H "accept: application/json" -H "host: httpbin.localhost" > /dev/null; done
for ((i=1;i<=10;i++)); do curl -s -X POST "http://localhost/post" -H "accept: application/json" -H "host: httpbin.localhost" > /dev/null; done
for ((i=1;i<=20;i++)); do curl -s -X PATCH "http://localhost/patch" -H "accept: application/json" -H "host: httpbin.localhost" > /dev/null; done
Kibana Charts
Now you can begin creating some visualizations. Traefik-generated access logs contain a diverse set of fields. To generate a chart of the breakdown of overall request load, navigate to Kibana > Visualize, choose "Create new visualization," and click "Go to lens."
From there, find the field "RequestPath" from the selections on the left and drag it to the square in the middle of the screen. Choose "Donut" as the style of graph from the drop-down, and you should see a graph that looks something like this:
This chart shows all requests handled by Traefik. If you'd like to narrow it down, you can add filters. For example, choose "RouterName", select "exists" as your operator, and click "Save". Then choose "RequestHost", select "is" as your operator, filter by httpbin.localhost from the drop-down, and click "Save" again. Now your chart will look something like this:
Traefik has also recorded the duration of every request served by the HttpBin application. Try dragging and dropping the "Duration" field onto your chart and selecting "Bar Graph" as your chart type:
Summary
This simple example serves to demonstrate how Traefik's comprehensive logging capabilities, combined with the open-source Elastic Stack, can be a powerful tool for visualizing and understanding the health and performance of services running on Kubernetes clusters. Many more graphs are possible than the ones shown here, so dive in and explore.
Future installments of this SRE series will cover how to use other open source tools to monitor Traefik metrics and do tracing across microservices, so tune back in over the coming weeks.
As usual, if you love Traefik and there are features you'd like to see in future releases, open a feature request or get in touch on our community forums. And if you'd like to dig deeper into how your Traefik instances are operating, check out Traefik Pilot, our SaaS monitoring and management platform.