Migrating from 1.7 to 2.8.5 requires some additional middleware configuration.
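In 1.7 we enabled compression directly on the entrypoint; as far as I understand the v2 migration docs, compression is now a dedicated Compress middleware that lives in dynamic configuration and gets attached to the entrypoint by a <namespace>-<name>@<provider> reference in the static configuration, i.e. something like this sketch (the full manifests follow below):

entrypoints:
  http:
    address: ":33180"
    http:
      middlewares:
        - traefik-internal-compress@kubernetescrd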
However, I'm unable to get a middleware referenced from the static configuration to resolve when using the Kubernetes Ingress provider. Below is a simplified version of our configuration that reproduces the error.
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: traefik-2-internal
  namespace: traefik-internal
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-2-internal
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.containo.us
    resources:
      - middlewares
      - middlewaretcps
      - ingressroutes
      - traefikservices
      - ingressroutetcps
      - ingressrouteudps
      - tlsoptions
      - tlsstores
      - serverstransports
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-2-internal
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-2-internal
subjects:
  - kind: ServiceAccount
    name: traefik-2-internal
    namespace: traefik-internal
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-2-internal-config
  namespace: traefik-internal
data:
  traefik.yml: |
    accesslog:
      format: json
      bufferingsize: 100
    log:
      level: INFO
      format: json
    entrypoints:
      http:
        address: ":33180"
        http:
          middlewares:
            # reference to the Middleware CRD object defined further down,
            # using the <namespace>-<name>@<provider> format
            - traefik-internal-compress@kubernetescrd
        transport:
          respondingtimeouts:
            idletimeout: 902s
            readtimeout: 0s
            writetimeout: 0s
          lifecycle:
            gracetimeout: 902s
            requestacceptgracetimeout: 30s
    providers:
      # only the ingress provider is enabled; Ingress objects are matched by label
      kubernetesingress:
        labelselector: traefik=internal-https
        ingressendpoint:
          hostname: internalalb.elb.amazonaws.com
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik-2-internal
  namespace: traefik-internal
  labels:
    app: traefik-2
spec:
  selector:
    matchLabels:
      name: traefik-2-internal
  minReadySeconds: 15
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: traefik-2
        name: traefik-2-internal
    spec:
      serviceAccountName: traefik-2-internal
      priorityClassName: high-priority
      securityContext:
        runAsUser: 1000
        fsGroup: 2000
        runAsNonRoot: true
      containers:
        - image: traefik:2.8.5
          name: traefik-2-internal
          ports:
            - name: http
              containerPort: 33180
              hostPort: 33180
          volumeMounts:
            - name: config-volume
              mountPath: /etc/traefik
      volumes:
        - name: config-volume
          configMap:
            name: traefik-2-internal-config
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: compress
  namespace: traefik-internal
spec:
  compress: {}
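For what it's worth, the Middleware object itself exists in the cluster (it shows up under kubectl get middlewares.traefik.containo.us -n traefik-internal), so this seems to be about Traefik not seeing it rather than the object being missing.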
Then, for each matched Ingress object, Traefik prints the following error:
{"entryPointName":"http","level":"error","msg":"middleware \"traefik-internal-compress@kubernetescrd\" does not exist","routerName":"http-my-fine-service-com@kubernetes",...}
The corresponding Ingress object is defined as:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-fine-service-ingress
  labels:
    traefik: internal-https
spec:
  rules:
    - host: my-fine-service.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-fine-service
                port:
                  name: http
As far as I can see, I've formatted the middleware reference correctly, following the namespace-qualified naming scheme for CRD-defined middlewares. I've been banging my head against this for a day now and can't pinpoint the issue.
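For completeness, my understanding of the cross-provider reference format is <kubernetes-namespace>-<middleware-name>@kubernetescrd, which for the Middleware above yields traefik-internal-compress@kubernetescrd. If I read the docs correctly, the same reference should also work as a per-Ingress annotation; sketched below only to illustrate the naming (the entrypoint-level attachment is what we actually need):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-fine-service-ingress
  labels:
    traefik: internal-https
  annotations:
    # hypothetical alternative: attach the middleware per Ingress instead of per entrypoint
    traefik.ingress.kubernetes.io/router.middlewares: traefik-internal-compress@kubernetescrd
spec:
  rules:
    - host: my-fine-service.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-fine-service
                port:
                  name: http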