forwardedHeaders.insecure to true not working

Hi, I have nginx terminating SSL and forwarding to Traefik in a k3s cluster. Traefik is overwriting the X-Forwarded-* headers, so my services see X-Forwarded-Proto: http instead of the value passed through from nginx.
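
For context, the relevant bit of my nginx config looks roughly like this (the upstream name traefik_backend is illustrative, not my exact setup):

location / {
    # Forward the original client scheme, host, and IP on to Traefik.
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host  $host;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_pass http://traefik_backend;
}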

How do I set forwardedHeaders.insecure to true using the Traefik Helm chart that K3s installs by default? Apologies if this should be asked over at Rancher, but I thought I would start here first.

I have created a traefik-config.yaml file in /var/lib/rancher/k3s/server/manifests/ with:

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    entryPoints:
      web:
        address: ":80"
        forwardedHeaders:
          insecure: true

This appears to be picked up: the helm-install-traefik pod detects the config change and re-runs, but I still never get the forwarded headers from nginx.
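
As a sanity check, the container arguments on the rendered deployment can be inspected to see whether the option actually reached Traefik (assuming the default deployment name traefik in kube-system):

kubectl -n kube-system get deployment traefik \
  -o jsonpath='{.spec.template.spec.containers[0].args}'

If --entryPoints.web.forwardedHeaders.insecure is missing from that output, that would suggest the chart never translated the value into a flag, rather than Traefik misapplying it.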

Any help would be appreciated.

This is the applied config, but I am still not getting the forwarded headers from the upstream nginx:

additionalArguments: []
additionalVolumeMounts: []
affinity: {}
autoscaling:
  enabled: false
deployment:
  additionalContainers: []
  additionalVolumes: []
  annotations: {}
  enabled: true
  imagePullSecrets: []
  initContainers: []
  kind: Deployment
  labels: {}
  podAnnotations: {}
  podLabels: {}
  replicas: 1
entryPoints:
  web:
    forwardedHeaders:
      insecure: true
env: []
envFrom: []
experimental:
  kubernetesGateway:
    appLabelSelector: traefik
    certificates: []
    enabled: false
  plugins:
    enabled: false
forwardedHeaders:
  enabled: true
  insecure: true
  trustedIPs:
  - 192.168.1.0/16
global:
  systemDefaultRegistry: 0
globalArguments:
- --global.checknewversion
- --global.sendanonymoususage
hostNetwork: false
image:
  name: rancher/library-traefik
  pullPolicy: IfNotPresent
  tag: ""
ingressClass:
  enabled: false
  isDefaultClass: false
ingressRoute:
  dashboard:
    annotations: {}
    enabled: true
    labels: {}
logs:
  access:
    enabled: false
    fields:
      general:
        defaultmode: keep
        names: {}
      headers:
        defaultmode: drop
        names: {}
    filters: {}
  general:
    level: ERROR
nodeSelector: {}
persistence:
  accessMode: ReadWriteOnce
  annotations: {}
  enabled: false
  name: data
  path: /data
  size: 128Mi
pilot:
  enabled: false
  token: ""
podAnnotations:
  prometheus.io/port: "8082"
  prometheus.io/scrape: "true"
podDisruptionBudget:
  enabled: false
podSecurityContext:
  fsGroup: 65532
podSecurityPolicy:
  enabled: false
ports:
  traefik:
    expose: false
    exposedPort: 9000
    port: 9000
    protocol: TCP
  web:
    expose: true
    exposedPort: 80
    forwardedHeaders:
      insecure: true
    port: 8000
    protocol: TCP
  websecure:
    expose: true
    exposedPort: 443
    forwardedHeaders:
      insecure: true
    port: 8443
    protocol: TCP
    tls:
      certResolver: ""
      domains: []
      enabled: true
      options: ""
priorityClassName: system-cluster-critical
providers:
  kubernetesCRD:
    enabled: true
    namespaces: []
  kubernetesIngress:
    enabled: true
    namespaces: []
    publishedService:
      enabled: true
rbac:
  enabled: true
  namespaced: false
resources: {}
rollingUpdate:
  maxSurge: 1
  maxUnavailable: 1
securityContext:
  capabilities:
    drop:
    - ALL
  readOnlyRootFilesystem: true
  runAsGroup: 65532
  runAsNonRoot: true
  runAsUser: 65532
service:
  annotations: {}
  enabled: true
  externalIPs: []
  labels: {}
  loadBalancerSourceRanges: []
  spec: {}
  type: LoadBalancer
serviceAccount:
  name: ""
serviceAccountAnnotations: {}
ssl:
  enabled: true
  permanentRedirect: false
tlsOptions: {}
tolerations:
- key: CriticalAddonsOnly
  operator: Exists
- effect: NoSchedule
  key: node-role.kubernetes.io/control-plane
  operator: Exists
- effect: NoSchedule
  key: node-role.kubernetes.io/master
  operator: Exists
volumes: []

I am having the same issue; maybe there's something missing. All requests come from 10.42.0.1, which I think is the host IP of my single node (also the control plane) in a k3s Kubernetes deployment.
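
Given that, I was considering trusting that source range instead of going fully insecure; a rough sketch of what I mean (10.42.0.0/16 is the default k3s pod CIDR, so adjust for your cluster; I have not verified this yet):

additionalArguments:
  - "--entryPoints.web.forwardedHeaders.trustedIPs=10.42.0.0/16"
  - "--entryPoints.websecure.forwardedHeaders.trustedIPs=10.42.0.0/16"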

You may already have solved this. I'm just gonna share my solution for closure.

All of the important bits are in additionalArguments. It seems it does not work when the forwardedHeaders settings are placed in the ports section of the YAML.

# helm upgrade --namespace=traefik --values=traefik/traefik-values.yaml traefik traefik/traefik

image:
  name: traefik
  pullPolicy: Always
  tag: v2.5.3

pilot:
  enabled: false

ports:
  web:
    port: 8000
    expose: true
    exposedPort: 80
    protocol: TCP
    tls:
      passthrough: true
      enabled: false
  websecure:
    port: 8443
    expose: true
    exposedPort: 443
    protocol: TCP
    tls:
      passthrough: true
      enabled: false

additionalArguments:
  - "--entryPoints.web.forwardedHeaders.insecure"
  - "--entrypoints.websecure.forwardedHeaders.insecure"
  - "--entryPoints.web.proxyProtocol.insecure"
  - "--entryPoints.websecure.proxyProtocol.insecure"
  - "--log.level=error"