HTTP works, but when I visit HTTPS it gives me: 404 page not found

Hi everyone, hope someone can help!

I have installed Traefik via the Helm chart.
I deployed a simple PHP app that just displays some text.
I can visit http://mydomain and it shows the text.
When I visit https://mydomain, however, it says "404 page not found".
The browser also warns me first that there is no valid cert.

I do not have HTTPS/Let's Encrypt configured, but I think the page should still show me the text once I accept the browser's warning.

Here is the application's Ingress file:


kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  namespace: example
  name: example-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web, websecure
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-app
          servicePort: 80
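Side note: the `extensions/v1beta1` Ingress API is deprecated and was removed in Kubernetes 1.22. A sketch of the same Ingress on the `networking.k8s.io/v1` API (assuming Kubernetes 1.19+; host, service, and entrypoints copied from above, with the space after the comma in the entrypoints annotation dropped to be safe):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: example
  name: example-ingress
  annotations:
    # no space after the comma, to be safe
    traefik.ingress.kubernetes.io/router.entrypoints: web,websecure
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-app
            port:
              number: 80
```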

and the service file:


apiVersion: v1
kind: Service
metadata:
  namespace: example
  name: 'example-app'
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      name: web
      port: 80
      targetPort: 80
    - protocol: TCP
      name: websecure
      port: 443
      targetPort: 443
  selector:
    app: 'example-app'

and the app's ReplicaSet file:


apiVersion: apps/v1
kind: ReplicaSet
metadata:
  namespace: example
  name: 'example-app-main'
  labels:
    app: 'example-app-main'
    tier: 'frontend'
spec:
  replicas: 1
  selector:
    matchLabels:
      app: 'example-app'
  template:
    metadata:
      labels:
        app: 'example-app'
    spec:
      containers:
      - name: example-app-container
        image: richarvey/nginx-php-fpm:latest
        imagePullPolicy: Always
        env: details-here
        ports:
          - containerPort: 80

The Traefik Helm values file is here:


# Default values for Traefik
image:
  name: traefik
  tag: 2.2.8
  pullPolicy: IfNotPresent

#
# Configure the deployment
#
deployment:
  enabled: true
  # Number of pods of the deployment
  replicas: 1
  # Additional deployment annotations (e.g. for jaeger-operator sidecar injection)
  annotations: {}
  # Additional pod annotations (e.g. for mesh injection or prometheus scraping)
  podAnnotations: {}
  # Additional containers (e.g. for metric offloading sidecars)
  additionalContainers: []
  # Additional initContainers (e.g. for setting file permission as shown below)
  initContainers: []
    # The "volume-permissions" init container is required if you run into permission issues.
    # Related issue: https://github.com/containous/traefik/issues/6972
    # - name: volume-permissions
    #   image: busybox:1.31.1
    #   command: ["sh", "-c", "chmod -Rv 600 /data/*"]
    #   volumeMounts:
    #     - name: data
    #       mountPath: /data
  # Custom pod DNS policy. Apply if `hostNetwork: true`
  # dnsPolicy: ClusterFirstWithHostNet

# Pod disruption budget
podDisruptionBudget:
  enabled: false
  # maxUnavailable: 1
  # minAvailable: 0

# Create an IngressRoute for the dashboard
ingressRoute:
  dashboard:
    enabled: true
    # Additional ingressRoute annotations (e.g. for kubernetes.io/ingress.class)
    annotations: {}
    # Additional ingressRoute labels (e.g. for filtering IngressRoute by custom labels)
    labels: {}

rollingUpdate:
  maxUnavailable: 1
  maxSurge: 1


#
# Configure providers
#
providers:
  kubernetesCRD:
    enabled: true
  kubernetesIngress:
    enabled: true
    # IP used for Kubernetes Ingress endpoints
    publishedService:
      enabled: false
      # Published Kubernetes Service to copy status from. Format: namespace/servicename
      # By default this Traefik service
      # pathOverride: ""

#
# Add volumes to the traefik pod.
# This can be used to mount a cert pair or a configmap that holds a config.toml file.
# After the volume has been mounted, add the configs into traefik by using the `additionalArguments` list below, eg:
# additionalArguments:
# - "--providers.file.filename=/config/dynamic.toml"
volumes: []
# - name: public-cert
#   mountPath: "/certs"
#   type: secret
# - name: configs
#   mountPath: "/config"
#   type: configMap

# Logs
# https://docs.traefik.io/observability/logs/
logs:
  # Traefik logs concern everything that happens to Traefik itself (startup, configuration, events, shutdown, and so on).
  general:
    # By default, the logs use a text format (common), but you can
    # also ask for the json format in the format option
    # format: json
    # By default, the level is set to ERROR. Alternative logging levels are DEBUG, PANIC, FATAL, ERROR, WARN, and INFO.
    level: ERROR
  access:
    # To enable access logs
    enabled: true
    # By default, logs are written using the Common Log Format (CLF).
    # To write logs in JSON, use json in the format option.
    # If the given format is unsupported, the default (CLF) is used instead.
    # format: json
    # To write the logs in an asynchronous fashion, specify a bufferingSize option.
    # This option represents the number of log lines Traefik will keep in memory before writing
    # them to the selected output. In some cases, this option can greatly help performances.
    # bufferingSize: 100
    # Filtering https://docs.traefik.io/observability/access-logs/#filtering
    filters: {}
      # statuscodes: "200,300-302"
      # retryattempts: true
      # minduration: 10ms
    # Fields
    # https://docs.traefik.io/observability/access-logs/#limiting-the-fieldsincluding-headers
    fields:
      general:
        defaultmode: keep
        names: {}
          # Examples:
          # ClientUsername: drop
      headers:
        defaultmode: drop
        names: {}
          # Examples:
          # User-Agent: redact
          # Authorization: drop
          # Content-Type: keep

globalArguments:
  - "--global.checknewversion"
  - "--global.sendanonymoususage"

#
# Configure Traefik static configuration
# Additional arguments to be passed at Traefik's binary
# All available options available on https://docs.traefik.io/reference/static-configuration/cli/
## Use curly braces to pass values: `helm install --set="additionalArguments={--providers.kubernetesingress.ingressclass=traefik-internal,--log.level=DEBUG}"`
additionalArguments: []
#  - "--providers.kubernetesingress.ingressclass=traefik-internal"
#  - "--log.level=DEBUG"

# Environment variables to be passed to Traefik's binary
env: []
# - name: SOME_VAR
#   value: some-var-value
# - name: SOME_VAR_FROM_CONFIG_MAP
#   valueFrom:
#     configMapRef:
#       name: configmap-name
#       key: config-key
# - name: SOME_SECRET
#   valueFrom:
#     secretKeyRef:
#       name: secret-name
#       key: secret-key

envFrom: []
# - configMapRef:
#     name: config-map-name
# - secretRef:
#     name: secret-name

# Configure ports
ports:
  # The name of this one can't be changed as it is used for the readiness and
  # liveness probes, but you can adjust its config to your liking
  traefik:
    port: 9000
    # Use hostPort if set.
    # hostPort: 9000
    #
    # Use hostIP if set. If not set, Kubernetes will default to 0.0.0.0, which
    # means it's listening on all your interfaces and all your IPs. You may want
    # to set this value if you need traefik to listen on specific interface
    # only.
    # hostIP: 192.168.100.10

    # Defines whether the port is exposed if service.type is LoadBalancer or
    # NodePort.
    #
    # You SHOULD NOT expose the traefik port on production deployments.
    # If you want to access it from outside of your cluster,
    # use `kubectl proxy` or create a secure ingress
    expose: false
    # The exposed port for this service
    exposedPort: 9000
    # The port protocol (TCP/UDP)
    protocol: TCP
  web:
    port: 8000
    # hostPort: 8000
    expose: true
    exposedPort: 80
    # The port protocol (TCP/UDP)
    protocol: TCP
    # Use nodeport if set. This is useful if you have configured Traefik in a
    # LoadBalancer
    # nodePort: 32080
    # Port Redirections
    # Added in 2.2, you can make permanent redirects via entrypoints.
    # https://docs.traefik.io/routing/entrypoints/#redirection
    # redirectTo: websecure
  websecure:
    port: 8443
    # hostPort: 8443
    expose: true
    exposedPort: 443
    # The port protocol (TCP/UDP)
    protocol: TCP
    # nodePort: 32443

# Options for the main traefik service, where the entrypoints traffic comes
# from.
service:
  enabled: true
  type: LoadBalancer
  # Additional annotations (e.g. for cloud provider specific config)
  annotations: {}
  # Additional entries here will be added to the service spec. Cannot contain
  # type, selector or ports entries.
  spec: {}
    # externalTrafficPolicy: Cluster
    # loadBalancerIP: "1.2.3.4"
    # clusterIP: "2.3.4.5"
  loadBalancerSourceRanges: []
    # - 192.168.0.1/32
    # - 172.16.0.0/16
  externalIPs: []
    # - 1.2.3.4

## Create HorizontalPodAutoscaler object.
##
autoscaling:
  enabled: false
#   minReplicas: 1
#   maxReplicas: 10
#   metrics:
#   - type: Resource
#     resource:
#       name: cpu
#       targetAverageUtilization: 60
#   - type: Resource
#     resource:
#       name: memory
#       targetAverageUtilization: 60

# Enable persistence using Persistent Volume Claims
# ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
# After the pvc has been mounted, add the configs into traefik by using the `additionalArguments` list below, eg:
# additionalArguments:
# - "--certificatesresolvers.le.acme.storage=/data/acme.json"
# Let's Encrypt configuration:
# - "--certificatesresolvers.le.acme.email=letsencrypt@jotcode.com"
# - "--certificatesresolvers.le.acme.storage=acme.json"
# - "--certificatesresolvers.le.acme.tlschallenge"
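Those Let's Encrypt flags belong in the top-level `additionalArguments` list rather than inside the persistence section. A sketch of the relevant fragment with them placed where the chart expects them (email copied from above; storage pointed at the persisted `/data` path as the chart comment suggests, which assumes `persistence.enabled: true`):

```yaml
# Static-configuration flags the chart passes to the Traefik binary.
additionalArguments:
  - "--certificatesresolvers.le.acme.email=letsencrypt@jotcode.com"
  # Store issued certificates on the persistent volume mounted at /data
  - "--certificatesresolvers.le.acme.storage=/data/acme.json"
  - "--certificatesresolvers.le.acme.tlschallenge"
```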
# It will persist TLS certificates.
persistence:
  enabled: false
#  existingClaim: ""
  accessMode: ReadWriteOnce
  size: 128Mi
  # storageClass: ""
  path: /data
  annotations: {}
  # subPath: "" # only mount a subpath of the Volume into the pod

# If hostNetwork is true, runs traefik in the host network namespace
# To prevent unschedulable pods due to port collisions, if hostNetwork=true
# and replicas>1, a pod anti-affinity is recommended and will be set if the
# affinity is left as default.
hostNetwork: false

# Whether Role Based Access Control objects like roles and rolebindings should be created
rbac:
  enabled: true

  # If set to false, installs ClusterRole and ClusterRoleBinding so Traefik can be used across namespaces.
  # If set to true, installs namespace-specific Role and RoleBinding and requires provider configuration be set to that same namespace
  namespaced: false

# The service account the pods will use to interact with the Kubernetes API
serviceAccount:
  # If set, an existing service account is used
  # If not set, a service account is created automatically using the fullname template
  name: ""

# Additional serviceAccount annotations (e.g. for oidc authentication)
serviceAccountAnnotations: {}

resources: {}
  # requests:
  #   cpu: "100m"
  #   memory: "50Mi"
  # limits:
  #   cpu: "300m"
  #   memory: "150Mi"
affinity: {}
# # This example pod anti-affinity forces the scheduler to put traefik pods
# # on nodes where no other traefik pods are scheduled.
# # It should be used when hostNetwork: true to prevent port conflicts
#   podAntiAffinity:
#     requiredDuringSchedulingIgnoredDuringExecution:
#     - labelSelector:
#         matchExpressions:
#         - key: app
#           operator: In
#           values:
#           - {{ template "traefik.name" . }}
#       topologyKey: failure-domain.beta.kubernetes.io/zone
nodeSelector: {}
tolerations: []

# Pods can have priority.
# Priority indicates the importance of a Pod relative to other Pods.
priorityClassName: ""

# Set the container security context
# To run the container with ports below 1024 this will need to be adjust to run as root
securityContext:
  capabilities:
    drop: [ALL]
  readOnlyRootFilesystem: true
  runAsGroup: 65532
  runAsNonRoot: true
  runAsUser: 65532

podSecurityContext:
  fsGroup: 65532

I have a very similar issue (but with Let's Encrypt via cert-manager, and I am getting a valid certificate). I am running k3s on a couple of Raspberry Pis.

I was writing a new topic when this one was suggested, so I will put my original message here:

I provisioned a new k3s cluster using k3sup.
I tried both with the built-in Traefik setup and with disabling that and installing the chart manually using:

helm repo add traefik https://helm.traefik.io/traefik
helm repo update
helm install traefik traefik/traefik --values traefik/values.yaml --namespace kube-system

with values.yaml:

logs:
  general:
    level: INFO
  access:
    enabled: true
ingressClass:
  enabled: true
  isDefaultClass: true

I then installed cert-manager and set it up; it is working (it provisions certificates).

I then deployed a simple service that uses an Ingress (not an IngressRoute, since I wanted plain Ingress to work for various Helm charts) that looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: echo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: echo
    spec:
      containers:
        - name: echo
          image: ealen/echo-server
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 1
              memory: 512M
            limits:
              cpu: 2
              memory: 1024M
---
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  selector:
    app.kubernetes.io/name: echo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-https
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
  - host: echo-https.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo
            port:
              number: 80
  tls:
  - hosts:
      - echo-https.mydomain.com
    secretName: echo-tls-certificate
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-http
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: echo-http.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo
            port:
              number: 80

which gives me the following resources:

❯ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
echo-5b8c8cd84d-4ct5l   1/1     Running   0          59s

❯ kubectl get deployments
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
echo   1/1     1            1           8m22s

❯ kubectl get services
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
echo         ClusterIP   10.43.75.57   <none>        80/TCP    8m34s
kubernetes   ClusterIP   10.43.0.1     <none>        443/TCP   22h

❯ kubectl get ingress
NAME         CLASS    HOSTS                      ADDRESS   PORTS     AGE
echo-http    <none>   echo-http.mydomain.com               80        3m6s
echo-https   <none>   echo-https.mydomain.com              80, 443   3m6s

❯ kubectl get certificaterequest
NAME                         APPROVED   DENIED   READY   ISSUER             REQUESTOR                                        AGE
echo-tls-certificate-hhnm4   True                True    letsencrypt-prod   system:serviceaccount:kube-system:cert-manager   7m29s

❯ kubectl get secret                 
NAME                   TYPE                DATA   AGE
echo-tls-certificate   kubernetes.io/tls   2      2m42s

For the HTTP-only service, calling the endpoint over HTTP:

❯ curl http://echo-http.mydomain.com 
{"http":{"method":"GET","baseUrl":"","originalUrl":"/","protocol":"http"}, <truncated>}

For the HTTPS service over HTTP:

❯ curl http://echo-https.mydomain.com 
{"http":{"method":"GET","baseUrl":"","originalUrl":"/","protocol":"http"}, <truncated>}

For the HTTPS service over HTTPS:

❯ curl https://echo-https.mydomain.com
404 page not found

Looking at the dashboard, the service is healthy and is exposed on the metrics, web, and websecure entrypoints, but TLS is not checked for any of them, even though the TLS certificate being served is valid.

The access log shows the following for the requests:

10.42.0.0 - - [11/Aug/2022:17:28:53 +0000] "GET / HTTP/1.1" 200 1262 "-" "-" 15880 "echo-http-default-echo-http-mydomain@kubernetes" "http://10.42.2.11:80" 10ms
10.42.2.1 - - [11/Aug/2022:17:39:10 +0000] "GET / HTTP/1.1" 200 1265 "-" "-" 16012 "echo-https-default-echo-https-mydomain@kubernetes" "http://10.42.2.11:80" 9ms
10.42.2.1 - - [11/Aug/2022:17:39:56 +0000] "GET / HTTP/2.0" - - "-" "-" 16023 "-" "-" 0ms

Note that the third line (the HTTPS request) shows no router name and no backend URL, which suggests Traefik matched no router for it. Did you ever solve this? Does anyone have an idea what might be going wrong, or what I can do to investigate further?

Update: I found the issue.
Enabling TLS on the websecure entrypoint, even without specifying a certificate resolver, solves it; TLS is disabled by default in the chart.
Add the following to values.yaml (or whatever file you use for customization):

ports:
  websecure:
    tls:
      enabled: true

It works. Hope this helps someone at some point!
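For completeness, a combined values.yaml with the earlier settings plus this fix (a sketch, assuming the same release name and namespace as the install above):

```yaml
logs:
  general:
    level: INFO
  access:
    enabled: true
ingressClass:
  enabled: true
  isDefaultClass: true
ports:
  websecure:
    tls:
      # Enable TLS on the websecure entrypoint so HTTPS requests are
      # matched by routers even before a certificate resolver is set up.
      enabled: true
```

Re-applying it is then `helm upgrade traefik traefik/traefik --values traefik/values.yaml --namespace kube-system`.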