We use the Traefik ingress class + cert-manager to expose services that share the same domain but use different HTTP URL paths. I disabled reflector (secret syncing) to "reduce the noise", so a TLS certificate is created for every Ingress (we have one per namespace).
When I access my service, I noticed odd behaviour with respect to which TLS certificate is served. cert-manager generates the valid, expected certificate in the namespace that contains the Ingress resource, but that certificate is not the one being used. Instead I see a certificate that appears to come from a different namespace (where we also have an Ingress and a TLS certificate). Both use the same domain.
To be more precise: we configure a STAGING TLS certificate for the Portainer service, but when I access Portainer I see a PROD TLS certificate whose dates correspond to the certificate in the adminer namespace of the Adminer service (Adminer is exposed in exactly the same way, only under a different HTTP URL path). The two services live in their own namespaces. If I delete the adminer namespace, the proper STAGING certificate from the portainer namespace starts being served when I access the Portainer service.
Any clues what is going on here?
NOTE: both Ingresses use the same domain (the only difference is the HTTP path, hence /portainer and /adminer).
$ kubectl get cert -A | grep monitoring-tls
adminer monitoring-tls True monitoring-tls 18h # generated with PROD ACME Server
portainer monitoring-tls True monitoring-tls 84s # generated with STAGING ACME Server
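For a quick side-by-side comparison of the two secrets, something like this can be run (it just loops the same per-secret command used further down over both namespaces):
$ for ns in portainer adminer; do \
    echo "== $ns =="; \
    kubectl -n "$ns" get secret monitoring-tls -o jsonpath="{.data['tls\.crt']}" \
      | base64 -d | openssl x509 -noout -issuer -serial -dates; \
  done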
Portainer certificate: staging ACME server + dates:
$ kubectl -n portainer get secret monitoring-tls -o jsonpath="{.data['tls\.crt']}" | base64 -d | openssl x509 -text -noout | head
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
...
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = US, O = (STAGING) Let's Encrypt, CN = (STAGING) Tenuous Tomato R13
Validity
Not Before: Oct 2 06:52:14 2025 GMT
Not After : Dec 31 06:52:13 2025 GMT
...
Presumably the certificate being shown when accessing Portainer (based on the Not Before / Not After dates). This one is from the adminer namespace:
$ kubectl -n adminer get secret monitoring-tls -o jsonpath="{.data['tls\.crt']}" | base64 -d | openssl x509 -text -noout | head
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
...
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = US, O = Let's Encrypt, CN = R12
Validity
Not Before: Oct 1 13:21:52 2025 GMT
Not After : Dec 30 13:21:51 2025 GMT
...
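(For reference, a direct way to check which certificate is actually being served, instead of inferring it from the dates shown in the browser, would be something like the following; <node-ip> is a placeholder for one of our nodes and 32443 is the websecure NodePort from the Traefik values further down.)
$ openssl s_client -connect <node-ip>:32443 -servername "$K8S_MONITORING_FQDN" </dev/null 2>/dev/null \
    | openssl x509 -noout -issuer -dates   # SNI must match the Ingress host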
$ kubectl -n portainer get ingress -o yaml
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: cert-issuer
meta.helm.sh/release-name: portainer
meta.helm.sh/release-namespace: portainer
namespace: portainer
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-traefik-basic-auth@kubernetescrd,traefik-portainer-strip-prefix@kubernetescrd
creationTimestamp: "2025-10-02T07:41:12Z"
generation: 4
labels:
app.kubernetes.io/instance: portainer
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: portainer
app.kubernetes.io/version: ce-latest-ee-2.21.2
helm.sh/chart: portainer-1.0.54
name: portainer
namespace: portainer
resourceVersion: "76648564"
uid: 98e844fa-e311-...
spec:
ingressClassName: traefik
rules:
- host: ...
http:
paths:
- backend:
service:
name: portainer
port:
number: 9000
path: /portainer
pathType: Prefix
tls:
- hosts:
- ...
secretName: monitoring-tls
status:
loadBalancer: {}
kind: List
metadata:
resourceVersion: ""
$ kubectl -n adminer get ingress -o yaml
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: cert-issuer
meta.helm.sh/release-name: adminer
meta.helm.sh/release-namespace: adminer
namespace: adminer
traefik.ingress.kubernetes.io/router.entrypoints: websecure
creationTimestamp: "2025-10-01T14:20:20Z"
generation: 1
labels:
app.kubernetes.io/instance: adminer
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: adminer
app.kubernetes.io/version: 4.8.1
helm.sh/chart: adminer-0.0.1
name: adminer
namespace: adminer
resourceVersion: "76492834"
uid: 26166a8f-3efc-...
spec:
ingressClassName: traefik
rules:
- host: ...
http:
paths:
- backend:
service:
name: adminer
port:
number: 8080
path: /adminer/simcore
pathType: Exact
tls:
- hosts:
- ...
secretName: monitoring-tls
status:
loadBalancer: {}
kind: List
metadata:
resourceVersion: ""
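(For completeness, the cert-manager Certificate objects in both namespaces can be checked for the issuer they actually reference; both Ingresses carry the same cert-manager.io/cluster-issuer: cert-issuer annotation.)
$ kubectl -n portainer get certificate monitoring-tls -o jsonpath='{.spec.issuerRef}'; echo
$ kubectl -n adminer get certificate monitoring-tls -o jsonpath='{.spec.issuerRef}'; echo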
Ingress definition of Portainer (Helm values):
ingress:
enabled: true
className: "" # traefik ingress by default
annotations:
namespace: {{ .Release.Namespace }}
cert-manager.io/cluster-issuer: "cert-issuer"
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-traefik-basic-auth@kubernetescrd,traefik-portainer-strip-prefix@kubernetescrd # namespace + middleware name
tls:
- hosts:
- {{ requiredEnv "K8S_MONITORING_FQDN" }}
secretName: monitoring-tls
hosts:
- host: {{ requiredEnv "K8S_MONITORING_FQDN" }}
paths:
- path: /portainer
pathType: Prefix
backend:
service:
name: portainer
port:
number: *servicePort
Ingress definition of Adminer (Helm values):
ingress:
enabled: true
className: ""
annotations:
namespace: {{ .Release.Namespace }}
cert-manager.io/cluster-issuer: "cert-issuer"
traefik.ingress.kubernetes.io/router.entrypoints: websecure
tls:
- hosts:
- {{ requiredEnv "K8S_MONITORING_FQDN" }}
secretName: monitoring-tls
hosts:
- host: {{ requiredEnv "K8S_MONITORING_FQDN" }}
paths:
- path: /adminer/simcore
pathType: Exact
backend:
service:
name: adminer
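(These values are templated via helmfile/gotmpl, so to double-check what actually ends up in the rendered Ingress I would run something along these lines; the -l name=adminer selector assumes the release is named adminer in our helmfile.)
$ helmfile -l name=adminer template | grep -B2 -A6 'secretName: monitoring-tls'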
Traefik Helm values:
additionalArguments:
- "--api.insecure=true"
deployment:
kind: DaemonSet
ingressRoute:
dashboard:
enabled: false
logs:
general:
level: DEBUG
access:
enabled: true
service:
type: NodePort
ports:
web:
nodePort: 32080
websecure:
nodePort: 32443
nodeSelector:
node-role.kubernetes.io/control-plane: ""
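(Since --api.insecure=true is set, the routers Traefik builds from these Ingresses can also be inspected via its API; port 9000 assumes the chart's default internal "traefik" entrypoint, adjust if yours differs.)
$ kubectl -n traefik port-forward ds/traefik 9000:9000   # in one terminal
$ curl -s http://localhost:9000/api/http/routers | jq '.[] | {name, rule, tls}'   # in another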
Traefik Helm chart version:
releases:
- name: traefik
namespace: traefik
chart: traefik/traefik
createNamespace: true
version: 32.0.0
values:
- ./traefik/values.common.yaml.gotmpl
- ./traefik/values.secure.yaml.gotmpl
- ./traefik/values.webinternal.yaml.gotmpl
...
$ kubectl version
Client Version: v1.28.6
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.6
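cert-manager version, in case it matters (assuming the default namespace and deployment name):
$ kubectl -n cert-manager get deploy cert-manager -o jsonpath='{.spec.template.spec.containers[0].image}'; echo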