Traefik 38.0.1 - Argo CD Helm dropdown issue

Hello, I'm trying to migrate from ingress-nginx to Traefik and started with some infra repos.
Everything was working fine until the latest Traefik Helm chart upgrade: with chart 38.0.1, the Helm versions dropdown in Argo CD is missing. I have another cluster with ingress-nginx instead of Traefik, and everything works correctly there.
After downgrading to 37.4.0 it works again. The screenshot below shows what I mean.

[screenshot]

I saw that 38.0.1 contains some breaking changes; can you suggest a workaround?
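
For now the only thing that helps is pinning the chart back in the Application source, i.e. the only change against the manifest below is the targetRevision:

spec:
  source:
    chart: traefik
    repoURL: https://traefik.github.io/charts
    targetRevision: 37.4.0 # pinned back from 38.0.1 until this is resolved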

Values

Traefik Argo CD Application:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: traefik-internal
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: infrastructure
  source:
    chart: traefik
    repoURL: https://traefik.github.io/charts
    targetRevision: 38.0.1
    helm:
      releaseName: traefik-internal
      values: |
        providers:
          kubernetesIngress:
            enabled: true
            allowExternalNameServices: true
            publishedService:
              enabled: true
          kubernetesCRD:
            enabled: true
            allowExternalNameServices: true
            allowCrossNamespace: true

        ingressClass:
          enabled: true
          isDefaultClass: false
          name: traefik-internal

        ingressRoute:
          dashboard:
            enabled: false

        globalArguments:
          - "--global.checknewversion=false"
          - "--global.sendanonymoususage=false"

        additionalArguments:
          - "--log.level=INFO"
          - "--accesslog=true"
          - "--accesslog.format=json"
          - "--accesslog.filters.statuscodes=400-599"
          - "--metrics.prometheus=true"
          - "--metrics.prometheus.entrypoint=metrics"
          - "--ping"
          - "--ping.entryPoint=web"

        ports:
          web:
            port: 8000
            exposedPort: 80
            protocol: TCP
            redirections:
              entryPoint:
                to: websecure
                scheme: https
                permanent: true
          websecure:
            port: 8443
            exposedPort: 443
            protocol: TCP
            tls:
              enabled: false
          metrics:
            port: 9100
            protocol: TCP

        deployment:
          enabled: true
          kind: Deployment
          replicas: 1
          podAnnotations:
            prometheus.io/scrape: "true"
            prometheus.io/port: "9100"
            prometheus.io/path: "/metrics"
          healthchecksPort: 8000
          healthchecksScheme: HTTP

        logs:
          general:
            level: INFO
          access:
            enabled: true
            format: json

        service:
          enabled: true
          type: LoadBalancer
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-name: "prod-traefik-internal-services"
            service.beta.kubernetes.io/aws-load-balancer-type: "external"
            service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
            service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
            service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:eu-central-1:ACCOUNT_ID:certificate/CERT"
            service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
            service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
            service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "http"
            service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: "/ping"
            service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "8000"
            service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
            service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5"
            service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
            service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
            service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          loadBalancerSourceRanges:
            - "172.16.0.0/21"
            - "172.16.8.0/21"
          spec:
            externalTrafficPolicy: Local

        nodeSelector:
          ingress: "ingress-nginx"

        tolerations:
          - key: "ingress-nginx"
            operator: "Exists"
            effect: "NoSchedule"

        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  labelSelector:
                    matchExpressions:
                      - key: app.kubernetes.io/name
                        operator: In
                        values:
                          - traefik
                  topologyKey: "kubernetes.io/hostname"

        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"

        rbac:
          enabled: true

        podDisruptionBudget:
          enabled: true
          minAvailable: 1

  destination:
    server: https://kubernetes.default.svc
    namespace: traefik

  syncPolicy:
    automated:
      prune: false
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

My Argo CD ingress section:

          ingress:
            enabled: true
            ingressClassName: traefik-internal
            annotations:
              traefik.ingress.kubernetes.io/router.entrypoints: websecure
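
For reference, this fragment is the ingress block from the argo-cd chart values; with its parent key it looks roughly like this (assuming the standard argo-cd chart layout, host settings omitted):

server:
  ingress:
    enabled: true
    ingressClassName: traefik-internal
    annotations:
      traefik.ingress.kubernetes.io/router.entrypoints: websecure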

I also tried adding the following annotations to the Argo CD ingress:

              traefik.ingress.kubernetes.io/service.serversscheme: h2c
              traefik.ingress.kubernetes.io/proxy.buffering: "off"
              traefik.ingress.kubernetes.io/proxy.request.buffering: "off"
              traefik.ingress.kubernetes.io/proxy.readtimeout: "3600"
              traefik.ingress.kubernetes.io/proxy.sendtimeout: "3600"

But it didn't help.