Deploying a Reflex app with Traefik on k3s

I have an app written with the Reflex Python web development framework. For reference, the Reflex framework serves HTTP traffic on container port 3000 and WebSocket traffic on port 8000 of the same container.

I am having trouble figuring out how to make both the HTTP traffic and the WebSocket traffic reachable from outside the cluster.

Steps to reproduce:

From an Ubuntu host:

Create a k3s server:


multipass launch 23.10 --name server-2
multipass stop server-2
multipass set local.server-2.memory=2G
multipass set local.server-2.cpus=2
multipass set local.server-2.disk=10G
multipass start server-2
multipass exec server-2 bash
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server --cluster-init --service-cidr 10.10.10.0/24
sudo chmod 644 /etc/rancher/k3s/k3s.yaml
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
echo "export KUBECONFIG=/etc/rancher/k3s/k3s.yaml" >> ~/.bashrc
echo "export KUBECONFIG=/etc/rancher/k3s/k3s.yaml" >> ~/.profile

# get server ip address

ip address

# Get server token

sudo cat /var/lib/rancher/k3s/server/node-token

# Install Longhorn and the Traefik service mesh
# The Traefik ingress controller is already installed by default with k3s

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/prerequisite/longhorn-iscsi-installation.yaml
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/prerequisite/longhorn-nfs-installation.yaml
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml

sudo snap install helm --classic
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik-mesh traefik/traefik-mesh

Create a k3s agent (node):

multipass launch 23.10 --name agent-2
multipass stop agent-2
multipass set local.agent-2.cpus=2
multipass set local.agent-2.memory=2G
multipass set local.agent-2.disk=10G
multipass start agent-2
multipass exec agent-2 bash
curl -sfL https://get.k3s.io | K3S_URL="https://[server vm IP]:6443" K3S_TOKEN="[token]" sh -

...

I use kubectl to create an image pull secret: [redacted]
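
For completeness, the pull secret is created roughly like this (the registry URL and credentials are placeholders), so that the my-pull-secret referenced in the deployment exists in the cool-app namespace:

kubectl create secret docker-registry my-pull-secret \
  --namespace cool-app \
  --docker-server=[registry] \
  --docker-username=[username] \
  --docker-password=[password]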

I have the following configuration:

apiVersion: v1
kind: Namespace
metadata:
  name: cool-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cool-ui
  namespace: cool-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cool-ui
  template:
    metadata:
      labels:
        app: cool-ui
    spec:
      containers:
      - name: cool-ui
        image: [redacted]/[redacted]
        ports:
        - containerPort: 3000
        - containerPort: 8000
        volumeMounts:
        - name: cool-dash-data-vol
          mountPath: /home/jovian
      volumes:
      - name: cool-dash-data-vol
        persistentVolumeClaim:
          claimName: data-pvc
      imagePullSecrets:
      - name: my-pull-secret
---
apiVersion: v1
kind: Service
metadata:
  name: cool-ui-service-cip
  namespace: cool-app
spec:
  selector:
    app: cool-ui
  ports:
    - name: http
      port: 80
      targetPort: 3000
    - name: websocket
      port: 8000
      targetPort: 8000
  type: ClusterIP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
  namespace: cool-app
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cool-ui-socet-ingress
  namespace: cool-app
  annotations:
    traefik.websocket.passthrough: "true"
    # I have tried with and without this ^
spec:
  rules:
    - host: [k3s server ip address].sslip.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cool-ui-service-cip
                port:
                  number: 80

I get the web content at http://[k3s server ip address].sslip.io:80 from the web browser on my local network, but the sockets on port 8000 are unreachable. I have tried numerous approaches to make them reachable from the web browser, and nothing has worked (a quick way to check reachability from outside the cluster is shown after the list). What I have tried:

  • Creating a LoadBalancer service (port 8000 -> container port 8000).
  • Creating a second instance of the route / mapped to port 8000 of the same service:
[same ingress] +
...
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cool-ui-service-cip
                port:
                  number: 8000
  • Creating two ingresses:
# First ingress the same
# second ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cool-ui-socet-ingress-b
  namespace: cool-app
  annotations:
spec:
  rules:
    - host: [k3s server ip address].sslip.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cool-ui-service-cip
                port:
                  number: 8000

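A quick way to check whether port 8000 is reachable at all from outside the cluster, independent of the browser, is a manual WebSocket upgrade handshake with curl (the Sec-WebSocket-Key is just the sample value from RFC 6455):

curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  http://[k3s server ip address].sslip.io:8000/

Any HTTP response (101 or otherwise) means something is routing the port; a connection refused or timeout means nothing is exposing it externally.
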
Anyone have any insight on what I could do to make this work?

Just some food for thought:

When bringing services to production, it's usually best practice to have everything go through a single port, 80/443.

Corporate firewalls might block your "special" port 8000, and then your app won't work for some customers.

WebSocket connections are often placed on a path (/ws), or you could use a sub-domain; but for that you need to be able to set a "WebSocket base path" in your application so it's correctly communicated to the client.

I would think about that first, before trying to open port 8000 externally.
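
For illustration, the sub-domain variant could look roughly like this with a plain Kubernetes Ingress, reusing your existing ClusterIP service; the ws. host name is just an example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cool-ui-websocket-ingress-example
  namespace: cool-app
spec:
  rules:
    - host: ws.[k3s server ip address].sslip.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cool-ui-service-cip
                port:
                  number: 8000

Your app would then still have to be told to open its WebSocket connection against that host on port 80/443 instead of port 8000.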

@bluepuma77

I really appreciate the feedback. I tried it this way:

apiVersion: v1
kind: Service
metadata:
  name: cool-ui-service-cip
  namespace: cool-app
spec:
  selector:
    app: cool-ui
  ports:
    - name: http
      port: 80
      targetPort: 3000
    - name: websocket
      port: 80
      targetPort: 8000
  type: ClusterIP

... with the same ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cool-ui-socet-ingress
  namespace: cool-app
  annotations:
    traefik.websocket.passthrough: "true"

spec:
  rules:
    - host: [k3s server IP].sslip.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cool-ui-service-cip
                port:
                  number: 80

... and when I visit the page: http://[k3s server ip].sslip.io, it throws an error on the screen:

Connection Error

Cannot connect to server: timeout. Check if server is reachable at ws://[k3s server ip].sslip.io:8000/_event
  • Should I install a reverse proxy within the pod to merge the streams on ports 3000 and 8000?
  • Would a second ClusterIP service mapped to port 80 -> container port 8000, plus a second ingress to port 80 on that service, work? I definitely agree that all the traffic should go through the same port, but I'm stuck with what the framework does (a rough sketch of the single-ingress variant I'm considering is below this list).
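
One variant I'm now considering, going back to the original two-port service (80 -> 3000 and 8000 -> 8000), is a single ingress that sends only the websocket path to port 8000 and everything else to port 80. The /_event path is taken from the error message above; I haven't confirmed it's the only path the websocket side needs, so treat this as a sketch:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cool-ui-combined-ingress
  namespace: cool-app
spec:
  rules:
    - host: [k3s server ip address].sslip.io
      http:
        paths:
          - path: /_event
            pathType: Prefix
            backend:
              service:
                name: cool-ui-service-cip
                port:
                  number: 8000
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cool-ui-service-cip
                port:
                  number: 80

For this to help, the Reflex frontend would also have to be built so the client dials ws://[k3s server ip address].sslip.io/_event instead of port 8000 (I believe that's the api_url setting in rxconfig.py, but I still need to verify that).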

I know that in the Docker example they publish at Self Hosting · Reflex Hosting Docs, they expose the two ports separately. I will mirror this issue on the Reflex side as well, but I do hope I can find a solution here. Reflex really accelerates front-end development and is great for AI services like ours, where everything on the back end is in Python or Python/C++ bindings. Why the traffic is split across two ports is, network-engineering-wise, a bit over my head.

For reference: discussion of the same issue on Reflex's Discord.