NetworkPolicy adjustment for k3s and Traefik

We have been running a Helm app on Kubernetes through Rancher for a long time and have always used the following NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nwpolicy
  namespace: {{ .Release.Namespace }}
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: {{ .Release.Namespace }}
    - ipBlock:
        cidr: 10.42.2.0/32
    - ipBlock:
        cidr: 10.42.1.0/32
  policyTypes:
    - Ingress

This allows traffic from certain nodes (the ipBlock entries) and from objects in the app's own namespace (the namespaceSelector entry). The purpose of this NetworkPolicy is that only what belongs to this namespace should be able to talk to other objects in this namespace; objects from other namespaces should never be allowed to communicate with objects in this specific namespace.
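
For context, the namespaceSelector above only matches namespaces that actually carry a name label equal to the release namespace. As an illustration (placeholder name, not a real manifest from our cluster):

apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace        # placeholder name
  labels:
    name: example-namespace      # the label the NetworkPolicy selects on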

Usually, nginx is the default ingress controller used within Rancher; the NetworkPolicy has always worked with it and still works right now. K3s, on the other hand, seems to use Traefik by default, and with Traefik the previously allowed traffic no longer arrives at the destination pods in the namespace referenced above. It is unclear what exactly causes the difference in behaviour.
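
My current guess is that the difference comes from where the ingress traffic originates: k3s deploys Traefik into the kube-system namespace, so its traffic is matched neither by the namespaceSelector nor by the ipBlock entries above. If that is indeed the cause, an additional from entry along the following lines might be needed. This is only an untested sketch; it assumes a Kubernetes version that automatically labels namespaces with kubernetes.io/metadata.name (otherwise kube-system would need a matching label added manually):

  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: {{ .Release.Namespace }}
    # Sketch: additionally allow traffic from the namespace that k3s
    # deploys Traefik into (kube-system by default).
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    - ipBlock:
        cidr: 10.42.2.0/32
    - ipBlock:
        cidr: 10.42.1.0/32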

How does the NetworkPolicy need to be adjusted to make this work?

P.S.: Is this related?

Hi Akito,

Could you please share some examples and configuration required to replicate the behavior you're seeing? Also, debug logs from Traefik may help the community pinpoint the issue you're having.

Warm regards,
_Kc

Greetings @notsureifkevin,

Thanks for the response. Could you tell me where exactly I would need to look for that? The thing is, I did not choose to use Traefik; k3s set it up for me, entirely on its own. So whatever k3s configures as the default is probably the configuration I have, as I did not customize a single thing about Traefik. Nothing.

Are the logs of the Traefik deployment(s) the debug logs, or can I find them somewhere else?

By default it's likely not deployed with debug logging enabled. I'm not familiar enough with K3s at the moment to provide insight into how its Traefik deployment is configured, so I recommend checking the K3s documentation to see whether it allows manipulating Traefik's configuration. If it does, adding --log.level=DEBUG to the command-line parameters will enable debug logging, and then you can create a log dump using kubectl logs.
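
For example, if K3s lets you override the values of its bundled Traefik chart (I believe it does this through a HelmChartConfig resource, but please verify against the K3s documentation), something along these lines might enable debug logging. Treat it as an untested sketch; it assumes the Traefik 2.x chart that recent K3s releases ship, and the resource names may differ on your cluster:

# Untested sketch: override values of the Traefik chart bundled with K3s.
# Drop this file into /var/lib/rancher/k3s/server/manifests/ on a server
# node, or apply it with kubectl.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--log.level=DEBUG"

Once Traefik restarts with that flag, a command such as kubectl -n kube-system logs deploy/traefik should give you debug output to share (the deployment name may differ on your setup).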