Unexpected ports allowed with egress rules using ipBlock and except #180

Open
mw-tlhakhan opened this issue Jan 5, 2024 · 12 comments
Labels
help wanted Extra attention is needed

Comments

@mw-tlhakhan

What happened:
I have a network policy with three egress rules. The Kubernetes service CIDR is 10.254.0.0/16.

  1. It allows access to CoreDNS.
  2. It allows access to port 8080 with a destination of 10.254.0.0/16.
  3. It allows access to any address (0.0.0.0/0), except 10.254.0.0/16.

When I apply the network policy below, I am able to access any port with a destination in 10.254.0.0/16, when it should only be port 8080.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-internet-and-svc-8080
spec:
  egress:
  - ports:
    - port: 53
      protocol: UDP
    to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns

  - ports:
    - port: 8080
      protocol: TCP
    to:
    - ipBlock:
        cidr: 10.254.0.0/16

  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.254.0.0/16

  podSelector:
    matchLabels:
      app: client-one
  policyTypes:
  - Egress

Attach logs

evel":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:701","msg":"Total L4 entry count for catch all entry: ","count: ":0}
{"level":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:701","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:673","msg":"Updating Map with ","IP Key:":"10.0.68.60/32"}
{"level":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:701","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:701","msg":"L4 values: ","protocol: ":254,"startPort: ":53,"endPort: ":0}
{"level":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:673","msg":"Updating Map with ","IP Key:":"10.0.68.240/32"}
{"level":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:701","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:701","msg":"L4 values: ","protocol: ":254,"startPort: ":53,"endPort: ":0}
{"level":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:673","msg":"Updating Map with ","IP Key:":"10.254.0.10/32"}
{"level":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:701","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:701","msg":"L4 values: ","protocol: ":254,"startPort: ":53,"endPort: ":0}
{"level":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:673","msg":"Parsed Except CIDR","IP Key: ":"10.254.0.0/16"}
{"level":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:701","msg":"L4 values: ","protocol: ":255,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:673","msg":"Updating Map with ","IP Key:":"10.254.0.0/16"}
{"level":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:701","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-01-05T15:29:18.011Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:701","msg":"L4 values: ","protocol: ":254,"startPort: ":8080,"endPort: ":0}

From the logs (see the relevant snippet below), I believe the policy agent is doing some kind of port merge and doesn't take into account that the earlier entry for this IP block came from an except clause.

"msg":"Parsed Except CIDR","IP Key: ":"10.254.0.0/16"}
"msg":"L4 values: ","protocol: ":255,"startPort: ":0,"endPort: ":0}

"msg":"Updating Map with ","IP Key:":"10.254.0.0/16"}
"msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0} <== NOT EXPECTED, all ports??
"msg":"L4 values: ","protocol: ":254,"startPort: ":8080,"endPort: ":0}

What you expected to happen:
I expect the client-one pods to be able to access port 8080 for destinations in 10.254.0.0/16, but not any other ports, such as 9090.

How to reproduce it (as minimally and precisely as possible):

Here is a demo-app service/deployment used to reproduce the issue.

---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  type: ClusterIP
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: demo-app
...
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-app-index
data:
  index.html: |
    <!DOCTYPE html>
    <html>
      <head>
        <title>Welcome to Amazon EKS!</title>
        <style>
            html {color-scheme: light dark;}
            body {width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif;}
        </style>
      </head>
      <body>
        <h1>Welcome to Amazon EKS!</h1>
        <p>If you see this page, you are able to successfully access the web application as the network policy allows.</p>
        <p>For online documentation and installation instructions please refer to
          <a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-networking.html">Amazon EKS Networking</a>.<br/><br/>
          The migration guides are available at
          <a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-networking.html">Amazon EKS Network Policy Migration</a>.
        </p>
        <p><em>Thank you for using Amazon EKS.</em></p>
    </body>
    </html>
...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  selector:
    matchLabels:
      app: demo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo
        image: public.ecr.aws/docker/library/nginx:stable
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh", "-c"]
        args: ["sed -i 's/listen[[:space:]]*80;/listen 8080;/g' /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"]
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-index-volume

      volumes:
      - name: nginx-index-volume
        configMap:
          name: demo-app-index
...

Here is the client-one deployment used to test the network policy. Please note the memory resource request and adjust it as needed. I bumped it up so that the clients are spread across more nodes and I'm not doing single-node testing, which might not be representative of node-to-node networking nuances.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-one
  labels:
    app: client-one
spec:
  replicas: 10
  selector:
    matchLabels:
      app: client-one
  template:
    metadata:
      labels:
        app: client-one
    spec:
      terminationGracePeriodSeconds: 2
      containers:
      - name: client-one
        image: curlimages/curl:latest
        command: ["/bin/sh", "-c"]
        args:
          - |
            sleep infinity
        resources:
          requests:
            memory: "1200Mi"
...

Here is my initial state: I've deployed the clients, the demo-app, and my network policy. I list my in-cluster services: demo-app running in the default namespace and prometheus running in the monitoring namespace.

# k get pods
NAME                          READY   STATUS    RESTARTS   AGE
client-one-655ffdf468-57ncr   1/1     Running   0          41m
client-one-655ffdf468-5x6df   1/1     Running   0          41m
client-one-655ffdf468-7tv2j   1/1     Running   0          41m
client-one-655ffdf468-8cf65   1/1     Running   0          41m
client-one-655ffdf468-9ckx4   1/1     Running   0          41m
client-one-655ffdf468-cz8rw   1/1     Running   0          41m
client-one-655ffdf468-fq9rx   1/1     Running   0          41m
client-one-655ffdf468-gzt9t   1/1     Running   0          41m
client-one-655ffdf468-jtpt7   1/1     Running   0          41m
client-one-655ffdf468-n24mq   1/1     Running   0          41m
demo-app-649c8db494-zrpp5     1/1     Running   0          42m

# k get netpol
NAME                          POD-SELECTOR     AGE
allow-internet-and-svc-8080   app=client-one   15m

# k get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
demo-app     ClusterIP   10.254.211.72   <none>        8080/TCP   43m
kubernetes   ClusterIP   10.254.0.1      <none>        443/TCP    18h

# k get svc -n monitoring prometheus-kube-prometheus-prometheus
NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
prometheus-kube-prometheus-prometheus   ClusterIP   10.254.163.46   <none>        9090/TCP   18h

Here are the results of my network policy testing. I list all the client-one pods, exec into each one, and run a simple curl command that queries the target service and prints the HTTP status code.

I am able to access the demo-app.default service:

# k get --no-headers pods | awk '{print $1}' | grep ^client-one |xargs -I{} bash -c 'kubectl exec {} -- curl -o /dev/null -s --max-time 3 -w "%{url} %{http_code}\n" -L demo-app.default:8080'
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200

I am also able to access my prometheus UI service:

# k get --no-headers pods | awk '{print $1}' | grep ^client-one |xargs -I{} bash -c 'kubectl exec {} -- curl -o /dev/null -s --max-time 3 -w "%{url} %{http_code}\n" -L prometheus-kube-prometheus-prometheus.monitoring:9090'
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200

Anything else we need to know?:
Is the network policy even working? Yes, it is. I can show the before and after of an addition to the current network policy: I will add 169.254.0.0/16 to the except list so that the pods won't be able to access the EC2 metadata endpoint.

Current egress rule:

   - to:
     - ipBlock:
         cidr: 0.0.0.0/0
         except:
         - 10.254.0.0/16

Result:

# k get --no-headers pods | awk '{print $1}' | grep ^client-one |xargs -I{} bash -c 'kubectl exec {} -- curl -o /dev/null -s --max-time 3 -w "%{url} %{http_code}\n" -L http://169.254.169.254/latest/meta-data/'
http://169.254.169.254/latest/meta-data/ 200
http://169.254.169.254/latest/meta-data/ 200
http://169.254.169.254/latest/meta-data/ 200
http://169.254.169.254/latest/meta-data/ 200
http://169.254.169.254/latest/meta-data/ 200
http://169.254.169.254/latest/meta-data/ 200
http://169.254.169.254/latest/meta-data/ 200
http://169.254.169.254/latest/meta-data/ 200
http://169.254.169.254/latest/meta-data/ 200
http://169.254.169.254/latest/meta-data/ 200

Egress rule after adding the 169.254.0.0/16 block:

  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.254.0.0/16
        - 169.254.0.0/16

Result:

# k get --no-headers pods | awk '{print $1}' | grep ^client-one |xargs -I{} bash -c 'kubectl exec {} -- curl -o /dev/null -s --max-time 3 -w "%{url} %{http_code}\n" -L http://169.254.169.254/latest/meta-data/'
http://169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
http://169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
http://169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
http://169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
http://169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
http://169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
http://169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
http://169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
http://169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
http://169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28

Summary:
Yes, the network policy agent is enforcing the rules, but there appears to be a port-merge issue between two rules when one of them uses an except field.

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: v1.25.15-eks-e71965b
    Kustomize Version: v4.5.7
    Server Version: v1.25.16-eks-8cb36c9

  • CNI Version: amazon-k8s-cni:v1.15.3-eksbuild.1

  • Network Policy Agent Version: aws-network-policy-agent:v1.0.5-eksbuild.1

  • OS (e.g: cat /etc/os-release): Ubuntu 20.04.6 LTS

  • Kernel (e.g. uname -a): 5.15.0-1051-aws

@mw-tlhakhan mw-tlhakhan added the bug Something isn't working label Jan 5, 2024
@jdn5126
Contributor

jdn5126 commented Jan 5, 2024

@mw-tlhakhan I have not gotten a chance to dig deeper, but since you can reproduce this, have you tried using Network Policy agent image v1.0.7? I am wondering if this was fixed in the recent batch of bug fixes.

@mw-tlhakhan
Author

Hi @jdn5126, I updated the EKS add-on from v1.15.3-eksbuild.1 to v1.15.5-eksbuild.1. The latest v1.15.5 add-on bundles v1.0.7 of the network policy agent.

Short answer: no change in behavior was observed.

Verification output below

  • aws-node pods are running the updated images, with v1.0.7 of the aws-network-policy-agent image.
# k describe po -n kube-system -l app.kubernetes.io/instance=aws-vpc-cni | grep -A2 "aws-node:" | grep -e aws-node -e Image
  aws-node:
    Image:          602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon-k8s-cni:v1.15.5-eksbuild.1
  aws-node:
    Image:          602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon-k8s-cni:v1.15.5-eksbuild.1
  aws-node:
    Image:          602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon-k8s-cni:v1.15.5-eksbuild.1

# k describe po -n kube-system -l app.kubernetes.io/instance=aws-vpc-cni | grep -A10 'aws-eks-nodeagent:' |grep -e aws-eks-nodeagent -e "Image:" -e enable-network-policy -e Args
  aws-eks-nodeagent:
    Image:         602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon/aws-network-policy-agent:v1.0.7-eksbuild.1
    Args:
      --enable-network-policy=true
  aws-eks-nodeagent:
    Image:         602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon/aws-network-policy-agent:v1.0.7-eksbuild.1
    Args:
      --enable-network-policy=true
  aws-eks-nodeagent:
    Image:         602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon/aws-network-policy-agent:v1.0.7-eksbuild.1
    Args:
      --enable-network-policy=true

Log outputs

  • I deleted and re-created the network policy, and captured one of the node's policy agent logs.
# k get netpol
NAME                          POD-SELECTOR     AGE
allow-internet-and-svc-8080   app=client-one   4s

# k describe netpol
Name:         allow-internet-and-svc-8080
Namespace:    default
Created on:   2024-01-05 22:08:26 -0500 EST
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     app=client-one
  Not affecting ingress traffic
  Allowing egress traffic:
    To Port: 53/UDP
    To:
      NamespaceSelector: kubernetes.io/metadata.name=kube-system
      PodSelector: k8s-app=kube-dns
    ----------
    To Port: 8080/TCP
    To:
      IPBlock:
        CIDR: 10.254.0.0/16
        Except:
    ----------
    To Port: <any> (traffic allowed to all ports)
    To:
      IPBlock:
        CIDR: 0.0.0.0/0
        Except: 10.254.0.0/16, 169.254.0.0/16
  Policy Types: Egress

# cat network-policy-agent.log
"level":"info","ts":"2024-01-06T03:08:26.615Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"Current L4 entry count for catch all entry: ","count: ":0}
{"level":"info","ts":"2024-01-06T03:08:26.615Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"Current L4 entry count for catch all entry: ","count: ":0}
{"level":"info","ts":"2024-01-06T03:08:26.615Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"Total L4 entry count for catch all entry: ","count: ":0}
{"level":"info","ts":"2024-01-06T03:08:26.615Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-01-06T03:08:26.615Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:672","msg":"Updating Map with ","IP Key:":"10.254.0.10/32"}
{"level":"info","ts":"2024-01-06T03:08:26.615Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-01-06T03:08:26.616Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"L4 values: ","protocol: ":254,"startPort: ":53,"endPort: ":0}
{"level":"info","ts":"2024-01-06T03:08:26.616Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:672","msg":"Parsed Except CIDR","IP Key: ":"10.254.0.0/16"}
{"level":"info","ts":"2024-01-06T03:08:26.616Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"L4 values: ","protocol: ":255,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-01-06T03:08:26.616Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:672","msg":"Parsed Except CIDR","IP Key: ":"169.254.0.0/16"}
{"level":"info","ts":"2024-01-06T03:08:26.616Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"L4 values: ","protocol: ":255,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-01-06T03:08:26.616Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:672","msg":"Updating Map with ","IP Key:":"10.0.68.60/32"}
{"level":"info","ts":"2024-01-06T03:08:26.616Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-01-06T03:08:26.616Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"L4 values: ","protocol: ":254,"startPort: ":53,"endPort: ":0}
{"level":"info","ts":"2024-01-06T03:08:26.616Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:672","msg":"Updating Map with ","IP Key:":"10.0.68.240/32"}
{"level":"info","ts":"2024-01-06T03:08:26.616Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-01-06T03:08:26.616Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"L4 values: ","protocol: ":254,"startPort: ":53,"endPort: ":0}
{"level":"info","ts":"2024-01-06T03:08:26.616Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:672","msg":"Updating Map with ","IP Key:":"10.254.0.0/16"}
{"level":"info","ts":"2024-01-06T03:08:26.616Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-01-06T03:08:26.616Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:700","msg":"L4 values: ","protocol: ":254,"startPort: ":8080,"endPort: ":0}
{"level":"info","ts":"2024-01-06T03:08:26.616Z","logger":"ebpf-client","caller":"controllers/policyendpoints_controller.go:421","msg":"ID of map to update: ","ID: ":14}

Test result

  • I ran the curl commands to test the network policy.
  • The 200 OK to demo-app service on port 8080 is expected.
  • The 200 OK to prometheus service on port 9090 is not expected.
  • The timeout to the EC2 metadata endpoint is expected. I left the 169.254.0.0/16 except entry in place to confirm that a deny is still being enforced.
# k get --no-headers pods | awk '{print $1}' | grep ^client-one |xargs -I{} bash -c 'kubectl exec {} -- curl -o /dev/null -s --max-time 3 -w "%{url} %{http_code}\n" -L demo-app.default:8080'
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200

# k get --no-headers pods | awk '{print $1}' | grep ^client-one |xargs -I{} bash -c 'kubectl exec {} -- curl -o /dev/null -s --max-time 3 -w "%{url} %{http_code}\n" -L prometheus-kube-prometheus-prometheus.monitoring:9090'
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200
prometheus-kube-prometheus-prometheus.monitoring:9090 200

# k get --no-headers pods | awk '{print $1}' | grep ^client-one |xargs -I{} bash -c 'kubectl exec {} -- curl -o /dev/null -s --max-time 3 -w "%{url} %{http_code}\n" -L 169.254.169.254/latest/meta-data/'
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28

@jayanthvn
Contributor

@mw-tlhakhan - Thanks for the detailed explanation. There is no change in this part of the code in v1.0.7 or main, so this result is expected. I will look into this and get back to you.

@jayanthvn
Contributor

I did repro it -

{"msg":"Parsed Except CIDR","IP Key: ":"10.100.0.0/16"}
{"msg":"L4 values: ","protocol: ":255,"startPort: ":0,"endPort: ":0} -> reserved value (255) was derived correctly

{"msg":"Updating Map with ","IP Key:":"10.100.0.0/16"}
{"msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0} -> this should have been 255

{"msg":"L4 values: ","protocol: ":254,"startPort: ":8080,"endPort: ":0}

@jayanthvn
Contributor

We are evaluating whether this is a bug. There are conflicting rules for the same CIDR. One rule is blocking it:

- to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.254.0.0/16

while another rule is allowing it for a particular port:

- ports:
    - port: 8080
      protocol: TCP
    to:
    - ipBlock:
        cidr: 10.254.0.0/16

But since there is an allow-all entry covering every port and protocol, we go ahead and inherit it, in this case from 0.0.0.0/0. except is meant to selectively block certain CIDRs while allowing the rest.
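
For reference, here is how the policy evaluates if each egress rule is treated as an independent allow and except only narrows what that one rule matches, rather than acting as a global deny (a minimal Go sketch of the upstream NetworkPolicy semantics as I understand them, not the agent's code; the selector-based DNS rule is omitted):

package main

import (
	"fmt"
	"net/netip"
)

// One egress rule: an allowed CIDR, optional except list, optional port list.
type rule struct {
	cidr   string
	except []string
	ports  []int // empty means "all ports"
}

// allowed returns true if any rule matches the destination, treating each
// rule as an independent allow; "except" only removes destinations from
// that one rule's match, it does not deny them globally.
func allowed(dst netip.Addr, port int, rules []rule) bool {
	for _, r := range rules {
		if !netip.MustParsePrefix(r.cidr).Contains(dst) {
			continue
		}
		excluded := false
		for _, e := range r.except {
			if netip.MustParsePrefix(e).Contains(dst) {
				excluded = true
				break
			}
		}
		if excluded {
			continue
		}
		if len(r.ports) == 0 {
			return true
		}
		for _, p := range r.ports {
			if p == port {
				return true
			}
		}
	}
	return false
}

func main() {
	rules := []rule{
		{cidr: "10.254.0.0/16", ports: []int{8080}},
		{cidr: "0.0.0.0/0", except: []string{"10.254.0.0/16"}},
	}
	svc := netip.MustParseAddr("10.254.163.46") // prometheus ClusterIP from above
	fmt.Println(allowed(svc, 8080, rules))      // true  (matched by the 8080 rule)
	fmt.Println(allowed(svc, 9090, rules))      // false (no rule matches)
}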

@mw-tlhakhan
Author

The network policy use case I have is as follows:

  • I want to allow pods to access anything on the Internet (0.0.0.0/0), but I want to limit access to internal networks and services inside the VPC.

Is there a concise way to declare this as a network policy?

The long form below doesn't have conflicting rules and does work, but it is quite verbose. (A script to generate the /8 list follows the policy.)

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-internet-and-svc-8080
spec:
  egress:
  - ports:
    - port: 53
      protocol: UDP
    to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns

  - ports:
    - port: 8080
      protocol: TCP
    to:
    - ipBlock:
        cidr: 10.0.0.0/8

  # exclude 10.0.0.0/8 and 169.0.0.0/8
  - to:
    - ipBlock:
        cidr: 1.0.0.0/8
    - ipBlock:
        cidr: 2.0.0.0/8
    - ipBlock:
        cidr: 3.0.0.0/8
    - ipBlock:
        cidr: 4.0.0.0/8
    - ipBlock:
        cidr: 5.0.0.0/8
    - ipBlock:
        cidr: 6.0.0.0/8
    - ipBlock:
        cidr: 7.0.0.0/8
    - ipBlock:
        cidr: 8.0.0.0/8
    - ipBlock:
        cidr: 9.0.0.0/8
    - ipBlock:
        cidr: 11.0.0.0/8
    - ipBlock:
        cidr: 12.0.0.0/8
    - ipBlock:
        cidr: 13.0.0.0/8
    - ipBlock:
        cidr: 14.0.0.0/8
    - ipBlock:
        cidr: 15.0.0.0/8
    - ipBlock:
        cidr: 16.0.0.0/8
    - ipBlock:
        cidr: 17.0.0.0/8
    - ipBlock:
        cidr: 18.0.0.0/8
    - ipBlock:
        cidr: 19.0.0.0/8
    - ipBlock:
        cidr: 20.0.0.0/8
    - ipBlock:
        cidr: 21.0.0.0/8
    - ipBlock:
        cidr: 22.0.0.0/8
    - ipBlock:
        cidr: 23.0.0.0/8
    - ipBlock:
        cidr: 24.0.0.0/8
    - ipBlock:
        cidr: 25.0.0.0/8
    - ipBlock:
        cidr: 26.0.0.0/8
    - ipBlock:
        cidr: 27.0.0.0/8
    - ipBlock:
        cidr: 28.0.0.0/8
    - ipBlock:
        cidr: 29.0.0.0/8
    - ipBlock:
        cidr: 30.0.0.0/8
    - ipBlock:
        cidr: 31.0.0.0/8
    - ipBlock:
        cidr: 32.0.0.0/8
    - ipBlock:
        cidr: 33.0.0.0/8
    - ipBlock:
        cidr: 34.0.0.0/8
    - ipBlock:
        cidr: 35.0.0.0/8
    - ipBlock:
        cidr: 36.0.0.0/8
    - ipBlock:
        cidr: 37.0.0.0/8
    - ipBlock:
        cidr: 38.0.0.0/8
    - ipBlock:
        cidr: 39.0.0.0/8
    - ipBlock:
        cidr: 40.0.0.0/8
    - ipBlock:
        cidr: 41.0.0.0/8
    - ipBlock:
        cidr: 42.0.0.0/8
    - ipBlock:
        cidr: 43.0.0.0/8
    - ipBlock:
        cidr: 44.0.0.0/8
    - ipBlock:
        cidr: 45.0.0.0/8
    - ipBlock:
        cidr: 46.0.0.0/8
    - ipBlock:
        cidr: 47.0.0.0/8
    - ipBlock:
        cidr: 48.0.0.0/8
    - ipBlock:
        cidr: 49.0.0.0/8
    - ipBlock:
        cidr: 50.0.0.0/8
    - ipBlock:
        cidr: 51.0.0.0/8
    - ipBlock:
        cidr: 52.0.0.0/8
    - ipBlock:
        cidr: 53.0.0.0/8
    - ipBlock:
        cidr: 54.0.0.0/8
    - ipBlock:
        cidr: 55.0.0.0/8
    - ipBlock:
        cidr: 56.0.0.0/8
    - ipBlock:
        cidr: 57.0.0.0/8
    - ipBlock:
        cidr: 58.0.0.0/8
    - ipBlock:
        cidr: 59.0.0.0/8
    - ipBlock:
        cidr: 60.0.0.0/8
    - ipBlock:
        cidr: 61.0.0.0/8
    - ipBlock:
        cidr: 62.0.0.0/8
    - ipBlock:
        cidr: 63.0.0.0/8
    - ipBlock:
        cidr: 64.0.0.0/8
    - ipBlock:
        cidr: 65.0.0.0/8
    - ipBlock:
        cidr: 66.0.0.0/8
    - ipBlock:
        cidr: 67.0.0.0/8
    - ipBlock:
        cidr: 68.0.0.0/8
    - ipBlock:
        cidr: 69.0.0.0/8
    - ipBlock:
        cidr: 70.0.0.0/8
    - ipBlock:
        cidr: 71.0.0.0/8
    - ipBlock:
        cidr: 72.0.0.0/8
    - ipBlock:
        cidr: 73.0.0.0/8
    - ipBlock:
        cidr: 74.0.0.0/8
    - ipBlock:
        cidr: 75.0.0.0/8
    - ipBlock:
        cidr: 76.0.0.0/8
    - ipBlock:
        cidr: 77.0.0.0/8
    - ipBlock:
        cidr: 78.0.0.0/8
    - ipBlock:
        cidr: 79.0.0.0/8
    - ipBlock:
        cidr: 80.0.0.0/8
    - ipBlock:
        cidr: 81.0.0.0/8
    - ipBlock:
        cidr: 82.0.0.0/8
    - ipBlock:
        cidr: 83.0.0.0/8
    - ipBlock:
        cidr: 84.0.0.0/8
    - ipBlock:
        cidr: 85.0.0.0/8
    - ipBlock:
        cidr: 86.0.0.0/8
    - ipBlock:
        cidr: 87.0.0.0/8
    - ipBlock:
        cidr: 88.0.0.0/8
    - ipBlock:
        cidr: 89.0.0.0/8
    - ipBlock:
        cidr: 90.0.0.0/8
    - ipBlock:
        cidr: 91.0.0.0/8
    - ipBlock:
        cidr: 92.0.0.0/8
    - ipBlock:
        cidr: 93.0.0.0/8
    - ipBlock:
        cidr: 94.0.0.0/8
    - ipBlock:
        cidr: 95.0.0.0/8
    - ipBlock:
        cidr: 96.0.0.0/8
    - ipBlock:
        cidr: 97.0.0.0/8
    - ipBlock:
        cidr: 98.0.0.0/8
    - ipBlock:
        cidr: 99.0.0.0/8
    - ipBlock:
        cidr: 100.0.0.0/8
    - ipBlock:
        cidr: 101.0.0.0/8
    - ipBlock:
        cidr: 102.0.0.0/8
    - ipBlock:
        cidr: 103.0.0.0/8
    - ipBlock:
        cidr: 104.0.0.0/8
    - ipBlock:
        cidr: 105.0.0.0/8
    - ipBlock:
        cidr: 106.0.0.0/8
    - ipBlock:
        cidr: 107.0.0.0/8
    - ipBlock:
        cidr: 108.0.0.0/8
    - ipBlock:
        cidr: 109.0.0.0/8
    - ipBlock:
        cidr: 110.0.0.0/8
    - ipBlock:
        cidr: 111.0.0.0/8
    - ipBlock:
        cidr: 112.0.0.0/8
    - ipBlock:
        cidr: 113.0.0.0/8
    - ipBlock:
        cidr: 114.0.0.0/8
    - ipBlock:
        cidr: 115.0.0.0/8
    - ipBlock:
        cidr: 116.0.0.0/8
    - ipBlock:
        cidr: 117.0.0.0/8
    - ipBlock:
        cidr: 118.0.0.0/8
    - ipBlock:
        cidr: 119.0.0.0/8
    - ipBlock:
        cidr: 120.0.0.0/8
    - ipBlock:
        cidr: 121.0.0.0/8
    - ipBlock:
        cidr: 122.0.0.0/8
    - ipBlock:
        cidr: 123.0.0.0/8
    - ipBlock:
        cidr: 124.0.0.0/8
    - ipBlock:
        cidr: 125.0.0.0/8
    - ipBlock:
        cidr: 126.0.0.0/8
    - ipBlock:
        cidr: 127.0.0.0/8
    - ipBlock:
        cidr: 128.0.0.0/8
    - ipBlock:
        cidr: 129.0.0.0/8
    - ipBlock:
        cidr: 130.0.0.0/8
    - ipBlock:
        cidr: 131.0.0.0/8
    - ipBlock:
        cidr: 132.0.0.0/8
    - ipBlock:
        cidr: 133.0.0.0/8
    - ipBlock:
        cidr: 134.0.0.0/8
    - ipBlock:
        cidr: 135.0.0.0/8
    - ipBlock:
        cidr: 136.0.0.0/8
    - ipBlock:
        cidr: 137.0.0.0/8
    - ipBlock:
        cidr: 138.0.0.0/8
    - ipBlock:
        cidr: 139.0.0.0/8
    - ipBlock:
        cidr: 140.0.0.0/8
    - ipBlock:
        cidr: 141.0.0.0/8
    - ipBlock:
        cidr: 142.0.0.0/8
    - ipBlock:
        cidr: 143.0.0.0/8
    - ipBlock:
        cidr: 144.0.0.0/8
    - ipBlock:
        cidr: 145.0.0.0/8
    - ipBlock:
        cidr: 146.0.0.0/8
    - ipBlock:
        cidr: 147.0.0.0/8
    - ipBlock:
        cidr: 148.0.0.0/8
    - ipBlock:
        cidr: 149.0.0.0/8
    - ipBlock:
        cidr: 150.0.0.0/8
    - ipBlock:
        cidr: 151.0.0.0/8
    - ipBlock:
        cidr: 152.0.0.0/8
    - ipBlock:
        cidr: 153.0.0.0/8
    - ipBlock:
        cidr: 154.0.0.0/8
    - ipBlock:
        cidr: 155.0.0.0/8
    - ipBlock:
        cidr: 156.0.0.0/8
    - ipBlock:
        cidr: 157.0.0.0/8
    - ipBlock:
        cidr: 158.0.0.0/8
    - ipBlock:
        cidr: 159.0.0.0/8
    - ipBlock:
        cidr: 160.0.0.0/8
    - ipBlock:
        cidr: 161.0.0.0/8
    - ipBlock:
        cidr: 162.0.0.0/8
    - ipBlock:
        cidr: 163.0.0.0/8
    - ipBlock:
        cidr: 164.0.0.0/8
    - ipBlock:
        cidr: 165.0.0.0/8
    - ipBlock:
        cidr: 166.0.0.0/8
    - ipBlock:
        cidr: 167.0.0.0/8
    - ipBlock:
        cidr: 168.0.0.0/8
    - ipBlock:
        cidr: 170.0.0.0/8
    - ipBlock:
        cidr: 171.0.0.0/8
    - ipBlock:
        cidr: 172.0.0.0/8
    - ipBlock:
        cidr: 173.0.0.0/8
    - ipBlock:
        cidr: 174.0.0.0/8
    - ipBlock:
        cidr: 175.0.0.0/8
    - ipBlock:
        cidr: 176.0.0.0/8
    - ipBlock:
        cidr: 177.0.0.0/8
    - ipBlock:
        cidr: 178.0.0.0/8
    - ipBlock:
        cidr: 179.0.0.0/8
    - ipBlock:
        cidr: 180.0.0.0/8
    - ipBlock:
        cidr: 181.0.0.0/8
    - ipBlock:
        cidr: 182.0.0.0/8
    - ipBlock:
        cidr: 183.0.0.0/8
    - ipBlock:
        cidr: 184.0.0.0/8
    - ipBlock:
        cidr: 185.0.0.0/8
    - ipBlock:
        cidr: 186.0.0.0/8
    - ipBlock:
        cidr: 187.0.0.0/8
    - ipBlock:
        cidr: 188.0.0.0/8
    - ipBlock:
        cidr: 189.0.0.0/8
    - ipBlock:
        cidr: 190.0.0.0/8
    - ipBlock:
        cidr: 191.0.0.0/8
    - ipBlock:
        cidr: 192.0.0.0/8
    - ipBlock:
        cidr: 193.0.0.0/8
    - ipBlock:
        cidr: 194.0.0.0/8
    - ipBlock:
        cidr: 195.0.0.0/8
    - ipBlock:
        cidr: 196.0.0.0/8
    - ipBlock:
        cidr: 197.0.0.0/8
    - ipBlock:
        cidr: 198.0.0.0/8
    - ipBlock:
        cidr: 199.0.0.0/8
    - ipBlock:
        cidr: 200.0.0.0/8
    - ipBlock:
        cidr: 201.0.0.0/8
    - ipBlock:
        cidr: 202.0.0.0/8
    - ipBlock:
        cidr: 203.0.0.0/8
    - ipBlock:
        cidr: 204.0.0.0/8
    - ipBlock:
        cidr: 205.0.0.0/8
    - ipBlock:
        cidr: 206.0.0.0/8
    - ipBlock:
        cidr: 207.0.0.0/8
    - ipBlock:
        cidr: 208.0.0.0/8
    - ipBlock:
        cidr: 209.0.0.0/8
    - ipBlock:
        cidr: 210.0.0.0/8
    - ipBlock:
        cidr: 211.0.0.0/8
    - ipBlock:
        cidr: 212.0.0.0/8
    - ipBlock:
        cidr: 213.0.0.0/8
    - ipBlock:
        cidr: 214.0.0.0/8
    - ipBlock:
        cidr: 215.0.0.0/8
    - ipBlock:
        cidr: 216.0.0.0/8
    - ipBlock:
        cidr: 217.0.0.0/8
    - ipBlock:
        cidr: 218.0.0.0/8
    - ipBlock:
        cidr: 219.0.0.0/8
    - ipBlock:
        cidr: 220.0.0.0/8
    - ipBlock:
        cidr: 221.0.0.0/8
    - ipBlock:
        cidr: 222.0.0.0/8
    - ipBlock:
        cidr: 223.0.0.0/8

  podSelector:
    matchLabels:
      app: client-one
  policyTypes:
  - Egress
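
As mentioned above, the long /8 list doesn't need to be written by hand; a small script can generate it. A sketch (it skips 10.0.0.0/8 and 169.0.0.0/8, matching the list above):

package main

import "fmt"

func main() {
	// Print ipBlock entries for every /8 from 1.0.0.0/8 to 223.0.0.0/8,
	// skipping the ranges that should stay unreachable.
	skip := map[int]bool{10: true, 169: true}
	for octet := 1; octet <= 223; octet++ {
		if skip[octet] {
			continue
		}
		fmt.Printf("    - ipBlock:\n        cidr: %d.0.0.0/8\n", octet)
	}
}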

Verification output:
The demo-app is reachable, and the prometheus service is not reachable.
The EC2 metadata API endpoint is not reachable because 169.0.0.0/8 was left out of the ipBlock list, and yahoo.com is reachable.

# k get --no-headers pods | awk '{print $1}' | grep ^client-one |xargs -I{} bash -c 'kubectl exec {} -- curl -o /dev/null -s --max-time 3 -w "%{url} %{http_code}\n" -L demo-app.default:8080'
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200

# k get --no-headers pods | awk '{print $1}' | grep ^client-one |xargs -I{} bash -c 'kubectl exec {} -- curl -o /dev/null -s --max-time 3 -w "%{url} %{http_code}\n" -L prometheus-kube-prometheus-prometheus.monitoring:9090'
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28

# k get --no-headers pods | awk '{print $1}' | grep ^client-one |xargs -I{} bash -c 'kubectl exec {} -- curl -o /dev/null -s --max-time 3 -w "%{url} %{http_code}\n" -L 169.254.169.254/latest/meta-data/'
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28
169.254.169.254/latest/meta-data/ 000
command terminated with exit code 28

# k get --no-headers pods | awk '{print $1}' | grep ^client-one |xargs -I{} bash -c 'kubectl exec {} -- curl -o /dev/null -s --max-time 3 -w "%{url} %{http_code}\n" -L yahoo.com'
yahoo.com 200
yahoo.com 200
yahoo.com 200
yahoo.com 200
yahoo.com 200
yahoo.com 200
yahoo.com 200
yahoo.com 200
yahoo.com 200
yahoo.com 200

@sjastis sjastis added help wanted Extra attention is needed and removed bug Something isn't working labels Jan 23, 2024
@mw-tlhakhan
Author

I need to revisit this, as I believe my network policy that blocks 10.254.0.0/16, which is my Kubernetes service CIDR, may not be applicable for network policies. I saw similar behavior with Calico, and when I blocked the Pod CIDR instead, it worked as expected.

This might come down to some deep details about the special nature of the Kubernetes service CIDR with respect to network policy and the kube-proxy implementation.

Here is the network policy I used with Calico, and I believe it would behave the same with aws-network-policy-agent. I replaced 10.254.0.0/16 with 10.0.0.0/16, which is my Pod CIDR network.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-internet-and-svc-8080
spec:
  egress:
  - ports:
    - port: 53
      protocol: UDP
    to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns

  - ports:
    - port: 8080
      protocol: TCP
    to:
    - ipBlock:
        cidr: 10.0.0.0/16

  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/16

  podSelector:
    matchLabels:
      app: client-one
  policyTypes:
  - Egress

Here is the output (using Calico). The prometheus-kube-prometheus-prometheus.monitoring service is properly blocked, while demo-app.default:8080 is allowed, which is what I expect from the network policy.

# k get --no-headers pods | grep Running | awk '{print $1}' | grep ^client-one |xargs -I{} bash -c 'kubectl exec {} -- curl -o /dev/null -s --max-time 3 -w "%{url} %{http_code}\n" -L prometheus-kube-prometheus-prometheus.monitoring:9090'
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28
prometheus-kube-prometheus-prometheus.monitoring:9090 000
command terminated with exit code 28

# k get --no-headers pods | grep Running | awk '{print $1}' | grep ^client-one |xargs -I{} bash -c 'kubectl exec {} -- curl -o /dev/null -s --max-time 3 -w "%{url} %{http_code}\n" -L demo-app.default:8080'
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200
demo-app.default:8080 200

I will need to verify this behavior with aws-network-policy-agent in place, but I suspect it is similar.

@Infra-Red

Hi @jayanthvn! I see a similar issue in my EKS environment:

> k describe po -n kube-system -l app.kubernetes.io/instance=aws-vpc-cni | grep -A10 'aws-eks-nodeagent:' |grep -e aws-eks-nodeagent -e "Image:" -e enable-network-policy -e Args
  aws-eks-nodeagent:
    Image:         602401143452.dkr.ecr.eu-west-1.amazonaws.com/amazon/aws-network-policy-agent:v1.0.7-eksbuild.1
    Args:
      --enable-network-policy=true

I have two egress rules added for 0.0.0.0/0 in the network policy object:

    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
      to:
        - ipBlock:
            cidr: 0.0.0.0/0
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
            - 10.0.0.0/8
            - 169.254.0.0/16
            - 172.16.0.0/12
            - 192.168.0.0/16

Applying this configuration results in a rule that only allows port 53 traffic, while I would expect all ports to be allowed, since both rules should be merged:

Key : IP/Prefixlen - 0.0.0.0/0 
-------------------
Value Entry :  0
Protocol -  UDP
StartPort -  53
Endport -  0
-------------------
-------------------
Value Entry :  1
Protocol -  TCP
StartPort -  53
Endport -  0

As a workaround, I can add port and endPort parameters covering all ports to the network policy, which results in the following rules:

# network policy object
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
      to:
        - ipBlock:
            cidr: 0.0.0.0/0
    - ports:
        - protocol: UDP
          port: 1
          endPort: 65535
        - protocol: TCP
          port: 1
          endPort: 65535
      to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
            - 10.0.0.0/8
            - 169.254.0.0/16
            - 172.16.0.0/12
            - 192.168.0.0/16
# ebpf map content
Key : IP/Prefixlen - 0.0.0.0/0 
-------------------
Value Entry :  0
Protocol -  UDP
StartPort -  53
Endport -  0
-------------------
-------------------
Value Entry :  1
Protocol -  TCP
StartPort -  53
Endport -  0
-------------------
-------------------
Value Entry :  2
Protocol -  UDP
StartPort -  1
Endport -  65535
-------------------
-------------------
Value Entry :  3
Protocol -  TCP
StartPort -  1
Endport -  65535

Do you have any pointers on what the root cause could be? Can we consider this a bug in the network-policy-agent? It seems that something goes wrong when merging two rules where one of the rules doesn't contain any ports configuration.

@achevuru
Contributor

@Infra-Red We will need to check it out, but I don't see the need for splitting the policy into two rules; i.e., why can't it be a single entry, since you want to allow port 53 for all IPs?

        - ipBlock:
            cidr: 0.0.0.0/0
            except:
            - 10.0.0.0/8
            - 169.254.0.0/16
            - 172.16.0.0/12
            - 192.168.0.0/16
   

If the intention was to allow port 53 access for all IPs but to exclude the CIDRs under except for all other port and protocol combinations, then the above policy doesn't look right. One entry says to exclude the CIDRs (under except) for all ports and protocols, and the other says to allow port 53 for that same set of CIDRs. Ideally, you should instead specify the policy similarly to the policy in the prior comments.

@pgier

pgier commented Jul 24, 2024

@Infra-Red I have pretty much the same use case as you (allow internal DNS and allow everything else on external IPs).
This type of policy works with Calico, but not with the AWS agent. See aws/amazon-network-policy-controller-k8s#121.

@t29-cristian

@pgier and @Infra-Red, did you end up using Calico as the network policy engine to work around this limitation? I second that something is really odd when you try to use the except block with IPv4 and IPv6 on an IPv6 EKS cluster.

@nileshbhadana

I'm facing the same issue while trying to add a blanket block rule and then allow traffic for certain ports, but the node agent ends up allowing all ports for the CIDR range.

Policy file:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-egress-block
spec:
  podSelector:
    matchLabels:
      app: pod1
  policyTypes:
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-except-dev-cidr-block
spec:
  podSelector:
    matchLabels:
      app: pod1
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 172.10.0.0/16
        - 172.12.0.0/16
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-specific-ports
spec:
  podSelector:
    matchLabels:
      app: pod1
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 172.10.0.0/16
    ports:
    - protocol: TCP
      port: 15432
      endPort: 15432
