-
RKE2 has a konnectivity-service equivalent built into the main rke2 process. It is configured via the --egress-selector-mode option. The content covering this flag has not yet been migrated over to the RKE2 docs; you can reference the k3s docs for this feature at https://docs.k3s.io/installation/network-options#control-plane-egress-selector-configuration
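For example, the mode can be set in the server's config file. A minimal sketch, assuming the standard config location, with the mode names taken from the k3s docs linked above:

```yaml
# /etc/rancher/rke2/config.yaml (server) -- minimal sketch
# Valid modes per the linked k3s docs: disabled, agent, pod, cluster
# ("agent" is the documented default)
egress-selector-mode: cluster
```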
-
Thank you @brandond. I did use the flag, but I might be doing something wrong, because I am not seeing the expected behavior. My setup is listed below.

RKE2 version: v1.28.3+rke2r1

I created 1 master node in region A and 1 worker node in region B. Each node has 2 IP addresses on eth0: a public IP (global) and a private IP (local to the region).

Config used to create the master1 node (region A):
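(Placeholder values below; a sketch of the shape of the config rather than the original file.)

```yaml
# /etc/rancher/rke2/config.yaml on master1 -- sketch with placeholder addresses
node-name: master1
node-ip: 10.0.1.10              # private IP on eth0 (local to region A)
node-external-ip: 203.0.113.10  # public IP on eth0
egress-selector-mode: cluster   # the flag under discussion
cni: cilium                     # Cilium is the CNI mentioned below
```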
Config used to create worker1 node (region B):
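(Again, placeholder values sketching the shape of the config.)

```yaml
# /etc/rancher/rke2/config.yaml on worker1 -- sketch with placeholder addresses
server: https://203.0.113.10:9345   # master1's public IP, supervisor port
token: <cluster-join-token>
node-name: worker1
node-ip: 10.1.2.20              # private IP on eth0 (local to region B)
node-external-ip: 198.51.100.20 # public IP on eth0
```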
I can see that both nodes are ready:
After this, I tried running an nginx pod on worker1 and tried to access the service through the apiserver proxy, but it is unable to route the traffic from the apiserver to the nginx pod.
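For concreteness, this is the kind of request that exercises that path, assuming a Service named nginx on port 80 in the default namespace (names here are hypothetical):

```sh
# Routes through the apiserver's egress selector / proxy to the pod network
kubectl get --raw /api/v1/namespaces/default/services/nginx:80/proxy/
```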
I tried different egress-selector modes, but nothing has helped. I am not sure if it's because I am using private node IPs, which Cilium and VXLAN may not be happy with. If I use the public IP as the node-ip on both the master and the worker, then it works. My assumption was that a service like konnectivity allows the control plane to reach worker nodes that might be behind a firewall by using a forward proxy. However, if I use private IPs for the worker nodes, it fails to route traffic to them. Hence I am confused about where my thinking is wrong.

Update:
-
The konnectivity/egress stuff ONLY handles connections from the apiserver out to things running in the cluster. If you want pods (other than the apiserver) that are running on the server to be able to reach things hosted by pods running on the agents, then you need to ensure that your CNI works properly over the links between the two nodes. Note that most CNIs use vxlan as their default transport, which is NOT secure and should not be run over the internet.
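One mitigation sketch, assuming the bundled rke2-cilium chart: enable Cilium's WireGuard transparent encryption via a HelmChartConfig so inter-node pod traffic is encrypted on the wire. The values follow the upstream Cilium chart; verify them against the chart version you run.

```yaml
# Sketch: encrypt pod-to-pod traffic between nodes with WireGuard
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    encryption:
      enabled: true
      type: wireguard
```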
-
While trying out a bare-metal setup, I thought disabling the cloud controller was a good idea, and that led me to the same issue with node IPs not being set. Is it worth adding a small blurb to the documentation mentioning that this is not recommended? Thanks!
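For context, the setting in question is a single server flag; a sketch of the relevant config line:

```yaml
# /etc/rancher/rke2/config.yaml -- sketch
# With the built-in cloud controller disabled, nothing initializes the Node
# objects, so node addresses stay unset unless an external cloud provider
# is deployed in its place.
disable-cloud-controller: true
```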
-
Hi, I am trying to configure the konnectivity service with RKE2. By default, I see an egress-selector-config.yaml configured for the apiserver, with the following content:
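(For reference, a file in the upstream EgressSelectorConfiguration format looks roughly like this; the URL below is a placeholder, not the exact RKE2-generated content.)

```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster
  connection:
    proxyProtocol: HTTPConnect
    transport:
      tcp:
        url: https://127.0.0.1:9345   # placeholder; check the generated file
```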
If I try to change it, it gets auto-generated/overwritten whenever rke2-server.service is restarted.
In my setup, I am running control plane remotely (servers in one datacenter and workers in another). Worker nodes are able to join the cluster and pods are getting scheduled.
I am able to get konnectivity-server and konnectivity-agent installed on my rke2 cluster.
I think the apiserver proxy is using port 9345 instead of 8132, which I had configured for konnectivity-server. I am not sure if this is because of the TCP URL set in the egress-selector-config.yaml file.
Am I reading this correctly, that it's using 9345 instead of 8132 for the konnectivity service? Is there any documentation on how to set up the konnectivity service with RKE2? Kindly let me know.
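A quick way to confirm which endpoint the apiserver dials for egress, assuming the default RKE2 data directory (the path here is an assumption):

```sh
# Inspect the generated egress selector config for the TCP URL/port
grep -A4 'tcp:' /var/lib/rancher/rke2/server/etc/egress-selector-config.yaml
# Look for live connections on the supervisor port
ss -tnp | grep 9345
```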