Kubelet Liveness Probe Fails until 6443 is allowed in the Egress Network Policy #5439
Unanswered
LongBeachHXC
asked this question in Q&A
Replies: 1 comment 1 reply
-
I don't know anything about your application and you haven't provided any details, but I'm guessing it must need access to the Kubernetes APIs to pass its health checks? The apiserver does not do health checking; health checks are performed locally, between the kubelet and the processes running in the pod. They also do not require any network policy to allow them, as they occur within the pod sandbox.
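As a reference for the mechanism described above, this is what a typical liveness probe looks like in a pod spec; the port and path here are illustrative, not taken from the question:

```yaml
# Sketch of a liveness probe. The kubelet on the pod's node performs this
# HTTP GET against the pod's own IP; it is not routed through the cluster
# network policies the way normal pod-to-pod traffic is.
livenessProbe:
  httpGet:
    path: /healthz       # hypothetical health endpoint
    port: 8080           # hypothetical application port
  initialDelaySeconds: 5
  periodSeconds: 10
```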
-
I apologize if this is documented somewhere, but I haven't been able to find it.
If someone could shed some light on this, it would be greatly appreciated.
I have a default-deny policy in every namespace, so unless traffic is explicitly allowed, nothing can go anywhere.
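For context, a default-deny policy like the one I'm describing is typically written like this (the namespace name is a placeholder):

```yaml
# Sketch of a default-deny policy: the empty podSelector matches every pod
# in the namespace, and declaring both policy types with no rules blocks
# all ingress and egress that isn't allowed by another policy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-app        # hypothetical namespace
spec:
  podSelector: {}          # empty selector matches all pods
  policyTypes:
    - Ingress
    - Egress
```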
In my application's namespace, I have a network policy that allows ingress and egress on several ports to and from 0.0.0.0/0.
I deploy my application and the pods never reach a Ready state because they fail their liveness probes.
For testing, I added an allow-all policy and my pods immediately went into a Ready state, which confirms the issue is with the network policies.
I found an issue in the Calico repo that talks about requiring access to the K8s API server. Based on that, I removed the allow-all policy I was testing with and modified the original network policy to allow egress on 6443, and my pods immediately went into a Ready state.
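For reference, the egress rule I added looks roughly like this (names are placeholders; the CIDR mirrors the 0.0.0.0/0 rules already in my policy):

```yaml
# Sketch of the egress rule that made the pods go Ready: it permits
# outbound TCP traffic to port 6443, the conventional kube-apiserver port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apiserver-egress   # hypothetical name
  namespace: my-app              # hypothetical namespace
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 6443
```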
I'm wondering: why is it necessary to open access to the API server for the liveness probes to pass?
Thank you for your help.