Set worker_processes to a static number #140
base: master
Conversation
The linked issue describes "allow user to configure", but the PR is setting a default for everyone. Why was it decided to change the default rather than make it configurable for the exception (which apparently is a node with a lot of (v)CPUs)? Why was 4 determined to be a reasonable default? And won't this affect upgraded clusters that move from `auto` to 4?
Hi @superseb, the issue is related; it's not a direct fix. However, the reason I could see to set a default: nginx can handle many requests with a small number of workers when acting as a reverse proxy, and for the purpose of kubelet -> kube-apiserver traffic on a single node, 4 was determined to be adequate. Totally open to input here; the intention is not to choose a particular value, but to avoid nginx-proxy inadvertently consuming large amounts of PIDs, file handles, etc. without a way to avoid it.
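For context, a minimal sketch of the kind of stream-proxy configuration nginx-proxy renders, with the proposed static value in place. This is illustrative only, not the exact rke-tools template: the upstream address is a placeholder (the real control plane addresses are filled in by the entrypoint at startup), and the `worker_processes` line is the only part this PR changes:

```nginx
# Sketch only: illustrative shape of the generated config.
worker_processes 4;          # proposed static default (previously "auto")

events {
    worker_connections 1024;
}

stream {
    upstream kube_apiserver {
        server 172.31.1.10:6443;   # placeholder control plane node address
    }

    server {
        listen 6443;
        proxy_pass kube_apiserver;
    }
}
```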
@dkeightley Given the issue (which has not seen any activity so far), it is about limiting nginx-proxy `worker_processes` so it doesn't configure 100 processes when 100 (v)CPUs/cores are found. If we set it to 4, what are the performance implications on 2- and 4-core machines?
See ^^. And rebase and push so the build can complete.
Thanks @superseb, performance changes should be minimal with such a small increase in the number of processes, compared to nodes with high core counts. If you think it's a better fit for nginx-proxy,
I guess you are right, but it's very simple to test. Is this change tested or is this based on assumption/guessing? The only downside to setting it statically is that we are changing it for all installs at once with no way to change it back to the old behavior. Please rebase instead of adding a merge commit (or, as it is now, squash the commits).
Force-pushed from 7931858 to 8816bdd
Testing performed using the `derekdemo/rke-tools:worker_processes` image.

Tested on a node with 2 CPUs:

```
# cat /proc/cpuinfo | grep processor | wc -l
2
```

Worker node with the default rke-tools image:

```
# docker ps | grep nginx-proxy
ca6a69f06d84   rancher/rke-tools:v0.1.80   "nginx-proxy CP_HOST…"   30 minutes ago   Up 30 minutes   nginx-proxy

# docker exec nginx-proxy ps aux | grep worker
   12 nginx     0:00 nginx: worker process
   13 nginx     0:00 nginx: worker process

# docker exec nginx-proxy grep worker_processes /etc/nginx/nginx.conf
worker_processes auto;
```

Worker node with the updated image:

```
# docker ps | grep nginx-proxy
9d8ebc974a85   derekdemo/rke-tools:worker_processes   "nginx-proxy CP_HOST…"   20 seconds ago   Up 18 seconds   nginx-proxy

# docker exec nginx-proxy ps aux | grep worker
   13 nginx     0:00 nginx: worker process
   14 nginx     0:00 nginx: worker process
   15 nginx     0:00 nginx: worker process
   16 nginx     0:00 nginx: worker process

# docker exec nginx-proxy grep worker_processes /etc/nginx/nginx.conf
worker_processes 4;
```

No issues observed in the kubelet logs with the new container image; the node was active/Ready from the Kubernetes perspective.
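As a rough way to quantify the PID/file-handle concern, the per-worker thread and descriptor counts can also be checked from the host. A generic sketch, assuming the container's worker processes are visible in the host PID namespace (the Docker default) and that it runs as root so `/proc/<pid>/fd` is readable:

```sh
# Count threads and open file descriptors for each nginx worker process.
# Note: pgrep -f matches any nginx workers on the node, not only the ones
# belonging to the nginx-proxy container.
for pid in $(pgrep -f 'nginx: worker process'); do
  echo "pid $pid: threads=$(ls /proc/$pid/task | wc -l) fds=$(ls /proc/$pid/fd | wc -l)"
done
```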
This will at least need some basic load testing, but I'm going to assume that the downside (having too many workers) is worse than the upside.
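For what a basic check of the proxy path could look like, a hedged sketch: firing repeated requests from a worker node at the local proxy endpoint. This assumes nginx-proxy listens on 127.0.0.1:6443 and forwards to the kube-apiserver; `/healthz` may return 401/403 depending on the cluster's anonymous-auth settings, which is still enough to exercise connections through the proxy:

```sh
# Send 1000 requests through the local nginx-proxy and summarise the
# HTTP status codes returned by the kube-apiserver behind it.
for i in $(seq 1 1000); do
  curl -sk -o /dev/null -w '%{http_code}\n' https://127.0.0.1:6443/healthz
done | sort | uniq -c
```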
I'm fine with the change as it seems to resolve an issue for support. I don't think the testing was sufficient, but if QA is going to test it in bigger clusters/with more load, it should be okay.

Good to merge with
Thanks @annablender, the testing so far has been using a drop-in replacement of the proposed rke-tools container to confirm the worker count is applied and kubelet connectivity is unaffected.

A load test would be worthwhile, however it's expected that the value for `worker_processes` is adequate for this use case. Just for clarity, rke-tools in this scenario is used only to run the nginx-proxy on worker nodes; the change only affects the kubelet connectivity to the kube-apiserver. Setting `worker_processes` statically results in:

```
# docker exec -it nginx-proxy grep worker /etc/nginx/nginx.conf
worker_processes 4;
    worker_connections 1024;
```
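For rough capacity context (a back-of-the-envelope estimate, not a measured figure): 4 workers × 1024 `worker_connections` allows about 4096 open connections, and since each proxied connection consumes both a downstream and an upstream socket, that is on the order of ~2000 concurrent proxied connections for a single node's components, while capping the process count.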
Anything else we need to do to move forward with QA?
On clusters with large nodes the `auto` value for `worker_processes` can scale too high; with the number of threads (32 per worker) this can consume many file handles. Determined 4 as a reasonable default for the use case.
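For scale: on the 100-(v)CPU node mentioned in the discussion, `auto` would start 100 workers, which at 32 threads per worker is roughly 3,200 threads (and their associated file handles) just for the kubelet-to-apiserver proxy, versus 4 workers and on the order of 128 threads with the static default.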
Related: rancher/rancher#27693