fix: avoid resources lock contention utilizing channel #629
base: master
Conversation
The issue

In large clusters where Argo CD monitors numerous resources, the processing of watches becomes significantly slow; in our case (total k8s resources in cluster: ~400k, Pods: ~76k, ReplicaSets: ~52k), it takes around 10 minutes. As a result, the Argo CD UI displays outdated information, impacting several features that rely on sync waves, like PruneLast. Eventually, the sheer volume of events from the cluster overwhelmed the system, causing Argo CD to stall completely.

To address this, we disabled the tracking of Pods and ReplicaSets, although this compromises one of the main benefits of the Argo CD UI. We also filtered out irrelevant events and tried to optimize various settings in the application controller. However, vertical scaling of the application controller had no effect, and horizontal scaling is not an option for a single cluster due to sharding limitations.

Issue causes

During the investigation, it was found that the problem lies in the following:

Patched v2.10.9

v2.10.9 was patched with the following commits. Though the patches significantly improve performance, Argo CD still cannot handle the load from large clusters. The screenshot shows one of the largest clusters, running the v2.10.9 build patched with the commits above. As can be seen, once Pods and ReplicaSets are enabled for tracking, the cluster event count falls close to zero and reconciliation time increases drastically. Number of pods in the cluster: ~76k. A more detailed comparison of the different patched versions is in this comment: argoproj/argo-cd#8172 (comment)

The potential reason is lock contention. A few more metrics were added here, and it was found that when the number of events is significant, it sometimes takes ~5 minutes to acquire a lock, which delays reconciliation. The suggested fix #602 to optimize the lock usage did not improve the situation in large clusters.

Avoid resources lock contention utilizing channel

Since we still have significant lock contention in massive clusters, and the approaches above didn't resolve the issue, another approach has been considered; it is part of this PR. When each goroutine must acquire a write lock, we can't handle more than one event at a time. What if we introduce a channel to which all received events are sent, and make one goroutine responsible for processing events from that channel in batches? That way, the locking moves out of each watch goroutine and into the single goroutine that processes events from the channel. With only one place where the write lock is acquired, the lock contention is gone.

The fix results

As can be seen from the metrics, once the fixed version was deployed and Node, ReplicaSet, and Pod tracking was enabled, the number of cluster events remained stable and didn't drop.

Conclusions
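The single-consumer batching idea described above can be sketched roughly as follows. This is a minimal illustration of the pattern only, not the actual gitops-engine code: `eventMeta`, the field names, the channel buffer size, and `run` are all placeholders.

```go
package main

import (
	"fmt"
	"sync"
)

// eventMeta stands in for the PR's event wrapper; the name is borrowed
// from the PR discussion, but the shape here is a placeholder.
type eventMeta struct {
	key string
}

type clusterCache struct {
	lock      sync.RWMutex
	resources map[string]eventMeta
	eventCh   chan eventMeta
	done      chan struct{}
}

// processEvents is the single consumer: instead of every watch goroutine
// taking the write lock per event, events are drained from the channel in
// batches and applied under one lock acquisition per batch.
func (c *clusterCache) processEvents() {
	defer close(c.done)
	for ev := range c.eventCh {
		batch := []eventMeta{ev}
	drain:
		for {
			select {
			case more, open := <-c.eventCh:
				if !open {
					break drain
				}
				batch = append(batch, more)
			default:
				break drain
			}
		}
		c.lock.Lock()
		for _, e := range batch {
			c.resources[e.key] = e
		}
		c.lock.Unlock()
	}
}

// run simulates several watch goroutines producing events and returns the
// number of resources stored once everything has been processed.
func run(producers, eventsEach int) int {
	c := &clusterCache{
		resources: make(map[string]eventMeta),
		eventCh:   make(chan eventMeta, 1024),
		done:      make(chan struct{}),
	}
	go c.processEvents()

	var wg sync.WaitGroup
	for i := 0; i < producers; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for j := 0; j < eventsEach; j++ {
				// Producers only send; they never touch the lock.
				c.eventCh <- eventMeta{key: fmt.Sprintf("res-%d-%d", id, j)}
			}
		}(i)
	}
	wg.Wait()
	close(c.eventCh)
	<-c.done
	return len(c.resources)
}

func main() {
	fmt.Println(run(8, 100)) // 800 unique keys, applied in batches
}
```

The key property is that the producers never block on the mutex: contention is limited to one lock acquisition per batch in the single consumer.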
Your analysis is excruciatingly thorough, I love it! I've posted it to SIG Scalability, and we'll start analyzing ASAP. Please be patient, it'll take us a while to give it a really thorough review.
```diff
@@ -864,6 +910,8 @@ func (c *clusterCache) sync() error {
 		return err
 	}

+	go c.processEvents()
```
I see this in the `sync` function comment:

// When this function exits, the cluster cache is up to date, and the appropriate resources are being watched for
// changes.

If I understand this change correctly (and the associated test changes), by processing these events in a goroutine, we're breaking the guarantee that `sync` will completely update the cluster cache. Is that correct?
These changes do not break the guarantee that `sync` will completely update the cluster cache.

- The `sync` function populates the cluster cache when it's run.
  gitops-engine/pkg/cache/cluster.go, line 934 in a16e663: `c.setNode(c.newResource(un))`
- The `processEvents` goroutine processes the future events that are received in the `watchEvents` goroutine.
  gitops-engine/pkg/cache/cluster.go, line 962 in a16e663: `go c.watchEvents(ctx, api, resClient, ns, resourceVersion)`
- The `watchEvents` goroutine watches for events from k8s resource types. Once an event is received, it's processed.
  gitops-engine/pkg/cache/cluster.go, line 706 in a16e663: `case event, ok := <-w.ResultChan():`
- The event is sent to the channel that is read in the `processEvents` goroutine, where the processing is done in bulk.
  gitops-engine/pkg/cache/cluster.go, line 1299 in a16e663: `c.eventMetaCh <- eventMeta{event, un}`
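To make the ordering concrete, here is a rough sketch of the guarantee being described. It is illustrative only: `list`, the types, and the field names are assumptions, not the real cluster.go code. `sync` fully populates the cache before returning, and only events arriving afterwards flow through the channel into `processEvents`.

```go
package main

import (
	"fmt"
	"sync"
)

// resource and clusterCache are simplified stand-ins for the real types.
type resource struct{ name string }

type clusterCache struct {
	lock      sync.RWMutex
	resources map[string]resource
	eventCh   chan resource
}

// list stands in for the full LIST call against the API server.
func (c *clusterCache) list() []resource {
	return []resource{{"deploy-a"}, {"rs-b"}, {"pod-c"}}
}

// sync populates the cache completely before returning, preserving the
// documented guarantee. Only events received after the LIST are handled
// asynchronously, by the processEvents goroutine started at the end.
func (c *clusterCache) sync() error {
	items := c.list()
	c.lock.Lock()
	for _, r := range items {
		c.resources[r.name] = r
	}
	c.lock.Unlock()
	go c.processEvents() // consumes only future watch events
	return nil
}

// processEvents applies future events; it never sees the initial LIST.
func (c *clusterCache) processEvents() {
	for ev := range c.eventCh {
		c.lock.Lock()
		c.resources[ev.name] = ev
		c.lock.Unlock()
	}
}

func main() {
	c := &clusterCache{resources: map[string]resource{}, eventCh: make(chan resource)}
	_ = c.sync()
	// The cache is already complete here, before any watch event arrives.
	c.lock.RLock()
	fmt.Println(len(c.resources)) // 3
	c.lock.RUnlock()
}
```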
@mpelekh would you be interested in joining a SIG Scalability meeting to talk through the changes?
Could you open an Argo CD PR pointing to this commit so that we can run all of Argo's tests?
@crenshaw-dev Yes, I'd be happy to join the SIG Scalability meeting to discuss the changes. Please let me know the time and details, or if there's anything specific I should prepare in advance.
Great! The event is on the Argoproj calendar, and we coordinate in CNCF Slack. The next meeting is two Wednesdays from now at 8am eastern time. No need to prepare anything really, just be prepared to answer questions about the PR. :-) |
@crenshaw-dev Sure. Here it is - argoproj/argo-cd#20329. |
Problem statement is in argoproj/argo-cd#8172 (comment)
The IterateHierarchyV2 change significantly improved performance, getting us ~90% of the way there. But on huge clusters, we still have significant lock contention.
The fix in this pull request approaches the problem differently: it avoids lock contention by utilizing a channel to process events from the cluster.
More details are in the comments.