MultiProcessCollector._accumulate_metrics always drops timestamps #1056

Open
roganartu opened this issue Aug 19, 2024 · 1 comment · May be fixed by #1057

Summary

MultiProcessCollector._accumulate_metrics drops the timestamp from every exported sample, regardless of whether the selected sample or the multiproc_mode in use needs it. As a result, metrics-file compaction routines (for example, ones that clean up old DB files left behind by long-running web worker processes) end up dropping samples that they otherwise wouldn't.

Details

MultiProcessCollector.merge calls _read_metrics to read all metrics (including dupes) from the DB files, followed by _accumulate_metrics to dedupe these based on multiproc_mode.
https://github.com/prometheus/client_python/blob/master/prometheus_client/multiprocess.py#L35-L44
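In outline, merge is a two-step pipeline (paraphrased from the linked source, not quoted verbatim):

def merge(files, accumulate=True):
    # 1. Read every per-pid sample out of the given mmap DB files.
    metrics = MultiProcessCollector._read_metrics(files)
    # 2. Collapse the per-pid samples according to each metric's
    #    multiprocess mode, honoring the accumulate flag.
    return MultiProcessCollector._accumulate_metrics(metrics, accumulate)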

In the docstring, it explicitly calls out the use case this affects (compaction by writing back to the mmap files):

But if writing the merged data back to mmap files, use
        accumulate=False to avoid compound accumulation.

_accumulate_metrics considers the timestamp when deciding which sample to keep:
https://github.com/prometheus/client_python/blob/master/prometheus_client/multiprocess.py#L97-L116

However, it does not include this timestamp in the recreated sample it returns:
https://github.com/prometheus/client_python/blob/master/prometheus_client/multiprocess.py#L153
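That is, the accumulated values are rebuilt into Samples without passing the timestamp along, roughly:

# Paraphrasing the linked line: the Sample constructor is called without
# a timestamp argument, so it defaults to None.
metric.samples = [
    Sample(name_, dict(labels), value)
    for (name_, labels), value in samples.items()
]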

Impact

We have an internal compaction tool that effectively just calls the MultiProcessCollector.merge method periodically, wrapped in a flock (sketched below), and given the prevalence of open issues such as #568 I suspect others do too. Without this, long-running gunicorn deployments with worker rotation settings will accumulate large numbers of stale files that slow scrape times significantly. The compaction ignores live pids, only compacting DBs from dead ones.
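For illustration only, a minimal sketch of such a compaction pass (this is not our actual tool; live_worker_pids, pid_of, and write_back_to_db are hypothetical helpers, and the mmap write-back format is out of scope here):

import fcntl
import glob
import os

from prometheus_client.multiprocess import MultiProcessCollector

MULTIPROC_DIR = os.environ["PROMETHEUS_MULTIPROC_DIR"]

def live_worker_pids():
    # Hypothetical helper: return the set of pids of workers still running.
    raise NotImplementedError

def pid_of(db_path):
    # Assumes the library's <type>_<pid>.db naming (e.g. counter_1234.db);
    # returns None for anything that doesn't end in a numeric field.
    tail = os.path.basename(db_path)[:-len(".db")].rsplit("_", 1)[-1]
    return int(tail) if tail.isdigit() else None

def write_back_to_db(metrics, path):
    # Hypothetical helper: serialize the merged metrics (timestamps included)
    # back into a single mmap-format DB file at `path`.
    raise NotImplementedError

def compact_dead_worker_dbs():
    lock_path = os.path.join(MULTIPROC_DIR, "compaction.lock")
    with open(lock_path, "w") as lock:
        # Serialize against scrapes and other compaction runs.
        fcntl.flock(lock, fcntl.LOCK_EX)
        live = live_worker_pids()
        dead_files = [
            path for path in glob.glob(os.path.join(MULTIPROC_DIR, "*.db"))
            if pid_of(path) is not None and pid_of(path) not in live
        ]
        if not dead_files:
            return
        collector = MultiProcessCollector(registry=None, path=MULTIPROC_DIR)
        # accumulate=False because the merged data is written back to mmap
        # files, exactly the case the merge() docstring calls out.
        merged = collector.merge(dead_files, accumulate=False)
        # pid 0 is never a live worker, so the compacted file is picked up
        # and re-merged on the next pass.
        write_back_to_db(merged, os.path.join(MULTIPROC_DIR, "compacted_0.db"))
        for path in dead_files:
            os.remove(path)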

This issue means such a compaction pass can drop samples that actually carry the newest timestamp, simply because they were compacted from a dead pid. Consider the following:

t1. process 1 and 2 spawn and define gauge my_gauge with multiproc_mode="mostrecent"
t2. process 1 samples my_gauge with timestamp=time.time()
t3. process 2 samples my_gauge with timestamp=time.time()
t4. process 2 dies
t5. compaction calls MultiProcessCollector.merge as part of stale DB compaction.
t6. MultiProcessCollector._accumulate_metrics returns the process 2 sample without a timestamp, which is then written to the compacted DB
t7. both future compaction passes and regular metrics exposition (via a scrape or otherwise) now drop the process 2 sample despite it being the newest
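The t7 behavior follows directly from the mostrecent comparison in _accumulate_metrics, which coerces a missing timestamp to 0.0. A standalone illustration with made-up values (the comparison is paraphrased, not the library's exact code):

from prometheus_client.samples import Sample

# Process 1's sample still carries its (older) timestamp.
live = Sample("my_gauge", {}, 17.0, timestamp=1724100000.0)
# Process 2's (newer) value was compacted and lost its timestamp (t6).
compacted = Sample("my_gauge", {}, 42.0, timestamp=None)

def mostrecent_pick(a, b):
    # Paraphrase of the mostrecent rule: the higher timestamp wins, and a
    # missing timestamp is treated as 0.0, so it can never win.
    return a if float(a.timestamp or 0) >= float(b.timestamp or 0) else b

print(mostrecent_pick(live, compacted))  # picks process 1's stale value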

Proposed Solution

Since the collector already tracks the sample timestamp in a defaultdict(float) and collection considers 0.0 and None equivalent, I think it should be as simple as changing the exported sample to this (or the equivalent):

timestamped_samples = []
for (name_, labels), value in samples.items():
    # samples is keyed by (name, label-items-without-pid); look up the
    # timestamp that _accumulate_metrics already tracked for that key.
    without_pid_key = (name_, tuple(l for l in labels if l[0] != 'pid'))
    timestamped_samples.append(
        prometheus_client.samples.Sample(
            name_, dict(labels), value, sample_timestamps[without_pid_key]
        )
    )
metric.samples = timestamped_samples
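
For keys that were never tracked with a timestamp (the non-mostrecent modes), the defaultdict(float) lookup simply yields 0.0, which, per the above, collection already treats the same as no timestamp, so the other multiproc modes should be unaffected.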

petzeb commented Oct 14, 2024

Stumbled across this issue. We recently discovered that we also need to run a compaction routine when using this prometheus client in multiprocess mode. The way our system runs, it spawns a subprocess for each request it handles. This led to us fairly quickly accumulating lots and lots of metrics DB files, and after a while it became too slow to realistically scrape.

We have an internal compaction tool that effectively just calls the MultiProcessCollector.merge method periodically, wrapped in a flock, and given the prevalence of open issues such as #568 I suspect others do too.

I can definitely confirm that others do at least something similar. This makes me wonder whether some form of compaction routine should be added to this library.

As mentioned, there are a few open issues that seem at least somewhat indicative that this is a general problem. Is adding general support for compacting dead-process metrics a feature that seems reasonable to add?

CC: @roganartu @csmarchbanks
