Here are some recent and important revisions. 👉 Complete list of results.
Most recent pystats on main (f6cc7c8)
date | fork/ref | hash/flags | vs. 3.12.6: | vs. 3.13.0rc2: | vs. base: |
---|---|---|---|---|---|
2024-10-22 | python/34653bba644aa5481613 | 34653bb | 1.03x ↓ 📄📈 | 1.05x ↓ 📄📈 | |
2024-10-22 | python/34653bba644aa5481613 | 34653bb (NOGIL) | 1.48x ↓ 📄📈 | 1.50x ↓ 📄📈 | 1.43x ↓ 📄📈🧠 |
2024-10-21 | python/d0bfff47fb2aea9272b5 | d0bfff4 | 1.00x ↑ 📄📈 | 1.01x ↓ 📄📈 | |
date | fork/ref | hash/flags | vs. 3.12.6: | vs. 3.13.0rc2: | vs. base: |
---|---|---|---|---|---|
2024-10-29 | python/d4b6d84cc84029b598fc | d4b6d84 | 1.00x ↓ 📄📈 | 1.02x ↓ 📄📈 | |
2024-10-29 | python/d4b6d84cc84029b598fc | d4b6d84 (NOGIL) | 1.55x ↓ 📄📈 | 1.58x ↓ 📄📈 | 1.55x ↓ 📄📈🧠 |
2024-10-28 | python/85799f1ffd5f285ef93a | 85799f1 (NOGIL) | 1.55x ↓ 📄📈 | 1.57x ↓ 📄📈 | 1.55x ↓ 📄📈🧠 |
2024-10-28 | python/85799f1ffd5f285ef93a | 85799f1 | 1.00x ↑ 📄📈 | 1.01x ↓ 📄📈 | |
2024-10-27 | python/19e93e2e269889ecb3c4 | 19e93e2 (NOGIL) | 1.54x ↓ 📄📈 | 1.56x ↓ 📄📈 | 1.53x ↓ 📄📈🧠 |
2024-10-27 | python/19e93e2e269889ecb3c4 | 19e93e2 | 1.00x ↓ 📄📈 | 1.02x ↓ 📄📈 | |
2024-10-26 | python/f6cc7c8bd01d8468af70 | f6cc7c8 | 1.00x ↑ 📄📈 | 1.01x ↓ 📄📈 | |
2024-10-26 | python/f6cc7c8bd01d8468af70 | f6cc7c8 (NOGIL) | 1.55x ↓ 📄📈 | 1.57x ↓ 📄📈 | 1.55x ↓ 📄📈🧠 |
\* indicates that the exact same version of pyperformance was not used.

Improvement of the geometric mean of key merged benchmarks, computed with `pyperf compare`. The results have a resolution of 0.01 (1%).
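The headline numbers above can be reproduced in spirit with a few lines of Python. The sketch below is our illustration, not pyperf's actual implementation: it combines hypothetical per-benchmark timing ratios with a geometric mean and formats the result at the table's 0.01 resolution.

```python
from math import prod

def geometric_mean_speedup(base_times, head_times):
    """Combine per-benchmark base/head timing ratios with a geometric mean.

    A result above 1.0 means the head commit is faster overall; below 1.0,
    slower.  Illustrative only -- pyperf's real comparison is more involved.
    """
    ratios = [base / head for base, head in zip(base_times, head_times)]
    return prod(ratios) ** (1 / len(ratios))

# Hypothetical mean benchmark times, in seconds:
base = [1.00, 2.00, 0.50]
head = [0.90, 2.10, 0.45]
print(f"{geometric_mean_speedup(base, head):.2f}x")  # formatted at 0.01 resolution
```

pyperf itself reports these comparisons via `pyperf compare_to`, as described below.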
Visit the 🔒 benchmark action and click the "Run Workflow" button.
The available parameters are:

- `fork`: The fork of CPython to benchmark. If benchmarking a pull request, this would normally be your GitHub username.
- `ref`: The branch, tag, or commit SHA to benchmark. If a SHA, it must be the full SHA, since finding it by a prefix is not supported.
- `machine`: The machine to run on. One of `linux-amd64` (default), `windows-amd64`, `darwin-arm64`, or `all`.
- `benchmark_base`: If checked, the base of the selected branch will also be benchmarked. The base is determined by running `git merge-base upstream/main $ref`.
- `pystats`: If checked, collect the pystats from running the benchmarks.
To watch the progress of the benchmark, select it from the 🔒 benchmark action page. It may be canceled from there as well. To show only your benchmark workflows, select your GitHub ID from the "Actor" dropdown.
When the benchmarking is complete, the results are published to this repository and will appear in the complete table. Each set of benchmarks will have:
- The raw `.json` results from pyperformance.
- Comparisons against important reference releases, as well as the merge base of the branch if `benchmark_base` was selected. These include:
  - A markdown table produced by `pyperf compare_to`.
  - A set of "violin" plots showing the distribution of results for each benchmark.
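The raw files follow pyperf's JSON suite layout. As a rough sketch (field names are assumed from that format; for real analysis, pyperf's own `pyperf.BenchmarkSuite.load` is the safer choice), mean timings can be pulled out like this:

```python
import json
import statistics

def mean_times(path):
    """Mean timing per benchmark from a pyperformance results file.

    Sketch only -- assumes pyperf's JSON suite layout: a top-level
    "benchmarks" list whose entries hold "runs" with "values" lists
    (warmup-only runs may have no "values" and are skipped).
    """
    with open(path) as f:
        suite = json.load(f)
    means = {}
    for bench in suite.get("benchmarks", []):
        name = bench.get("metadata", {}).get("name", "<unnamed>")
        values = [v for run in bench.get("runs", []) for v in run.get("values", [])]
        if values:
            means[name] = statistics.mean(values)
    return means
```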
The most convenient way to get results locally is to clone this repo and `git pull` from it.
To automate benchmarking runs, it may be more convenient to use the GitHub CLI.
Once you have `gh` installed and configured, you can run benchmarks by cloning this repository and then, from inside it:

`gh workflow run benchmark.yml -f fork=me -f ref=my_branch`

Any of the parameters described above are available at the command line using the `-f key=value` syntax.
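When scripting many runs, the same `-f key=value` pairs can be assembled programmatically. A minimal sketch (the `benchmark_command` helper is ours, not part of `gh` or this repository):

```python
import shlex

def benchmark_command(**params):
    """Build a `gh workflow run` invocation for benchmark.yml.

    Each keyword argument (fork, ref, machine, ...) becomes a
    `-f key=value` pair, matching the syntax described above.
    """
    cmd = ["gh", "workflow", "run", "benchmark.yml"]
    for key, value in params.items():
        cmd += ["-f", f"{key}={value}"]
    return cmd

# Hypothetical fork and branch names; pass the list to subprocess.run() to dispatch:
print(shlex.join(benchmark_command(fork="me", ref="my_branch", machine="all")))
```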
To collect Linux perf sampling profile data for a benchmarking run, run the `_benchmark` action and check the `perf` checkbox. Follow this with a run of the `_generate` action to regenerate the plots.
This repo is licensed under the BSD 3-Clause License, as found in the LICENSE file.