ci: Enable issue_comment actions on forked PRs #816

Merged
.github/workflows/bench-pr-comment.yml: 38 changes (19 additions, 19 deletions)
@@ -13,34 +13,33 @@ concurrency:

 jobs:
   cpu-benchmark:
-    name: run end2end benchmark
+    name: run fibonacci benchmark
     runs-on: buildjet-32vcpu-ubuntu-2204
     if:
       github.event.issue.pull_request
       && github.event.issue.state == 'open'
       && contains(github.event.comment.body, '!benchmark')
       && (github.event.comment.author_association == 'MEMBER' || github.event.comment.author_association == 'OWNER')
     steps:
-      - uses: xt0rted/pull-request-comment-branch@v2
-        id: comment-branch
       - uses: actions/checkout@v4
-        if: success()
-        with:
-          ref: ${{ steps.comment-branch.outputs.head_ref }}
+      - name: Checkout PR branch
+        run: gh pr checkout $PR_NUMBER
+        env:
+          GH_TOKEN: ${{ github.token }}
+          PR_NUMBER: ${{ github.event.issue.number }}
       # Install dependencies
       - name: Install dependencies
         run: sudo apt-get install -y pkg-config libssl-dev
       # Set the Rust env vars
       - uses: actions-rs/toolchain@v1
       - uses: Swatinem/rust-cache@v2
       # Run the comparative benchmark and comment output on the PR
       - uses: boa-dev/criterion-compare-action@v3
         with:
           # Optional. Compare only this benchmark target
           benchName: "fibonacci_lem"
           # Needed. The name of the branch to compare with
           branchName: ${{ github.ref_name }}

-  # TODO: Check it works with forked PRs when running
-  # `gh pr checkout {{ github.event.issue.number}}` with `env: GH_TOKEN`
   gpu-benchmark:
     name: run fibonacci benchmark on GPU
     runs-on: [self-hosted, gpu-bench]
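
The pattern in the hunk above is the point of this PR: a workflow triggered by `issue_comment` always runs against the default branch, and checking out `steps.comment-branch.outputs.head_ref` breaks for PRs opened from forks, because that branch only exists in the fork. Checking out the base repository first and then running `gh pr checkout` lets the GitHub CLI fetch the PR head wherever it lives. Below is a minimal sketch of the same pattern, assuming an `issue_comment` trigger; the `on:` block of the real workflow sits above the hunk shown, and the workflow and job names here are illustrative only.

name: pr-comment-bench            # hypothetical name, for illustration
on:
  issue_comment:
    types: [created]

jobs:
  bench:
    runs-on: ubuntu-latest
    # Only react to comments left on an open pull request
    if: github.event.issue.pull_request && github.event.issue.state == 'open'
    steps:
      # The run starts from the default branch, so clone the base repo first...
      - uses: actions/checkout@v4
      # ...then let the GitHub CLI fetch the PR head, which also works for forks
      - name: Checkout PR branch
        run: gh pr checkout "$PR_NUMBER"
        env:
          GH_TOKEN: ${{ github.token }}
          PR_NUMBER: ${{ github.event.issue.number }}

The default `github.token` is typically enough here, since the CLI only needs read access to the pull request and its head ref.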
@@ -50,38 +49,39 @@ jobs:
       && contains(github.event.comment.body, '!gpu-benchmark')
       && (github.event.comment.author_association == 'MEMBER' || github.event.comment.author_association == 'OWNER')
     steps:
+      - uses: actions/checkout@v4
+      - name: Checkout PR branch
+        run: gh pr checkout $PR_NUMBER
+        env:
+          GH_TOKEN: ${{ github.token }}
+          PR_NUMBER: ${{ github.event.issue.number }}
       # Set up GPU
       # Check we have access to the machine's Nvidia drivers
       - run: nvidia-smi
       # The `compute`/`sm` number corresponds to the Nvidia GPU architecture
       # In this case, the self-hosted machine uses the Ampere architecture, but we want this to be configurable
       # See https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/
-      # Writes env vars to `bench.env` to be read by `just` command
+      # Writes env vars to `bench.env` to be read by `dotenv` action. This is roundabout but puts all the env vars in one place
       - name: Set env for CUDA compute
         run: echo "CUDA_ARCH=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader | sed 's/\.//g')" >> bench.env
       - name: set env for EC_GPU
         run: echo 'EC_GPU_CUDA_NVCC_ARGS=--fatbin --gpu-architecture=sm_${{ env.CUDA_ARCH }} --generate-code=arch=compute_${{ env.CUDA_ARCH }},code=sm_${{ env.CUDA_ARCH }}' >> bench.env
       # Check that CUDA is installed with a driver-compatible version
       # This must also be compatible with the GPU architecture, see above link
       - run: nvcc --version

-      - uses: xt0rted/pull-request-comment-branch@v2
-        id: comment-branch
-      - uses: actions/checkout@v4
-        if: success()
-        with:
-          ref: ${{ steps.comment-branch.outputs.head_ref }}
       # Install dependencies
       - uses: actions-rs/toolchain@v1
       - uses: Swatinem/rust-cache@v2
-      # Strict load => panic if .env file not found
       - name: Load env vars
         uses: xom9ikk/dotenv@v2
         with:
           path: bench.env
+          # Strict load => panic if .env file not found
           load-mode: strict

       # Run the comparative benchmark and comment output on the PR
       - uses: boa-dev/criterion-compare-action@v3
         with:
           # Note: Removing `benchName` causes `criterion` errors: https://github.com/boa-dev/criterion-compare-action#troubleshooting
           # Optional. Compare only this benchmark target
           benchName: "fibonacci_lem"
           # Optional. Features activated in the benchmark
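
For concreteness, here is roughly what the two env-setting steps above would leave in `bench.env` on an Ampere-class card. The compute capability is an example value (an RTX 30-series card reports 8.6, an A100 reports 8.0); the real number comes from whatever GPU the self-hosted runner exposes.

# nvidia-smi --query-gpu=compute_cap --format=csv,noheader prints e.g. "8.6",
# and sed 's/\.//g' strips the dot, so the resulting file would read (illustrative values):
CUDA_ARCH=86
EC_GPU_CUDA_NVCC_ARGS=--fatbin --gpu-architecture=sm_86 --generate-code=arch=compute_86,code=sm_86

The `dotenv` action then loads both variables into the environment of the later steps; `load-mode: strict` makes a missing `bench.env` fail the job instead of letting the GPU configuration silently go unused.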