Rust's built-in `#[bench]` attribute requires nightly. Criterion.rs works on stable, provides statistical analysis, and generates HTML reports with confidence intervals.
Add to `Cargo.toml`:

```toml
[dev-dependencies]
criterion = { version = "0.5", features = ["html_reports"] }

[[bench]]
name = "my_benchmarks"
harness = false
```
Create `benches/my_benchmarks.rs`:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn bench_fibonacci(c: &mut Criterion) {
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, bench_fibonacci);
criterion_main!(benches);
```
```sh
# Run all benchmarks
cargo bench

# Run a specific benchmark by name
cargo bench -- "fib 20"

# Run each benchmark once in test mode (checks it works, no measurement)
cargo test --benches
```
`black_box` prevents the compiler from optimizing away your computation. Always wrap inputs:

```rust
b.iter(|| my_function(black_box(input)))
```

Without it, the compiler might evaluate `my_function` at compile time and benchmark a no-op.
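The same barrier exists in the standard library as `std::hint::black_box` (stable since Rust 1.66), so the idea can be sketched outside Criterion; `sum_squares` below is an illustrative stand-in for whatever you are measuring:

```rust
use std::hint::black_box;

fn sum_squares(n: u64) -> u64 {
    (1..=n).map(|i| i * i).sum()
}

fn main() {
    // Without black_box, the compiler could constant-fold sum_squares(100)
    // to 338350 at compile time, and a timing loop around it would measure
    // nothing. black_box hides the input and the output from the optimizer.
    let mut acc: u64 = 0;
    for _ in 0..10 {
        acc = acc.wrapping_add(black_box(sum_squares(black_box(100))));
    }
    println!("{acc}");
}
```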
Compare related functions side-by-side:
```rust
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};

fn bench_conversions(c: &mut Criterion) {
    let mut group = c.benchmark_group("power_conversions");
    for &dbm in &[0.0, 10.0, 30.0, -20.0] {
        group.bench_with_input(
            BenchmarkId::new("dbm_to_watts", dbm),
            &dbm,
            |b, &val| b.iter(|| 10f64.powf((val - 30.0) / 10.0)),
        );
    }
    group.finish();
}

criterion_group!(benches, bench_conversions);
criterion_main!(benches);
```
For data-processing benchmarks, report bytes/sec:
```rust
use criterion::Throughput;

group.throughput(Throughput::Bytes(data.len() as u64));
group.bench_function("parse", |b| b.iter(|| parse(black_box(&data))));
```
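Criterion derives the reported figure by dividing the declared byte count by the measured time. A rough manual equivalent, using a stand-in `parse` function (not a real API) and `std::time::Instant`:

```rust
use std::time::Instant;

// Hypothetical parser stand-in: counts newline-terminated records.
fn parse(data: &[u8]) -> usize {
    data.iter().filter(|&&b| b == b'\n').count()
}

fn main() {
    let data = vec![b'\n'; 1024 * 1024]; // 1 MiB of input
    let start = Instant::now();
    let records = parse(&data);
    let elapsed = start.elapsed().as_secs_f64();
    // This is what Throughput::Bytes reports, scaled to B/s, KiB/s, MiB/s, ...
    let bytes_per_sec = data.len() as f64 / elapsed;
    println!("{records} records, {bytes_per_sec:.0} B/s");
}
```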
Criterion outputs three key numbers:
```text
my_function             time:   [358.12 ps 359.48 ps 361.02 ps]
```
These are the lower bound, estimate, and upper bound of the mean execution time at 95% confidence. On subsequent runs, it also reports whether performance changed:
```text
Performance has improved.
time:   [-3.2145% -2.8901% -2.5013%] (p = 0.00 < 0.05)
```
After running `cargo bench`, open `target/criterion/report/index.html` for interactive charts showing distributions, regression analysis, and comparisons against previous runs.
From rfconversions — 17 benchmarks across power, frequency, noise, and compression modules:
| Operation | Typical Time |
|---|---|
| Arithmetic (dBm ↔ dBW) | ~350 ps |
| Log/exp (dBm → watts) | ~2 ns |
| Cascade (P1dB chain) | ~8 ns |
Sub-nanosecond for simple conversions confirms these are suitable for hot loops in real-time RF processing.
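Those orders of magnitude follow from the operations involved: a subtraction versus a `powf` call. A minimal sketch of the two conversions, with hypothetical function names (not rfconversions' actual API):

```rust
// Pure arithmetic: ~hundreds of picoseconds per call.
fn dbm_to_dbw(dbm: f64) -> f64 {
    dbm - 30.0
}

// One transcendental (powf): ~a few nanoseconds per call.
fn dbm_to_watts(dbm: f64) -> f64 {
    10f64.powf((dbm - 30.0) / 10.0)
}

fn main() {
    assert_eq!(dbm_to_dbw(30.0), 0.0); // 30 dBm = 0 dBW
    assert!((dbm_to_watts(30.0) - 1.0).abs() < 1e-12); // 30 dBm = 1 W
    assert!((dbm_to_watts(0.0) - 0.001).abs() < 1e-12); // 0 dBm = 1 mW
    println!("ok");
}
```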
Two workflow tips:

- Use `cargo bench --no-run` in CI to compile benchmarks without running them (catches build regressions).
- Run `cargo bench --bench my_benchmarks -- --save-baseline main` before making changes, then compare afterwards with `cargo bench --bench my_benchmarks -- --baseline main`.

Add a convenient task runner recipe:

```
bench:
    cargo bench
```