Unit tests verify a crate works in isolation. But when multiple crates form a pipeline — one crate's output feeds another's input — bugs hide at the seams. Cross-crate validation tests catch them.
Consider a Rust ecosystem with layered crates:
```text
rfconversions  (unit math)
      ↓
gainlineup     (cascade analysis)
      ↓
linkbudget     (end-to-end link calculations)
```
Each crate has thorough unit tests. All pass. But a subtle Friis noise figure cascade bug in gainlineup only surfaced when linkbudget consumed its output and the numbers didn't match hand calculations.
The bug: a cascade function applied a formula correctly for two stages but produced wrong results for three or more. Unit tests only covered two-stage cases. A cross-crate test comparing the full pipeline against a textbook example caught it immediately.
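The fix amounts to carrying the running gain product through every stage of the Friis formula. A minimal standalone sketch (`cascade_nf_db` is a hypothetical helper for illustration, not gainlineup's actual API):

```rust
/// Cascaded noise figure via the Friis formula for any number of stages.
/// Each stage is (gain_db, nf_db). The bug class described above comes from
/// dividing by only the first stage's gain; the fix keeps a running product.
fn cascade_nf_db(stages: &[(f64, f64)]) -> f64 {
    let mut f_total = 0.0;   // cascaded noise factor, linear
    let mut g_running = 1.0; // product of all preceding stage gains, linear
    for (i, &(gain_db, nf_db)) in stages.iter().enumerate() {
        let f = 10f64.powf(nf_db / 10.0);
        if i == 0 {
            f_total = f; // first stage contributes its full noise factor
        } else {
            f_total += (f - 1.0) / g_running; // later stages are divided down
        }
        g_running *= 10f64.powf(gain_db / 10.0);
    }
    10.0 * f_total.log10()
}
```

A unit test with only two stages never exercises `g_running` beyond a single multiplication, which is exactly why the bug survived until a three-stage pipeline consumed the result.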
Cross-crate validation tests live in the consuming crate — the one that depends on the crate you're validating. They test the boundary contract: "given this input from crate A, does crate B produce the expected end-to-end result?"
Test the interface between two crates with known reference values:
```rust
// In linkbudget/tests/cross_crate_validation.rs
use gainlineup::{Block, Lineup};
use rfconversions::{db_to_linear, linear_to_db};

#[test]
fn three_stage_cascade_matches_hand_calculation() {
    // Reference values hand-calculated with the Friis cascade formula
    // (see e.g. Pozar, Microwave Engineering)
    let blocks = vec![
        Block::new("LNA", 12.0, 1.5), // gain_db, nf_db
        Block::new("Mixer", -6.0, 8.0),
        Block::new("IF Amp", 30.0, 2.0),
    ];

    let lineup = Lineup::from_blocks(&blocks);
    let system_nf = lineup.cascade_noise_figure_db();

    // Friis hand calculation for these three stages: ≈ 2.77 dB
    assert!(
        (system_nf - 2.77).abs() < 0.01,
        "System NF {system_nf:.2} dB, expected 2.77 dB"
    );
}
```
Verify that converting a value to another representation and back yields the original:
```rust
#[test]
fn db_linear_round_trip_through_cascade() {
    let gain_db = 15.0_f64;
    let linear = db_to_linear(gain_db);
    let back_to_db = linear_to_db(linear);

    assert!(
        (gain_db - back_to_db).abs() < 1e-12,
        "Round-trip failed: {gain_db} → {linear} → {back_to_db}"
    );
}
```
When two crates can compute the same quantity through different paths, they should agree:
```rust
#[test]
fn system_noise_temp_two_methods_agree() {
    let nf_db = 2.5;
    let t_ref = 290.0; // Kelvin

    // Method 1: rfconversions direct formula
    let t_sys_direct = rfconversions::noise_figure_to_temp(nf_db, t_ref);

    // Method 2: gainlineup cascade (single block)
    let lineup = Lineup::single_block("DUT", 0.0, nf_db);
    let t_sys_cascade = lineup.system_noise_temperature(t_ref);

    assert!(
        (t_sys_direct - t_sys_cascade).abs() < 1e-10,
        "Methods disagree: {t_sys_direct} vs {t_sys_cascade}"
    );
}
```
Where should these tests live? Two options, each with trade-offs. The simplest is the consuming crate's own `tests/` directory:
```text
linkbudget/
├── src/
├── tests/
│   ├── cross_crate_validation.rs   ← boundary tests
│   └── integration.rs
└── Cargo.toml
```
Integration tests in tests/ compile as separate binaries and can only access the public API — exactly what you want for boundary validation.
For large workspaces, a dedicated validation crate:
```text
workspace/
├── rfconversions/
├── gainlineup/
├── linkbudget/
└── validation-tests/               ← depends on all crates
    ├── src/lib.rs
    ├── tests/
    │   ├── cascade.rs
    │   ├── round_trips.rs
    │   └── consistency.rs
    └── Cargo.toml
```
```toml
# validation-tests/Cargo.toml
[package]
name = "validation-tests"
version = "0.1.0"   # never published; the version is a formality
edition = "2021"
publish = false

[dev-dependencies]
rfconversions = { path = "../rfconversions" }
gainlineup = { path = "../gainlineup" }
linkbudget = { path = "../linkbudget" }
```
This keeps validation tests out of published crate code and makes the dependency graph explicit.
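For completeness, a sketch of the workspace root manifest that ties the members together (member names taken from the tree above; the resolver version is an assumption):

```toml
# workspace/Cargo.toml
[workspace]
members = ["rfconversions", "gainlineup", "linkbudget", "validation-tests"]
resolver = "2"
```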
Focus on boundaries where bugs actually hide:
| Test Type | What It Catches |
|---|---|
| Reference values | Formula errors, off-by-one in cascades |
| Round trips | Precision loss, unit confusion (dB vs linear) |
| Consistency | Divergent implementations of the same concept |
| Edge cases | Zero gain, infinite attenuation, single-block degenerate cases |
| Dimension checks | Watts vs dBm, Hz vs GHz mismatches |
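The last row is cheap to cover with known anchor points. A self-contained sketch; `dbm_to_watts` and `watts_to_dbm` are hypothetical stand-ins for rfconversions-style helpers, not its actual API:

```rust
// Hypothetical stand-ins for rfconversions-style unit helpers.
fn dbm_to_watts(dbm: f64) -> f64 {
    10f64.powf((dbm - 30.0) / 10.0) // 0 dBm = 1 mW = 1e-3 W
}

fn watts_to_dbm(w: f64) -> f64 {
    10.0 * w.log10() + 30.0
}

#[test]
fn dbm_watts_round_trip_and_anchors() {
    // Known anchor points catch Watts-vs-dBm confusion outright.
    assert!((dbm_to_watts(30.0) - 1.0).abs() < 1e-12); // 30 dBm = 1 W
    assert!((dbm_to_watts(0.0) - 1e-3).abs() < 1e-15); // 0 dBm = 1 mW

    // Round trip guards against precision loss in either direction.
    let p_dbm = 17.0;
    assert!((watts_to_dbm(dbm_to_watts(p_dbm)) - p_dbm).abs() < 1e-12);
}
```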
- **Don't test internal implementation.** Cross-crate tests should use only public APIs. If you're reaching into private modules, you're writing unit tests in the wrong place.
- **Don't duplicate unit tests.** If rfconversions already tests `db_to_linear(3.0)`, don't retest the same case. Test the boundary: what happens when that value flows into the next crate.
- **Don't mock the dependency.** The whole point is testing the real crate. Mocking defeats the purpose.
Run cross-crate tests as part of the normal test suite. No special configuration is needed, since `cargo test --workspace` (or a drop-in runner such as `cargo nextest`) already picks up the integration tests in every crate:
```yaml
# .github/workflows/ci.yml
- name: Run all tests (including cross-crate)
  run: cargo nextest run --workspace
```
If validation tests are slow (large reference datasets), gate them behind a feature flag:
```toml
[features]
slow-validation = []
```

```rust
#[cfg(feature = "slow-validation")]
#[test]
fn exhaustive_modcod_ladder_check() {
    // ...
}
```
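Gated tests then run only when the feature is enabled; `-p validation-tests` assumes the dedicated-crate layout shown earlier:

```sh
# Default suite: feature-gated tests are compiled out
cargo test --workspace

# Opt in for nightly CI or pre-release runs
cargo test -p validation-tests --features slow-validation
```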