Multi-Modality Cooperative Ranging

by Nick Clark | Published April 25, 2026

The architecture derives mesh coordinates through mutual ranging across fifteen-plus modalities, including UWB time-of-flight, lidar reflection, radar, optical fiducial range, RFID proximity, NFC adjacency, acoustic echo, BLE RSSI, magnetic dipole, GNSS pseudorange, inertial integration, and visual SLAM correspondence, among others.


What Multi-Modality Ranging Specifies

The architecture treats positioning as multilateration over heterogeneous range observations. Each ranging modality contributes credentialed observations: UWB time-of-flight produces sub-decimeter range estimates between cooperative units; lidar reflection produces range observations against marked surfaces; radar produces longer-range observations that persist in fog, dust, and darkness; optical fiducial range produces visual-distance observations against credentialed markers; RFID and NFC produce proximity observations.
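As a concrete sketch of what a per-modality range observation might carry, the record below bundles the source unit, modality, measured range, and declared uncertainty. The field names and `RangeObservation` type are illustrative assumptions, not a published schema.

```python
from dataclasses import dataclass

# Hypothetical observation record; field names are illustrative only.
@dataclass(frozen=True)
class RangeObservation:
    source_id: str    # contributing unit identifier
    modality: str     # e.g. "uwb_tof", "lidar", "ble_rssi"
    range_m: float    # measured range in metres
    sigma_m: float    # declared 1-sigma uncertainty in metres
    signature: bytes  # signature over the observation payload

# A short-range UWB observation with sub-decimetre declared uncertainty
obs = RangeObservation("unit-17", "uwb_tof", 4.82, 0.07, b"\x00" * 64)
```

Keeping the declared uncertainty on the record itself is what lets the downstream solver weight modalities without modality-specific logic.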

Each contribution is governance-credentialed: the contributing unit signs the observation, the range modality and uncertainty are declared, and the observation is recorded with lineage. The receiving unit evaluates each observation for admissibility before integrating it into the coordinate solution.
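The admissibility gate described above can be sketched in two steps: verify the credential, then check that the modality and uncertainty are actually declared and plausible. This sketch uses a shared-key HMAC purely as a stand-in for the real credential scheme, and the `max_sigma_m` bound is an assumed sanity limit, not a specified one.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # stand-in for real credential material


def sign(payload: dict) -> str:
    """Sign a canonical JSON encoding of the observation payload."""
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()


def admissible(payload: dict, signature: str, max_sigma_m: float = 5.0) -> bool:
    # 1. Credential check: the signature must verify against the payload.
    if not hmac.compare_digest(sign(payload), signature):
        return False
    # 2. Declaration check: modality and uncertainty must be declared and sane.
    return (
        payload.get("modality") is not None
        and 0 < payload.get("sigma_m", -1.0) <= max_sigma_m
    )


payload = {"source_id": "unit-17", "modality": "uwb_tof",
           "range_m": 4.82, "sigma_m": 0.07}
sig = sign(payload)
ok = admissible(payload, sig)          # verifies and is well-declared
forged = admissible(payload, "bad")    # fails the credential check
```

An observation that fails either step is simply excluded from the coordinate solution rather than corrected.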

Why Multi-Modality Beats Single-Modality Hardening

Single-modality positioning systems (GNSS-only, UWB-only, visual-SLAM-only) face structural failure when their primary modality is denied or degraded. GNSS jamming, UWB interference, optical denial, and lidar blinding each produce a single-modality outage.

Multi-modality cooperative ranging produces resilience structurally. Loss of any single modality reduces position confidence but does not eliminate it; the remaining modalities continue to contribute, so position error degrades gradually as modalities drop out rather than failing abruptly when one is denied. This lets the architecture keep operating through denial scenarios that take single-modality systems offline entirely.

How Modalities Compose in the Coordinate Solution

Each modality contributes observations with declared uncertainty. The multilateration solution weights each observation by the inverse of its declared variance, so high-confidence modalities (UWB at short range) pull the solution more strongly than low-confidence modalities (BLE RSSI at moderate range). The composite solution captures the best estimate from the available evidence.
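The weighting scheme above can be made concrete with a small weighted least-squares multilateration. The sketch below solves a 2-D position by Gauss-Newton iteration, weighting each range residual by the inverse of its declared variance; the anchor layout and sigma values are invented for illustration, and the real solver is not specified in the source.

```python
import math


def solve(anchors, ranges, sigmas, guess=(0.0, 0.0), iters=20):
    """Weighted 2-D multilateration by Gauss-Newton iteration."""
    x, y = guess
    for _ in range(iters):
        # Accumulate the 2x2 normal equations J^T W J * dx = J^T W * r
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (ax, ay), r, s in zip(anchors, ranges, sigmas):
            d = math.hypot(x - ax, y - ay) or 1e-9
            jx, jy = (x - ax) / d, (y - ay) / d  # Jacobian row of d w.r.t. (x, y)
            w = 1.0 / (s * s)                    # inverse-variance weight
            res = r - d                          # range residual
            a11 += w * jx * jx; a12 += w * jx * jy; a22 += w * jy * jy
            b1 += w * jx * res; b2 += w * jy * res
        det = a11 * a22 - a12 * a12
        x += (a22 * b1 - a12 * b2) / det
        y += (a11 * b2 - a12 * b1) / det
    return x, y


anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true = (3.0, 4.0)
ranges = [math.hypot(true[0] - ax, true[1] - ay) for ax, ay in anchors]
sigmas = [0.05, 0.05, 1.0]  # two UWB-grade anchors, one coarse BLE-grade
x, y = solve(anchors, ranges, sigmas, guess=(5.0, 5.0))
```

Because the weights come from the declared sigmas, a coarse BLE-grade observation still contributes to the solution but cannot overrule the UWB-grade ones.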

Cross-modality checks operate structurally. When UWB and lidar observations agree, confidence increases. When they disagree, the disagreement surfaces as a credentialed observation. The architecture supports diagnostic differentiation between sensor failure, environmental anomaly, and adversarial interference.
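One minimal form such a cross-check could take is a gate on the disagreement between two modalities' ranges to the same target, scaled by their combined declared uncertainty. The 3-sigma gate below is an assumed threshold, not one stated in the source.

```python
import math


def consistent(r1: float, s1: float, r2: float, s2: float,
               gate: float = 3.0) -> bool:
    """Gate two modalities' range estimates against their combined sigma."""
    combined = math.hypot(s1, s2)  # independent errors add in quadrature
    return abs(r1 - r2) <= gate * combined


agree = consistent(4.82, 0.07, 4.90, 0.10)     # within the 3-sigma gate
disagree = consistent(4.82, 0.07, 6.40, 0.10)  # 1.58 m gap, gate ~0.37 m
```

A failed check would not silently discard either observation; it would surface the disagreement as its own record for downstream diagnosis.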

What This Enables for Resilient Positioning

Defense and contested-environment operations gain structural resilience that single-modality hardening cannot match. Civilian deployments in challenging environments (urban canyons, dense indoor, mining operations) gain the same.

The architecture also supports gradual modality adoption. New modalities (improved UWB, emerging visible-light positioning, satellite L-band PNT, terrestrial broadcast-positioning) integrate as additional credentialed observation sources without architectural rebuild.
