Differential Privacy Through Depth-Selective Routing

by Nick Clark | Published March 27, 2026

Differential privacy in training traditionally relies on noise injection that degrades model quality. Depth-selective gradient routing offers an alternative: privacy guarantees achieved through structural isolation rather than noise. By restricting sensitive content to shallow layers, the architecture ensures that sensitive information cannot influence deep representations while maintaining full model quality in unrestricted layers.

What It Is

Privacy through depth-selective routing restricts content classified as sensitive to shallow model layers. The sensitive content can inform surface-level behavior (response style, topic awareness) without embedding in the deep representations that form the model's core knowledge. This structural isolation provides privacy guarantees without the quality degradation of noise-based differential privacy.
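The idea above can be pictured as a per-class depth profile: each content class maps to the deepest layer its gradients may reach. A minimal sketch, where the class names, layer counts, and cutoffs are all illustrative assumptions rather than the article's actual configuration:

```python
# Illustrative depth profiles: content class -> number of layers (from the
# shallow end) that may receive gradients from that class. The names and
# numbers here are assumptions for illustration, not the article's values.
DEPTH_PROFILES = {
    "public": 24,     # full depth: every layer may learn from this content
    "sensitive": 4,   # shallow only: style/topic awareness, no deep encoding
}

def allowed_depths(content_class, num_layers=24):
    """Return the layer indices whose parameters may be updated by this class."""
    cutoff = DEPTH_PROFILES.get(content_class, num_layers)
    return [d for d in range(num_layers) if d < cutoff]
```

A training loop would consult this table per batch to decide which layers' parameter gradients to keep.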

Why It Matters

Noise-based differential privacy adds calibrated noise to gradients during training, and the required noise scale grows as the privacy budget shrinks: the stronger the guarantee, the noisier the gradients and the worse the resulting model. Depth-selective routing provides privacy through structure rather than noise, avoiding this quality-privacy tradeoff.
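For contrast, the core of a noise-based DP-SGD update looks like the sketch below: per-example gradients are clipped, summed, and perturbed with Gaussian noise scaled by a noise multiplier. The function name and default values are illustrative assumptions; calibrating the multiplier to a formal privacy budget is a separate accounting step not shown here.

```python
import numpy as np

def dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One noise-based DP-SGD gradient aggregation (illustrative sketch).

    Stronger privacy requires a larger noise_multiplier, which directly
    lowers the averaged gradient's signal-to-noise ratio -- the
    quality-privacy tradeoff the article describes.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in grads:  # clip each per-example gradient to clip_norm
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Gaussian noise scaled to the clipping bound, then average over the batch
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(grads)

per_example_grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
noisy_mean = dp_sgd_step(per_example_grads)
```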

How It Works

Sensitive content is assigned a shallow depth profile that restricts its gradients to the shallow model layers, which capture style and surface patterns. Deep layers, which capture fundamental knowledge representations, receive no gradient from sensitive content. The sensitive information influences how the model responds but not what it fundamentally knows.
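One way to realize this, sketched on a tiny two-layer linear model: gradients are computed through the whole network as usual, but parameter updates for layers at or beyond a depth cutoff are skipped when the batch is flagged sensitive. The model, function names, and cutoff are illustrative assumptions, not the article's implementation.

```python
import numpy as np

def train_step(params, x, y, sensitive, lr=0.1, shallow_cutoff=1):
    """One SGD step with depth-selective routing on a 2-layer linear net.

    params[0] is the shallow layer, params[1] the deep layer. For sensitive
    batches, layers at or beyond shallow_cutoff receive no parameter update,
    so deep parameters never absorb the sensitive examples.
    """
    W0, W1 = params
    h = x @ W0    # shallow layer: style / surface patterns
    out = h @ W1  # deep layer: core representations
    err = out - y                    # gradient of 0.5 * squared error w.r.t. out
    grads = [x.T @ (err @ W1.T),     # shallow-layer gradient
             h.T @ err]              # deep-layer gradient
    for depth in range(len(params)):
        if sensitive and depth >= shallow_cutoff:
            continue  # deep layers get no gradient from sensitive content
        params[depth] -= lr * grads[depth]
    return params

params = [np.eye(2), np.ones((2, 1))]
x, y = np.ones((1, 2)), np.zeros((1, 1))
train_step(params, x, y, sensitive=True)  # only params[0] changes
```

Gradients still flow *through* the deep layers during backpropagation; what is withheld is the update to their parameters, which is what keeps sensitive content out of the deep representations.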

This approach can be combined with noise-based methods for additional guarantees, but the structural isolation alone provides meaningful privacy properties.
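The combination could look like the following sketch: for a sensitive batch, deep-layer gradients are dropped entirely (structural isolation), while the shallow-layer gradients that do survive are clipped and noised for an additional noise-based guarantee. All names and defaults are assumptions for illustration.

```python
import numpy as np

def route_and_noise(grads, sensitive, shallow_cutoff=1,
                    clip_norm=1.0, noise_multiplier=0.5, rng=None):
    """Structural routing plus optional noise on surviving gradients (sketch).

    grads is a list of per-layer gradient arrays, ordered shallow to deep.
    """
    rng = rng or np.random.default_rng(0)
    out = []
    for depth, g in enumerate(grads):
        if sensitive and depth >= shallow_cutoff:
            out.append(np.zeros_like(g))  # deep layers: no gradient at all
            continue
        if sensitive:
            # clip, then add Gaussian noise to what the shallow layers absorb
            g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
            g = g + rng.normal(0.0, noise_multiplier * clip_norm, size=g.shape)
        out.append(g)
    return out
```

Non-sensitive batches pass through unchanged, so the quality cost of the noise is paid only on the restricted fraction of the data.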

What It Enables

Depth-selective privacy enables training on sensitive data with structural privacy guarantees and minimal quality impact. Medical models can train on patient data that informs clinical reasoning patterns without memorizing patient specifics. Legal models can train on confidential case files that inform legal reasoning without encoding case details. The structural approach provides both privacy and utility.
