Differential Privacy Through Depth-Selective Routing

by Nick Clark | Published March 27, 2026

Conventional differential privacy in machine learning treats noise injection as the primary instrument of guarantee: gradients are clipped, calibrated Gaussian or Laplace noise is added, and a privacy budget is depleted across training steps. The structural alternative disclosed here treats privacy as an admissibility property of credentialed contributions and binds it to a depth-selective routing discipline: per-contribution noise bounds are signed at the moment a contributor is admitted to the training corpus, and the contribution's gradients are restricted to a declared band of model depth. The result is a training-governance primitive in which differential privacy is not a post-hoc statistical bolt-on but a composed admissibility condition: contribution credential, signed noise bound, and depth profile travel together, and the privacy ledger composes deterministically with the credentialed-contribution-attestation primitive.


Mechanism

The mechanism operates as a four-stage pipeline interposed between a training corpus and a parameter-update step. In the first stage, each candidate contribution is admitted only when accompanied by a credentialed-contribution attestation that binds the contributor identity, the data class, and a per-contribution noise bound. The attestation is a detached signature over a canonical contribution descriptor, and it is verified by the admissibility gate before the contribution enters any batch. Contributions that lack an attestation, that present an attestation outside its validity window, or that declare a noise bound below a corpus-level floor are rejected at the gate and never reach the gradient computation.
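The gate logic above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the field names and the corpus floor are assumptions, and an HMAC stands in for the detached-signature scheme (a real deployment would use an asymmetric scheme such as Ed25519).

```python
import hmac, hashlib, json, time

CORPUS_NOISE_FLOOR = 0.8  # corpus-level minimum noise multiplier (assumed)

def canonical_descriptor(contribution):
    # Canonical encoding: sorted-key JSON as a stand-in for canonical CBOR.
    return json.dumps(contribution, sort_keys=True).encode()

def admit(contribution, signature, key, now=None):
    """Return True only if the attestation verifies, falls inside its
    validity window, and declares a noise bound at or above the floor."""
    now = time.time() if now is None else now
    expected = hmac.new(key, canonical_descriptor(contribution),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # missing or invalid attestation
    if not (contribution["valid_from"] <= now <= contribution["valid_until"]):
        return False  # attestation outside its validity window
    if contribution["noise_multiplier"] < CORPUS_NOISE_FLOOR:
        return False  # declared bound below the corpus-level floor
    return True
```

A rejected contribution never reaches the gradient computation; the gate runs before batching, so the three rejection conditions in the text map one-to-one onto the three early returns.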

In the second stage, an admissibility resolver reads the attestation and emits two derived artifacts. The first is a noise schedule expressed as a per-step variance ceiling, computed from the signed bound and the contribution's expected occurrence count over the training horizon. The second is a depth profile, which is a structured descriptor specifying which contiguous bands of model layers may receive gradient signal from this contribution. The depth profile is not a free parameter chosen by the optimizer; it is a function of the admissibility class declared in the attestation, and it is the same function across all contributions of that class so that aggregate privacy accounting remains tractable.
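A resolver along these lines might look as follows. The class-to-profile table and the even split of the variance budget across expected occurrences are illustrative assumptions; the essential point from the text is that the depth profile is a fixed function of the admissibility class, not an optimizer choice.

```python
from dataclasses import dataclass

DEPTH_PROFILES = {              # admissibility class -> permitted depth fractions
    "sensitive": (0.7, 1.0),    # upper 30% of the layer stack
    "restricted": (0.8, 1.0),   # upper 20%
    "open": (0.0, 1.0),         # whole stack
}

@dataclass
class ResolvedArtifacts:
    per_step_variance_ceiling: float
    depth_band: tuple  # (first_layer, last_layer), inclusive

def resolve(attestation, n_layers, expected_occurrences):
    clip, mult = attestation["clip_norm"], attestation["noise_multiplier"]
    # One plausible schedule: divide the signed variance budget evenly
    # across the contribution's expected occurrences over the horizon.
    ceiling = (clip * mult) ** 2 / expected_occurrences
    lo, hi = DEPTH_PROFILES[attestation["admissibility_class"]]
    band = (int(lo * n_layers), int(hi * n_layers) - 1)
    return ResolvedArtifacts(ceiling, band)
```

Because every contribution of a given class resolves to the same band, aggregate privacy accounting can be done per class rather than per contribution.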

In the third stage, gradient computation proceeds normally up to the backward pass, but the backward pass is intercepted by a routing layer that consults the depth profile. Gradients flowing into layers outside the permitted band are zeroed before the optimizer state is updated, and gradients flowing into permitted layers are clipped per-sample to the signed bound and perturbed with calibrated noise drawn from the schedule. Because the zeroing happens before the optimizer state mutates, momentum buffers and second-moment estimates of restricted layers carry no trace of the contribution, eliminating the indirect leakage that often defeats naive gradient masking.
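The routing step can be sketched in plain Python (a real system would hook the autograd engine and operate on tensors; the flat per-layer gradient lists here are a stand-in):

```python
import math, random

def route_backward(per_sample_grads, depth_band, clip_norm, sigma, rng=random):
    """per_sample_grads: {layer_index: [gradient values for one sample]}.
    Zero layers outside the band, clip the in-band gradient per sample
    to clip_norm, then add calibrated Gaussian noise."""
    lo, hi = depth_band
    out = {}
    for layer, g in per_sample_grads.items():
        if not (lo <= layer <= hi):
            # Zeroed before the optimizer state mutates, so momentum and
            # second-moment buffers carry no trace of the contribution.
            out[layer] = [0.0] * len(g)
            continue
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / (norm + 1e-12))
        out[layer] = [x * scale + rng.gauss(0.0, sigma) for x in g]
    return out
```

Zeroing the restricted layers before the optimizer step, rather than after, is what closes the momentum-buffer side channel the paragraph describes.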

In the fourth stage, the privacy ledger records, for each accepted contribution, the canonical descriptor hash, the depth profile applied, the realized noise variance, and the cumulative privacy expenditure under a chosen accountant. The ledger is append-only and is itself signed at epoch boundaries, producing a tamper-evident record that downstream auditors can verify without access to the contribution payloads. The composition of structural isolation, signed per-contribution bounds, and ledgered accounting yields a stronger and more verifiable guarantee than any of the three components in isolation.
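An append-only, tamper-evident ledger can be sketched as a hash chain; epoch signing is elided here, but the chain head is what would be signed at each epoch boundary. Record fields follow the text; everything else is an assumption.

```python
import hashlib, json

class PrivacyLedger:
    def __init__(self):
        self.entries, self.head = [], "0" * 64

    def append(self, descriptor_hash, depth_band, realized_variance, cumulative_eps):
        record = {
            "descriptor": descriptor_hash,
            "depth_band": depth_band,
            "variance": realized_variance,
            "cumulative_eps": cumulative_eps,
            "prev": self.head,           # chains each entry to its predecessor
        }
        self.head = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)
        return self.head

    def verify(self):
        # Recompute the chain; any edit to a prior entry breaks every
        # later link, which is what makes the ledger tamper-evident.
        prev = "0" * 64
        for r in self.entries:
            if r["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(r, sort_keys=True).encode()).hexdigest()
        return prev == self.head
```

An auditor needs only the records and the signed head, never the contribution payloads, matching the verification property claimed above.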

Operating Parameters

The depth profile is parameterized by a contiguous index range over the model's layer stack, expressed either as absolute layer indices or as normalized depth fractions. In practice, sensitive contribution classes are restricted to the upper twenty to thirty percent of the layer stack, where representations are closer to surface form and further from the entangled core that drives generalization. A profile may also include a head mask that restricts contribution to specific attention heads or expert routes within a permitted layer, which is useful when the model employs mixture-of-experts or grouped-query attention and a finer-grained restriction is desirable.
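A depth-profile check covering both parameterizations above (normalized fractions plus an optional head mask) might look like this; the field names are assumptions.

```python
def permits(profile, layer, n_layers, head=None):
    """Is gradient flow into (layer, head) allowed under this profile?"""
    lo = int(profile["lo_frac"] * n_layers)
    hi = int(profile["hi_frac"] * n_layers) - 1
    if not (lo <= layer <= hi):
        return False                       # outside the contiguous band
    mask = profile.get("head_mask")        # e.g. {8: {0, 3}} = heads 0, 3 of layer 8
    if head is not None and mask is not None and layer in mask:
        return head in mask[layer]         # finer-grained head restriction
    return True
```

Layers inside the band with no mask entry admit all heads, so the head mask only ever tightens the band, never widens it.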

Per-contribution noise bounds are expressed as a clipping norm and a noise multiplier, and the admissibility gate enforces a corpus-level minimum on both. The admissibility class declared in the attestation determines which preset of clipping norm and multiplier applies, and the resolver refuses any attestation whose declared bound is more permissive than the floor for its class. The privacy accountant operates over the per-contribution variances and the contribution-occurrence counts, and it supports both Rényi differential privacy accounting and the moments accountant, with the choice fixed at corpus-initialization time and recorded in the ledger header.
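Floor enforcement and accounting can be illustrated as follows. The per-class presets are assumptions; the Rényi-DP formula is the standard one for the Gaussian mechanism with sensitivity 1, where the per-step RDP at order α is α / (2σ²) and RDP composes additively across steps.

```python
# admissibility class -> (clipping-norm ceiling, noise-multiplier floor), assumed
CLASS_FLOORS = {"sensitive": (1.0, 1.2), "open": (1.0, 0.6)}

def check_bound(admissibility_class, clip_norm, noise_multiplier):
    """Refuse any declared bound more permissive than the class floor."""
    floor_clip, floor_mult = CLASS_FLOORS[admissibility_class]
    return clip_norm <= floor_clip and noise_multiplier >= floor_mult

def rdp_epsilon(noise_multiplier, steps, alpha=8.0):
    """Cumulative Renyi-DP of order alpha after `steps` Gaussian-mechanism
    steps with the given noise multiplier (sensitivity normalized to 1)."""
    return steps * alpha / (2 * noise_multiplier ** 2)
```

The converted (ε, δ) guarantee would then follow the usual RDP-to-DP conversion, which this sketch omits.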

Operationally, the routing layer is implemented as a hook on the autograd engine that receives the per-sample gradient tensors and the depth profile as inputs and returns the masked, clipped, and perturbed tensors. The hook is benchmarked to add less than five percent overhead at typical batch sizes because the masking is a tensor-level operation and the clipping reuses the per-sample-gradient infrastructure already required for any sample-level differential-privacy implementation. The signing of noise bounds uses a standard detached-signature scheme over a canonical CBOR encoding, and verification is performed once at admission and cached for the duration of an epoch.
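The once-at-admission verification with epoch-scoped caching can be sketched as a thin wrapper; `verify_fn` stands in for the detached-signature check over the canonical encoding.

```python
class EpochVerificationCache:
    """Verify each attestation once at admission; reuse the result until
    the next epoch boundary clears the cache."""

    def __init__(self, verify_fn):
        self.verify_fn, self.cache = verify_fn, {}

    def new_epoch(self):
        self.cache.clear()   # cached verdicts live for one epoch only

    def verified(self, descriptor_hash, signature):
        key = (descriptor_hash, signature)
        if key not in self.cache:
            self.cache[key] = self.verify_fn(descriptor_hash, signature)
        return self.cache[key]
```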

The ledger is parameterized by an epoch boundary that controls the granularity of signed checkpoints, by a retention horizon that bounds how long the per-contribution descriptors remain in the ledger before they are summarized into class-level aggregates, and by a deletion-event channel that the ledger consumes to record contribution withdrawals. Deletion events do not erase prior ledger entries; they append a withdrawal record that downstream consumers, including inference-time admissibility gates, treat as authoritative. The ledger header records the accountant choice, the corpus-level floors on clipping norm and noise multiplier, the depth-profile schema version, and the public keys of the signing parties, which together fix the verification contract for any later auditor.
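The deletion-event semantics reduce to a small invariant worth stating in code: a withdrawal appends rather than erases, and consumers treat the appended record as authoritative. A minimal sketch, with hypothetical record shapes:

```python
def record_withdrawal(ledger_entries, descriptor_hash):
    # Prior entries are never erased; the withdrawal is appended.
    ledger_entries.append({"type": "withdrawal", "descriptor": descriptor_hash})

def is_withdrawn(ledger_entries, descriptor_hash):
    # Downstream consumers (including inference-time gates) treat any
    # withdrawal record as authoritative for that descriptor.
    return any(e.get("type") == "withdrawal" and e["descriptor"] == descriptor_hash
               for e in ledger_entries)
```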

Alternative Embodiments

A first alternative embodiment replaces the contiguous depth band with a sparse depth set, in which the permitted layers are an arbitrary subset of the layer stack rather than a contiguous range. This is useful when the model exhibits a known modular decomposition, such as separate vision and language towers, and the admissibility class wishes to admit contribution into a specific tower without admitting it into adjacent layers of the same depth in the other tower.

A second alternative embodiment substitutes the per-sample noise injection with a per-microbatch injection in which calibrated noise is added once per microbatch after per-sample clipping. This trades a small loss of accounting tightness for a substantial reduction in the variance of the noise estimator and is appropriate when training-step throughput is the binding constraint.
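The per-microbatch variant differs from the per-sample path only in where the noise draw happens: clip each sample, sum, then perturb once. A sketch under the same assumptions as the earlier routing example:

```python
import math, random

def microbatch_step(sample_grads, clip_norm, sigma, rng=random):
    """Clip each sample's gradient to clip_norm, sum the microbatch,
    then add calibrated Gaussian noise once per coordinate."""
    clipped = []
    for g in sample_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append([x * scale for x in g])
    summed = [sum(col) for col in zip(*clipped)]
    return [x + rng.gauss(0.0, sigma) for x in summed]  # one draw, not one per sample
```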

A third alternative embodiment couples the depth profile to a parameter-efficient adaptation surface such as a low-rank adapter or a prefix-tuning module, so that sensitive contributions update only the adapter parameters and never the base weights. The admissibility ledger records the adapter identity rather than a depth band, and the privacy guarantee composes with the adapter's deployment scope.

A fourth alternative embodiment implements the routing layer as a compiled graph transformation rather than a runtime hook, in which the masking and clipping operations are fused into the backward graph at graph-build time. This embodiment is appropriate for production training stacks that compile the training step ahead of time and is interoperable with the runtime-hook embodiment because both produce identical numerical outputs.

A fifth alternative embodiment generalizes the noise distribution from Gaussian to a broader family that includes Laplace, discrete Gaussian, and Skellam noise, with the choice driven by the downstream accountant and by hardware constraints on random-number generation.

Composition

The primitive is designed to compose with the credentialed-contribution-attestation primitive as a strict consumer: the attestation supplies the signed bound, the admissibility class, and the contributor identity that the depth-selective router and the privacy ledger consume. The composition is one-directional in the sense that depth-selective routing requires attestation but attestation does not require routing, which permits a graceful migration in which an existing attestation-only deployment can adopt routing without re-issuing credentials.

The primitive composes downstream with the inference-time governance primitives by exporting the privacy ledger as part of the model's release manifest. Inference-time admissibility gates can then refuse to serve queries whose required generation class would draw on layers that received contribution under a privacy posture incompatible with the query's rights envelope, closing the loop between training-time privacy and inference-time rights enforcement.

Lateral composition with content-anchoring is supported by recording the canonical contribution descriptor hash in both the privacy ledger and the content anchor, so that a contribution removed from the corpus under a deletion request can be traced to all downstream training events and the privacy expenditure attributable to it can be reconciled.

Prior Art

Prior approaches to differential privacy in deep learning, including DP-SGD and its accountant variants, treat noise injection as a uniform operation over all parameters and rely on a single global privacy budget shared across the corpus. These approaches do not bind a per-contribution bound to an admissibility credential, do not restrict contribution to a depth band, and do not produce a ledger that composes with downstream rights enforcement. Layer-freezing and adapter-only fine-tuning approaches restrict updates to specific parameters but do so as an optimization choice rather than as a signed admissibility condition, and they do not compose with a per-contribution accountant.

Federated-learning approaches that aggregate gradients with secure aggregation and per-client clipping share the per-contribution-bound idea but operate at the client granularity rather than the contribution granularity and do not address depth-selective routing or composition with content-anchoring. The disclosed primitive is distinguished by the conjunction of credentialed admissibility, signed per-contribution bounds, structural depth restriction, and ledgered composition.

Disclosure Scope

The disclosure covers the admissibility-gated, depth-selective, ledgered differential-privacy training primitive in any embodiment that admits contribution under a signed per-contribution noise bound, restricts the contribution's gradient to a declared depth profile, and records the realized privacy expenditure in a tamper-evident ledger that composes with credentialed-contribution-attestation. The disclosure extends to embodiments in which the depth profile is contiguous, sparse, or adapter-bound; in which the noise distribution is Gaussian, Laplace, discrete, or otherwise calibrated; and in which the routing layer is implemented as a runtime hook or a compiled graph transformation. The disclosure does not extend to embodiments that omit the credentialed-contribution-attestation linkage, that omit the per-contribution signed bound, or that omit the depth restriction, since each of these elements is essential to the structural privacy guarantee that distinguishes the primitive from prior noise-only approaches.

Invented by Nick Clark
Founding Investors: Anonymous, Devin Wilkie