Cryptographically-Bound Mutation Proposals for Governed Training Updates
by Nick Clark | Published March 27, 2026
In the disclosed training-governance subsystem, a training update is not applied directly to the live model. It is submitted as a cryptographically-bound mutation proposal evaluated against a frozen baseline. Each proposal carries an explicit scope descriptor, a rollback path, and a set of acceptance criteria. Only proposals that satisfy their acceptance criteria against the frozen baseline are merged into the next baseline; rejected proposals leave a signed artifact in the lineage record but do not alter the live model. This article specifies the proposal format, evaluation cycle, and disclosure surface for Cognition Patent prosecution.
Mechanism
The training-governance subsystem maintains an append-only sequence of frozen baselines. A frozen baseline is an immutable, content-addressed snapshot of model parameters together with a manifest that names the parameter-space partitions exposed for mutation, the policies in force at the snapshot's promotion, and the acceptance-criteria templates that proposals targeting this baseline must instantiate.
A mutation proposal is a structured artifact submitted by a training process. The proposal carries a parent-baseline reference (the content address of the frozen baseline against which the proposal was computed), a scope descriptor (the parameter-space partition the proposal modifies), a parameter-delta payload (the actual update, encoded as the differences against the parent baseline), an acceptance-criteria block (the metrics, evaluation datasets, and thresholds that the proposal commits to satisfy), a rollback descriptor (the procedure to revert the proposal if post-merge validation fails), and a signature binding all of the above to the submitting training-process identity.
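The proposal structure above can be sketched as a simple record. This is a minimal illustration, not the disclosed wire format; all field names and values are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MutationProposal:
    parent_baseline: str        # content address of the frozen parent baseline
    scope: tuple                # parameter-space partitions the delta may touch
    delta_hash: str             # hash of the parameter-delta payload
    acceptance_criteria: tuple  # (metric, evaluation dataset, threshold) entries
    rollback: str               # rollback descriptor (Class-A or Class-B)
    submitter: str              # submitting training-process identity
    signature: bytes = b""      # binds all of the above as one signed object

# Illustrative instance; the content addresses are placeholders.
p = MutationProposal(
    parent_baseline="sha256:abc123",
    scope=("layers.12-24",),
    delta_hash="sha256:def456",
    acceptance_criteria=(("held_out_loss", "eval-v3", "no-worse-than-baseline"),),
    rollback="class-a",
    submitter="trainer-07",
)
```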
The cryptographic binding is load-bearing. The proposal's signature covers the parent-baseline content address, the parameter-delta hash, the scope descriptor, and the acceptance-criteria block as a single signed object. This means a proposal cannot be re-targeted to a different parent without re-signature, the delta cannot be modified after signing, and the acceptance criteria the proposal commits to are not negotiable after submission. The signed object is the proposal; the live model is not touched until merge.
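The single-signature binding can be sketched as signing a canonical serialization of the covered fields. HMAC stands in for the deployment's real signature scheme, which the disclosure treats as substitutable; field names are illustrative.

```python
import hashlib
import hmac
import json

def sign_proposal(key: bytes, fields: dict) -> bytes:
    # The signature covers parent reference, delta hash, scope, and
    # acceptance criteria as a single canonically-serialized object.
    canonical = json.dumps(fields, sort_keys=True).encode()
    return hmac.new(key, canonical, hashlib.sha256).digest()

fields = {
    "parent_baseline": "sha256:aaa111",
    "delta_hash": "sha256:bbb222",
    "scope": ["adapter_bank.3"],
    "acceptance_criteria": [["held_out_loss", "eval-v3", "no-worse"]],
}
sig = sign_proposal(b"trainer-key", fields)

# Re-targeting the proposal to a different parent invalidates the signature.
retargeted = dict(fields, parent_baseline="sha256:ccc333")
assert sign_proposal(b"trainer-key", retargeted) != sig
```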
Evaluation is a separate phase. A proposal-evaluator instantiates the parent baseline, applies the parameter-delta within the declared scope, and runs the acceptance-criteria block: each declared metric is computed on its declared evaluation dataset and compared against its declared threshold. The evaluation is itself a content-addressed object, signed by the evaluator and bound to the proposal's signature. A proposal is admissible for merge only when an evaluation record from a trusted evaluator certifies that all acceptance criteria were satisfied.
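The evaluation pass can be sketched as follows: compute each declared metric on its declared dataset, compare against its threshold, and emit a content-addressed record. The function shape and record fields are assumptions of this sketch, not the disclosed API.

```python
import hashlib
import json

def evaluate(proposal: dict, metric_fns: dict) -> dict:
    results = {}
    for metric, dataset, threshold in proposal["acceptance_criteria"]:
        value = metric_fns[metric](dataset)            # declared metric on declared dataset
        results[metric] = [value, value <= threshold]  # one-sided bound for the sketch
    record = {
        "proposal_hash": proposal["hash"],
        "results": results,
        "admissible": all(ok for _, ok in results.values()),
    }
    # The evaluation record is itself content-addressed.
    record["content_address"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

rec = evaluate(
    {"hash": "sha256:p1",
     "acceptance_criteria": [("held_out_loss", "eval-v3", 2.10)]},
    {"held_out_loss": lambda dataset: 2.05},  # stub metric for illustration
)
```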
Merge promotes an admissible proposal to a new frozen baseline. The new baseline references its parent, embeds the proposal hash and the evaluation hash in its manifest, and becomes the new target for subsequent proposals. Rollback is the inverse operation: a baseline can be reverted to its parent by a rollback transaction that follows the proposal's rollback descriptor, producing a new baseline that names the reverted state. Because baselines are append-only, rollback does not erase history; it adds a new baseline that supersedes the rolled-back one.
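Merge and rollback over the append-only baseline sequence can be sketched as below. The record shape is illustrative; the point is that rollback appends a new baseline naming the reverted state rather than erasing history.

```python
import hashlib
import json

def content_address(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def merge(lineage: list, proposal_hash: str, evaluation_hash: str) -> str:
    parent = lineage[-1]
    baseline = {"parent": parent["address"],
                "proposal": proposal_hash,
                "evaluation": evaluation_hash}
    baseline["address"] = content_address(baseline)
    lineage.append(baseline)  # append-only: the parent is never rewritten
    return baseline["address"]

def rollback(lineage: list) -> str:
    # Revert to the parent by appending a NEW baseline that names the
    # reverted state; the rolled-back baseline remains in history.
    reverted = lineage[-1]
    baseline = {"parent": reverted["address"],
                "reverts_to": reverted["parent"]}
    baseline["address"] = content_address(baseline)
    lineage.append(baseline)
    return baseline["address"]

lineage = [{"address": "genesis", "parent": None}]
merge(lineage, "sha256:p1", "sha256:e1")
rollback(lineage)
```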
Operating Parameters
Scope descriptors enumerate the parameter-space partitions a proposal may touch. Partitions are declared at baseline-promotion time and may be coarse (whole-model) or fine (specific layers, specific adapter banks, specific embedding regions). A proposal whose delta touches parameters outside its declared scope is rejected at the syntactic gate before evaluation.
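The syntactic scope gate can be sketched as a prefix check over the delta's tensor paths, run before any evaluation. Naming partitions by tensor-path prefix is an assumption of this sketch.

```python
def in_scope(delta_keys, scope_prefixes) -> bool:
    # Every parameter the delta touches must fall inside a declared partition.
    return all(any(k.startswith(p) for p in scope_prefixes) for k in delta_keys)

scope = ("layers.12.", "layers.13.")
assert in_scope(["layers.12.attn.q", "layers.13.mlp.w1"], scope)
# A delta touching the embedding region is rejected at the gate,
# before evaluation is ever invoked.
assert not in_scope(["layers.12.attn.q", "embed.tokens"], scope)
```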
Acceptance-criteria blocks are composed of one or more criterion entries. Each entry names a metric (loss on a held-out set, calibration error on a probe set, behavioral conformance on a regression suite, capability bound on a red-team set), an evaluation dataset, and a threshold expressed as either an absolute bound or a delta-against-baseline bound. Thresholds may be one-sided (no worse than baseline) or two-sided (within an envelope). Multiple criteria are combined by conjunction by default; disjunctive and weighted combinations are supported as explicit composition operators.
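The threshold forms above can be sketched as a small criterion checker, with conjunction as the default combinator. The entry schema is illustrative, not the disclosed format.

```python
def satisfied(entry, value, baseline_value) -> bool:
    kind, bound = entry["kind"], entry["bound"]
    if kind == "absolute":            # absolute bound on the metric
        return value <= bound
    if kind == "delta-one-sided":     # no worse than baseline (+ optional slack)
        return value <= baseline_value + bound
    if kind == "delta-two-sided":     # within an envelope around baseline
        return abs(value - baseline_value) <= bound
    raise ValueError(f"unknown threshold kind: {kind}")

criteria = [
    {"metric": "held_out_loss",  "kind": "delta-one-sided", "bound": 0.0},
    {"metric": "calibration_err", "kind": "absolute",       "bound": 0.05},
]
observed = {"held_out_loss": 1.98, "calibration_err": 0.04}
baseline = {"held_out_loss": 2.00, "calibration_err": 0.05}

# Multiple criteria combine by conjunction by default.
admissible = all(satisfied(c, observed[c["metric"]], baseline[c["metric"]])
                 for c in criteria)
```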
Rollback descriptors fall into two operating classes. Class-A rollback is a parameter-revert: the rollback transaction applies the inverse of the parameter-delta within the declared scope, producing a new baseline that is parameter-equivalent to the parent. Class-B rollback is a procedural rollback: the descriptor names a sequence of compensating proposals to be evaluated and merged in order, used when a proposal's effects have entangled with subsequent proposals and a clean parameter-revert is not safe.
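Class-A rollback can be illustrated with a toy flat parameter dict: applying the inverse of the delta within scope reproduces the parent's parameters exactly. This is a sketch of the invariant, not an implementation over real tensors.

```python
def apply_delta(params: dict, delta: dict, sign: int = 1) -> dict:
    # Add (sign=+1) or subtract (sign=-1) the delta within its scope.
    return {k: params.get(k, 0.0) + sign * delta.get(k, 0.0)
            for k in set(params) | set(delta)}

parent = {"layers.12.w": 1.0, "layers.13.w": -0.5}
delta = {"layers.12.w": 0.25}

child = apply_delta(parent, delta)              # merge
reverted = apply_delta(child, delta, sign=-1)   # Class-A: inverse of the delta
assert reverted == parent                        # parameter-equivalent to parent
```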
Evaluator trust is governed by a separate policy surface. Evaluators are themselves identified by signed identities, and the baseline manifest declares the set of evaluator identities whose certifications are accepted for proposals against this baseline. Multi-evaluator policies require independent certifications from k of n evaluators before admissibility is reached. The trust policy is a parameter of the baseline, not of the individual proposal, so a proposal submitter cannot select its own evaluator.
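The k-of-n trust policy can be sketched as below. The trusted set and quorum live in the baseline manifest, which is why a submitter cannot select its own evaluator; identities and field names are illustrative.

```python
def quorum_reached(certifications, manifest) -> bool:
    trusted = set(manifest["trusted_evaluators"])
    # Only distinct, trusted, passing certifications count toward the quorum.
    distinct = {c["evaluator"] for c in certifications
                if c["evaluator"] in trusted and c["passed"]}
    return len(distinct) >= manifest["quorum_k"]

manifest = {"trusted_evaluators": ["eval-a", "eval-b", "eval-c"], "quorum_k": 2}
certs = [
    {"evaluator": "eval-a", "passed": True},
    {"evaluator": "eval-a", "passed": True},  # duplicate identity: counted once
    {"evaluator": "rogue",  "passed": True},  # untrusted evaluator: ignored
]
assert not quorum_reached(certs, manifest)

certs.append({"evaluator": "eval-c", "passed": True})
assert quorum_reached(certs, manifest)        # 2 of 3 independent certifications
```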
Lineage records are produced for every proposal regardless of outcome. An accepted proposal's lineage entry references the new baseline; a rejected proposal's lineage entry references the rejection reason and the evaluation record that produced it. Lineage is queryable along the parameter axis: for any parameter region, the system can enumerate the proposals that have written to it, the evaluations that admitted them, and the rollbacks that have reverted them.
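A query along the parameter axis can be sketched as a filter over lineage entries. The entry shape is an assumption of this sketch, not the disclosed schema.

```python
def writes_to(lineage, region_prefix: str):
    # Enumerate accepted proposals whose scope intersects the region.
    return [e["proposal"] for e in lineage
            if e["outcome"] == "accepted"
            and any(s.startswith(region_prefix) for s in e["scope"])]

lineage = [
    {"proposal": "p1", "scope": ["layers.12."], "outcome": "accepted"},
    {"proposal": "p2", "scope": ["embed."],     "outcome": "accepted"},
    {"proposal": "p3", "scope": ["layers.12."], "outcome": "rejected"},
]
# p3 left a lineage record but never wrote to the region.
assert writes_to(lineage, "layers.12") == ["p1"]
```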
Retention policy governs how long proposal payloads are kept. Manifests, signatures, and evaluation records are retained for the lifetime of the baseline lineage. Parameter-delta payloads for rejected proposals may be retained for a shorter window sufficient to support audit and forensic review, after which the payload is purged but the signed metadata remains.
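The retention rule can be sketched as a purge that drops only the rejected proposal's delta payload once its audit window lapses, leaving the signed metadata intact. Field names and time units are illustrative.

```python
def apply_retention(entry: dict, now: int, audit_window: int) -> dict:
    # Rejected payloads are purged after the audit window; manifests,
    # signatures, and evaluation records survive for the lineage lifetime.
    if (entry["outcome"] == "rejected"
            and now - entry["submitted_at"] > audit_window):
        return dict(entry, delta_payload=None)
    return entry

e = {"outcome": "rejected", "submitted_at": 100,
     "delta_payload": b"\x00\x01", "signature": "sig", "manifest": "m"}
kept = apply_retention(e, now=500, audit_window=300)
assert kept["delta_payload"] is None      # payload purged
assert kept["signature"] == "sig"         # signed metadata remains
```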
Alternative Embodiments
In a first embodiment, the parameter-delta is encoded as a dense difference over the declared scope against the parent baseline. This embodiment is straightforward and well-suited to small, targeted updates: the delta is dense within its declared partition even though it is sparse on the underlying tensor as a whole.
In a second embodiment, the parameter-delta is encoded as a low-rank decomposition or as an adapter module attached to a frozen base. The proposal carries the adapter parameters; merge installs the adapter into the new baseline's manifest without rewriting the base parameters. This embodiment supports many concurrent proposals against the same base with reduced storage and faster evaluation.
In a third embodiment, the proposal carries not a parameter-delta but a training-recipe specification: a dataset reference, an optimizer configuration, and a training-budget cap. The evaluator executes the recipe against the parent baseline within an isolated runtime, produces the resulting parameter-delta, and applies the acceptance criteria. This embodiment shifts the trust boundary from the proposer to the evaluator and is appropriate for environments where proposers cannot be trusted to compute the delta themselves.
In a fourth embodiment, acceptance criteria are augmented with policy-conformance checks drawn from the broader governance plane: data-rights conformance for the training dataset, channel-quarantine status for any interaction-derived data, and admissibility of the training process under the active training policy. A proposal that satisfies its metric criteria but fails a policy-conformance check is rejected on conformance grounds, with the rejection reason recorded in lineage.
In a fifth embodiment, proposals are organized into bundles that are evaluated and merged atomically. A bundle declares an inter-proposal acceptance criterion (for example, that the bundle as a whole improves a held-out metric while no member proposal regresses a regression suite individually). The bundle is admitted only if the bundle-level criterion is satisfied; otherwise no member is merged. This embodiment supports coordinated multi-region updates that do not make sense as isolated proposals.
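The fifth embodiment's all-or-nothing admission can be sketched as a bundle-level check: the bundle merges only if its joint criterion holds and no member individually regresses. Metric names and the gain/regression fields are illustrative.

```python
def admit_bundle(members, bundle_gain_required: float) -> bool:
    # No member proposal may regress the regression suite individually.
    no_member_regresses = all(m["regression_delta"] <= 0 for m in members)
    # The bundle as a whole must improve the held-out metric.
    bundle_gain = sum(m["held_out_gain"] for m in members)
    return no_member_regresses and bundle_gain >= bundle_gain_required

members = [
    {"held_out_gain": 0.03,  "regression_delta": 0.0},
    {"held_out_gain": -0.01, "regression_delta": 0.0},  # loses alone, helps jointly
]
assert admit_bundle(members, bundle_gain_required=0.01)

# One regressing member blocks the whole bundle: no member is merged.
members.append({"held_out_gain": 0.05, "regression_delta": 0.2})
assert not admit_bundle(members, bundle_gain_required=0.01)
```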
Composition
Mutation proposals compose with the broader cognition-patent architecture along three seams. First, the admissibility gate that filters individual training data points feeds into the proposal layer: data points rejected by the gate cannot enter a proposal's training set, and the proposal carries a manifest of admitted-data hashes that the evaluator verifies. This closes the path where rejected data is silently absorbed into a proposal. Second, the policy engine that governs runtime capability binding is the same engine that emits the policy-conformance checks consumed by the acceptance-criteria block; a single policy change therefore takes effect both at runtime and at training-merge time without separate authoring. Third, the lineage record is a peer of the runtime audit trail and shares its query interface, so a downstream behavioral change in the live model can be traced backward to the merge that introduced it, the proposal that was merged, the evaluation that admitted the proposal, and the data that the proposal was trained on.
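The first composition seam, the admitted-data manifest check, can be sketched as a subset test against the admissibility gate's accept list. Hash values and the function shape are hypothetical.

```python
def verify_data_manifest(proposal_data_hashes, gate_accepted_hashes) -> bool:
    # Every hash in the proposal's training-set manifest must have been
    # admitted by the gate; otherwise rejected data was silently absorbed.
    return set(proposal_data_hashes) <= set(gate_accepted_hashes)

accepted = {"h1", "h2", "h3"}
assert verify_data_manifest(["h1", "h2"], accepted)
assert not verify_data_manifest(["h1", "h9"], accepted)  # h9 was gate-rejected
```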
The proposal-baseline structure is also the load-bearing object for several higher-level features described elsewhere in the patent family: federated training (where proposals from independent submitters are evaluated against a shared baseline), continuous training (where proposals are emitted on a schedule and the baseline lineage is the model's release history), and supervised fine-tuning under rights constraints (where the acceptance-criteria block carries the rights-conformance check as a first-class criterion). Each of these features is expressed as a configuration of proposals and baselines; no new merge primitive is required.
Prior-Art Distinction
Conventional training pipelines apply gradient updates directly to a live model with no signed artifact mediating between the data and the parameters. Model-versioning systems snapshot trained models but do not constrain the path from data to parameters; the snapshot is a result, not a proposal. Reproducible-training frameworks bind training scripts to data hashes but do not separate proposal from merge, do not require evaluator-signed acceptance certifications, and do not carry a structured rollback path. Continuous-training and online-learning systems update parameters incrementally without explicit acceptance gates. The disclosed primitive differs in three respects: the unit of training change is a signed proposal whose acceptance criteria, rollback path, and scope are bound at submission; evaluation is a separate phase performed by independently signed evaluators against a content-addressed parent baseline; and merge produces a new baseline that supersedes the parent without erasing it.
Disclosure Scope
This disclosure covers the proposal artifact format, the cryptographic binding of parent-baseline reference, scope descriptor, parameter-delta, acceptance criteria, and rollback descriptor under a single signature, the evaluator role and the evaluator-signed certification, the merge transaction and the resulting baseline lineage, the rollback classes and their interaction with append-only lineage, and the five embodiment variants enumerated above. It also covers the composition seams with the data-admissibility gate, the policy engine, and the runtime audit trail. It does not cover the optimization algorithms used to compute parameter-deltas, which are out of scope for this disclosure, nor the underlying signature scheme, which is a substitutable cryptographic primitive selected at deployment time.