Integrity-Constrained Forecasting
by Nick Clark | Published March 27, 2026
The Cognition Patent specifies a forecasting engine in which every forecast is constrained to an explicit integrity envelope: a declared region of input space within which the model is authorized to extrapolate, a confidence interval that must accompany every output, and a set of audit-required regions where forecasts may only be produced under elevated oversight. Inputs that fall outside the envelope do not yield silent best-effort predictions. They yield a structurally marked refusal, recorded in the agent's lineage, that downstream consumers can route, escalate, or audit rather than treat as fact.
Mechanism
The forecasting engine wraps every model invocation in a three-part envelope check. The first part is the extrapolation bound: a declared region of the model's input space within which the model is permitted to produce a forecast. The bound is expressed declaratively as a set of constraints over the input fields, derived from the training distribution and from the operator's policy about how far beyond that distribution the model may be trusted. An input that satisfies the bound is admitted; an input that violates it is rejected at the gate, before the model is invoked.
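A minimal sketch of the gate, assuming a bound expressed as per-field range constraints; the names `FieldBound`, `ExtrapolationBound`, and `gated_invoke` are illustrative, not taken from the disclosure:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Illustrative: an extrapolation bound expressed as declarative
# per-field constraints, checked before the model is ever invoked.
@dataclass(frozen=True)
class FieldBound:
    field: str
    lo: float
    hi: float

    def admits(self, inputs: dict) -> bool:
        value = inputs.get(self.field)
        return value is not None and self.lo <= value <= self.hi

@dataclass(frozen=True)
class ExtrapolationBound:
    constraints: tuple[FieldBound, ...]

    def admits(self, inputs: dict) -> bool:
        return all(c.admits(inputs) for c in self.constraints)

def gated_invoke(bound: ExtrapolationBound,
                 model: Callable[[dict], float],
                 inputs: dict) -> Optional[float]:
    # Reject at the gate: an out-of-bound input never reaches the model.
    if not bound.admits(inputs):
        return None  # stand-in for the structured refusal described below
    return model(inputs)
```

The key property is that rejection happens before invocation: the model is never asked to extrapolate beyond its declared region.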
The second part is the confidence interval. Every forecast produced by the engine is accompanied by a quantified confidence interval computed from the model's calibration data. The interval is not optional metadata; it is a structural component of the forecast object, and downstream consumers that ignore it are violating the engine's contract. The interval is produced by a calibration function declared as part of the model's manifest, so that the relationship between point estimate and interval is reproducible and auditable rather than implementation-dependent.
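One way the structural interval might look, assuming a calibration function published in the manifest; `Forecast`, `calibrated_forecast`, and the fixed half-width calibration are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative: the interval is a required field of the forecast object,
# produced by a calibration function named in the model's manifest.
@dataclass(frozen=True)
class Forecast:
    point: float
    interval: tuple[float, float]  # structural component, not optional metadata
    confidence: float              # coverage level of the interval

def calibrated_forecast(point: float,
                        calibrate: Callable[[float], tuple[float, float]],
                        confidence: float) -> Forecast:
    lo, hi = calibrate(point)
    return Forecast(point=point, interval=(lo, hi), confidence=confidence)

# A reproducible calibration function, e.g. a fixed half-width derived
# from held-out calibration data (purely illustrative).
HALF_WIDTH = 1.5
calibrate = lambda p: (p - HALF_WIDTH, p + HALF_WIDTH)
```

Because the calibration function is declared rather than embedded, the mapping from point estimate to interval can be reproduced and audited independently of the serving code.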
The third part is the audit-required region. Some sub-regions of the admitted input space are flagged as sensitive: inputs that are admissible but whose forecasts have material consequences, regulatory exposure, or known historical instability. A forecast for an audit-required input is produced, but the forecast object is marked, and the engine emits an audit event describing the input, the forecast, and the responsible principal. The audit event is a first-class lineage record, not a log entry; it can be subscribed to, queued for human review, or routed to an external compliance system.
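A sketch of the audit path, assuming a lineage object with subscribable events; `AuditLineage`, the `exposure` field, and the sensitivity threshold are all illustrative:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AuditLineage:
    # First-class lineage records, subscribable or routable to an
    # external compliance system, rather than plain log entries.
    events: list = field(default_factory=list)

    def emit(self, inputs: dict, forecast: float, principal: str) -> dict:
        event = {"inputs": inputs, "forecast": forecast, "principal": principal}
        self.events.append(event)
        return event

def in_audit_region(inputs: dict) -> bool:
    # Illustrative: treat any request above a materiality threshold
    # as audit-required.
    return inputs.get("exposure", 0.0) > 1_000_000

def forecast_with_audit(inputs: dict, model: Callable[[dict], float],
                        lineage: AuditLineage, principal: str) -> dict:
    prediction = model(inputs)
    marked = in_audit_region(inputs)
    if marked:
        # The forecast is still produced, but the object is marked and
        # an audit event records input, forecast, and principal.
        lineage.emit(inputs, prediction, principal)
    return {"prediction": prediction, "audit_marked": marked}
```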
Out-of-envelope inputs are the central case. When an input violates the extrapolation bound, the engine does not produce a forecast. It produces a structurally marked refusal: an object with the same schema as a forecast, but carrying a sentinel value in the prediction field and a structured reason in the envelope-violation field. Downstream code is required to handle the refusal explicitly; it cannot accidentally consume it as a prediction because the type discipline forbids it. This is the operational meaning of "marked, not produced": the absence of a forecast is itself a structured signal that propagates through the planning graph.
Operating Parameters
The envelope is parameterized for each deployed model through declarative policy. The extrapolation tolerance specifies how far beyond the training distribution the bound is permitted to extend, expressed in calibrated distance units. A tightly bounded model (low tolerance) refuses inputs that are even modestly out-of-distribution; a loosely bounded model accepts wider extrapolation. Operators choose tolerance based on the cost of a wrong forecast versus the cost of a refused one.
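The tolerance might be realized, for a scalar input, as a distance from the training distribution measured in standard-deviation units; the function name and the specific distance metric are illustrative:

```python
# Illustrative: extrapolation tolerance as a calibrated distance from the
# training distribution's center, here in standard-deviation units.
def in_envelope(x: float, train_mean: float, train_std: float,
                tolerance: float) -> bool:
    distance = abs(x - train_mean) / train_std
    return distance <= tolerance

# A tightly bounded model refuses even modestly out-of-distribution
# inputs; a loosely bounded one accepts wider extrapolation.
TIGHT_TOLERANCE = 1.0
LOOSE_TOLERANCE = 4.0
```

The same input can be admitted under one policy and refused under another, which is exactly the operator's cost trade-off between a wrong forecast and a refused one.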
The confidence threshold specifies the minimum confidence required for a forecast to be returned without elevation. Forecasts whose confidence falls below the threshold are still produced but are routed through the audit channel. This converts the confidence interval from a passive annotation into an active control: low-confidence forecasts cannot reach unmonitored consumers.
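The routing rule is small enough to state directly; the channel names here are illustrative:

```python
# Illustrative: low-confidence forecasts are still produced, but are
# routed through the audit channel instead of reaching unmonitored
# consumers directly.
def route(forecast: dict, confidence_threshold: float) -> tuple[str, dict]:
    if forecast["confidence"] < confidence_threshold:
        return ("audit_channel", forecast)
    return ("direct", forecast)
```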
The audit-region specification is itself a declarative policy artifact, versioned alongside the model manifest. Regions can be declared by input-field constraints, by historical-incident replay, or by external regulatory reference. The specification is reviewed and approved as a discrete artifact, so that the question "which inputs require human review for this model" has an explicit, citable answer.
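A hypothetical shape for such an artifact, covering the three declaration modes named above; every field name, version string, and region entry is invented for illustration:

```python
# Hypothetical shape of a versioned audit-region specification, reviewed
# and approved alongside the model manifest.
audit_region_spec = {
    "version": "2026-03-01",
    "model_manifest": "demand-forecaster@4.2",      # illustrative reference
    "regions": [
        {"kind": "field-constraint", "field": "order_value", "min": 5_000_000},
        {"kind": "incident-replay", "incident": "INC-EXAMPLE-118"},
        {"kind": "regulatory-reference", "citation": "EXAMPLE-REG §1"},
    ],
}

def requires_review(inputs: dict, spec: dict) -> bool:
    # Only the field-constraint kind is evaluated against live inputs in
    # this sketch; the other kinds resolve through external tooling.
    for region in spec["regions"]:
        if region["kind"] == "field-constraint":
            if inputs.get(region["field"], 0) >= region["min"]:
                return True
    return False
```

Because the artifact is versioned and discrete, "which inputs require human review for this model" resolves to a citable document rather than scattered code paths.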
Each consumer of forecasts declares a refusal-handling policy specifying what it does when it receives a marked refusal. Typical policies include falling back to a more conservative model, escalating to a human operator, deferring the decision, or aborting the enclosing plan. The policy is declared in advance, so that out-of-envelope conditions never produce ad-hoc behavior in the consumer.
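The four typical policies can be sketched as a declared table, consulted when a refusal arrives; the enum values and consumer names are illustrative:

```python
from enum import Enum

class OnRefusal(Enum):
    FALLBACK_MODEL = "fallback"
    ESCALATE_HUMAN = "escalate"
    DEFER = "defer"
    ABORT_PLAN = "abort"

# Declared in advance per consumer, so out-of-envelope conditions never
# produce ad-hoc behavior (illustrative policy table).
consumer_policy = {
    "pricing-service": OnRefusal.FALLBACK_MODEL,
    "trade-approval": OnRefusal.ESCALATE_HUMAN,
}

def handle_refusal(consumer: str, refusal: dict) -> str:
    action = consumer_policy[consumer]
    return f"{consumer}: {action.value} ({refusal['violation']})"
```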
The envelope revision discipline governs how the bound itself changes over time. The envelope is not a static artifact; as the model is retrained, recalibrated, or exposed to new operating conditions, the bound may legitimately widen or narrow. Each revision is a typed mutation against the model's manifest, signed by an authorized principal, validated against the model's calibration evidence, and recorded in the lineage. The revision discipline prevents silent envelope drift, in which a model gradually accepts inputs it was never validated against because of accumulated small adjustments that no one reviewed in aggregate. Reviewers can ask, at any time, when the bound last moved, who moved it, and what evidence justified the move.
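A minimal sketch of a revision as a typed, attributable record; the field names are assumptions, and a content digest stands in for the cryptographic signature the disclosure describes:

```python
import hashlib
from dataclasses import dataclass

# Illustrative: an envelope revision as a typed mutation carrying the
# answers reviewers need -- when the bound moved, who moved it, and what
# evidence justified the move.
@dataclass(frozen=True)
class EnvelopeRevision:
    old_bound: tuple[float, float]
    new_bound: tuple[float, float]
    principal: str
    evidence_ref: str      # pointer to the calibration evidence
    timestamp: str

    def digest(self) -> str:
        # Stand-in for the signature over the revision's content.
        payload = (f"{self.old_bound}{self.new_bound}{self.principal}"
                   f"{self.evidence_ref}{self.timestamp}")
        return hashlib.sha256(payload.encode()).hexdigest()

revision_lineage: list[EnvelopeRevision] = []

def apply_revision(rev: EnvelopeRevision) -> tuple[float, float]:
    # Every change is appended to the lineage, so accumulated small
    # adjustments remain reviewable in aggregate.
    revision_lineage.append(rev)
    return rev.new_bound
```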
The shadow-evaluation parameter permits a model to evaluate out-of-envelope inputs in a non-binding mode, recording its predictions for later review without exposing them to consumers. Shadow evaluation is the mechanism by which an operator gathers evidence to support a future envelope widening: candidate inputs accumulate, their shadow forecasts are compared against ground truth as it becomes available, and the calibration is updated. The shadow channel is structurally separate from the production channel, with its own governance binding, so that shadow predictions cannot leak into operational use.
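A sketch of the channel separation, assuming a simple in-memory shadow log; the function names and the mean-absolute-error comparison are illustrative:

```python
from typing import Callable, Optional

# Illustrative: out-of-envelope inputs are evaluated non-bindingly.
# Shadow predictions accumulate for later comparison against ground
# truth and never reach production consumers.
shadow_log: list[dict] = []

def evaluate(inputs: dict, model: Callable[[dict], float],
             in_envelope: bool) -> Optional[float]:
    if in_envelope:
        return model(inputs)          # production channel
    shadow_log.append({"inputs": inputs,
                       "shadow_prediction": model(inputs)})
    return None                       # nothing exposed to consumers

def shadow_error(records: list[dict], truth: list[float]) -> float:
    # Evidence for a future envelope widening: error of shadow
    # predictions once outcomes become available.
    errors = [abs(r["shadow_prediction"] - t) for r, t in zip(records, truth)]
    return sum(errors) / len(errors)
```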
Alternative Embodiments
In a regulated-decision embodiment, the envelope is tuned for compliance: extrapolation tolerance is tight, audit regions are aligned with regulatory categories, and refusal handling routes to a human reviewer whose decision is itself recorded. This embodiment is appropriate for medical, financial, and legal forecasting where regulators require traceable decision rationale and explicit handling of edge cases.
In an autonomous-vehicle embodiment, the envelope is tuned for safety: the extrapolation bound is anchored to scenarios validated in simulation and on-road testing, audit regions cover known difficult conditions (weather, construction, unusual traffic), and refusal handling triggers a fallback driving policy. The forecast object's confidence interval is consumed by downstream planning to widen safety margins under uncertainty.
In a companion-AI embodiment, the envelope is tuned for boundary-respecting interaction: forecasts about user intent are constrained to regions consistent with the user's declared preferences, audit regions cover sensitive emotional or relational topics, and refusals trigger explicit acknowledgment rather than confabulation. The mechanism's contribution in this embodiment is structural avoidance of hallucinated user models.
In an enterprise-forecasting embodiment, the envelope is tuned for operational discipline: extrapolation tolerance is set per business question, audit regions align with material-decision thresholds, and refusal handling is integrated with the operator's existing decision-review workflows. The engine's contribution is making forecast-driven decisions auditable at the level of which inputs were admitted, which were refused, and which were elevated.
In a therapeutic-agent embodiment, the envelope is tuned for clinical defensibility: extrapolation bounds are anchored to the populations and conditions on which the model was validated, audit regions cover known sensitive presentations, and refusal handling triggers explicit handoff to a credentialed clinician. The lineage of envelope events forms part of the clinical record, supporting both clinician oversight and post-hoc review by quality-assurance and regulatory bodies.
In a multi-agent coordination embodiment, the envelope governs forecasts that one agent produces about another agent's behavior or about shared environmental state; out-of-envelope refusals become structural signals that coordination must fall back to a more conservative protocol rather than proceeding on speculation.
Composition With Other Cognition Primitives
The integrity envelope composes with the substrate-deployment construction described in the companion disclosure: the envelope is part of the model's manifest, so a model deployed across centralized, federated, or edge tiers carries the same envelope wherever it runs. An edge replica cannot relax the bound merely because connectivity to a coordination plane is intermittent; the bound travels with the model, and any local revision is still a typed mutation that must reconcile with the canonical lineage when connectivity returns. The envelope composes with the planning graph: refusal objects propagate as first-class branch inputs, allowing planning to react structurally to absence of forecast. The envelope composes with the agent's lineage: every admit, refuse, and audit event is recorded, so that the question "did the agent make this decision under valid forecast" has a definite answer at any later time. The envelope composes with the agent's policy reference: tolerance, threshold, audit-region specification, and refusal handling are all declarative policy artifacts subject to the same versioning and review discipline as the rest of the agent's declared state.
Prior-Art Distinction
Conventional forecasting systems produce point estimates with optional confidence metadata that consumers may or may not honor. Out-of-distribution detection, where present, is typically a separate diagnostic layer whose output is logged rather than structurally enforced; consumers continue to receive a point estimate even when the detector flags the input. Calibrated-prediction systems produce intervals but do not gate invocation on envelope membership. Active-learning and human-in-the-loop systems route uncertain cases to human review, but typically through ad-hoc plumbing rather than as a structural property of the forecast object. None of these systems unify extrapolation gating, calibrated intervals, audit-region routing, and structurally marked refusals into a single object that downstream code is type-required to handle. The Cognition Patent's contribution is the unification: the envelope is the forecast's structural contract, not a wrapper around an unchanged primitive.
Disclosure Scope
This disclosure covers forecasting engines that constrain every invocation to a declared integrity envelope comprising bounded extrapolation, calibrated confidence interval, and audit-required regions. It covers the production of structurally marked refusals for out-of-envelope inputs and their propagation as first-class objects through downstream planning. It covers the parameterization of the envelope by extrapolation tolerance, confidence threshold, audit-region specification, and refusal handling policy. It covers regulated-decision, autonomous-vehicle, companion-AI, enterprise-forecasting, therapeutic-agent, and multi-agent coordination embodiments. It does not cover the internal training or calibration algorithms used to produce the model parameters or the calibration function, which are independent art. The construction is claimed at the level of the envelope's structural commitments: any forecasting deployment in which inputs are gated against a declared bound, forecasts carry a structurally enforced interval, sensitive regions emit credentialed audit events, and out-of-bound conditions yield typed refusals rather than silent predictions falls within the disclosure regardless of the underlying model class.