Temporal Executability Forecasting
by Nick Clark | Published March 27, 2026
Capability awareness is not only a snapshot of what an agent can do now; the cognition patent specifies that an agent must also estimate how its capability will evolve under the actions it is about to commit to. Temporal executability forecasting is the pre-action self-prediction step that produces an envelope trajectory, evaluates the planned task against that trajectory, and either admits or rejects the plan before execution begins. The forecast is structured, named, versioned, and recorded in lineage alongside the plan it justified.
Mechanism
The forecaster takes three structured inputs: the current envelope at its named version, the planned action sequence as a sequence of canonical action records, and a model reference that maps action records to predicted envelope deltas. The model reference is itself versioned and may be a calibrated physical model, a learned regression, a tabulated empirical decay curve, or a composition of these; the cognitive layer treats the model uniformly because all model classes export the same delta schema. The output is a trajectory: a sequence of predicted envelope states, each tagged with the action that produced it and a calibrated uncertainty band.
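The input/output shape described above can be sketched as follows. This is an illustrative Python sketch, not the patent's schema: all class names, field names, and the toy tabulated-decay model are assumptions made for the example.

```python
from dataclasses import dataclass

# Illustrative sketch of the forecaster's inputs and output.
# All names are hypothetical, not taken from the patent text.

@dataclass(frozen=True)
class EnvelopeState:
    version: str        # named envelope version
    dims: dict          # dimension -> available capability
    band: dict          # dimension -> half-width of calibrated uncertainty band

@dataclass(frozen=True)
class ActionRecord:
    action_id: str
    required: dict      # dimension -> capability the action needs

class TabulatedDecayModel:
    """One possible model class: a tabulated per-action envelope delta.
    A calibrated physical model or learned regression would export the
    same delta interface, so the forecaster treats them uniformly."""
    version = "decay-model/v1"

    def __init__(self, deltas):
        self.deltas = deltas  # action_id -> {dimension: delta}

    def apply(self, env, action):
        d = self.deltas.get(action.action_id, {})
        dims = {k: v + d.get(k, 0.0) for k, v in env.dims.items()}
        return EnvelopeState(env.version, dims, env.band)

def forecast(envelope, actions, model):
    """Produce a trajectory: one predicted envelope state per planned action,
    each tagged with the action that produced it."""
    steps, current = [], envelope
    for act in actions:
        current = model.apply(current, act)
        steps.append((act.action_id, current))
    return steps
```

A model class is interchangeable here so long as it exposes the same delta interface, which is the uniformity property the text describes.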
Evaluation proceeds dimension by dimension and step by step. At each step the planned action's required capability tuple is compared against the predicted envelope at that step, including the uncertainty band. A step is admissible if every required dimension lies within the predicted envelope at the chosen confidence level; the plan is admissible if every step is admissible. The verdict is a structured record naming the trajectory hash, the confidence level, the per-step outcomes, and the first step (if any) at which admissibility failed. A failed forecast does not silently degrade the plan; it returns a precise reason that the planner can use to revise.
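The step-by-step admissibility check might look like the sketch below, using plain dicts for compactness. Shrinking the predicted envelope by a confidence-scaled uncertainty band is one simple way to realize "within the predicted envelope at the chosen confidence level"; the function and field names are illustrative.

```python
def admissible(trajectory, plan, confidence=1.0):
    """Dimension-by-dimension, step-by-step admissibility verdict (sketch).

    trajectory: list of (predicted_dims, band) per step, each a dict
                mapping dimension -> value / band half-width.
    plan:       list of required-capability dicts, one per step.
    Returns a structured verdict: overall admissibility, per-step outcomes,
    and the first failing step, mirroring the verdict record in the text.
    """
    outcomes, first_fail = [], None
    for i, ((dims, band), required) in enumerate(zip(trajectory, plan)):
        # A step passes only if every required dimension fits inside the
        # predicted envelope after discounting the uncertainty band.
        ok = all(required[d] <= dims[d] - confidence * band.get(d, 0.0)
                 for d in required)
        outcomes.append(ok)
        if not ok and first_fail is None:
            first_fail = i
    return {"admissible": first_fail is None,
            "first_failed_step": first_fail,
            "per_step": outcomes}
```

Because the verdict names the first failing step rather than just a boolean, a planner can revise exactly the portion of the plan that failed.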
The forecaster is a pre-commitment instrument. It runs before any action is dispatched, and its verdict is a precondition for entering the commitment stage. Once execution begins, the forecaster continues to run in a monitoring mode: as each action completes, the observed envelope is compared to the predicted envelope, and the residual is fed back into the model reference to update its calibration. Significant residuals trigger plan re-evaluation; persistent residuals trigger model-version revision. Both events are structural, not heuristic, and both are logged.
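The monitoring loop's two structural triggers can be sketched as below. The thresholds, window size, and event-record fields are hypothetical; the point is that a significant residual and a persistent run of residuals produce distinct, structured events rather than heuristic reactions.

```python
from collections import deque

class ResidualMonitor:
    """In-flight residual monitoring (illustrative sketch).

    After each completed action, the observed envelope value is compared
    to the prediction. A single significant residual yields a plan
    re-evaluation event; a persistent run of them yields a model-version
    revision event. Both are returned as structured, loggable records.
    """
    def __init__(self, significant=0.1, persistent_n=3, window=5):
        self.significant = significant    # per-step residual threshold
        self.persistent_n = persistent_n  # exceedances that count as persistent
        self.recent = deque(maxlen=window)

    def observe(self, dim, predicted, observed):
        residual = abs(observed - predicted)
        exceeded = residual > self.significant
        self.recent.append(exceeded)
        events = []
        if exceeded:
            events.append({"event": "replan", "dim": dim, "residual": residual})
        if sum(self.recent) >= self.persistent_n:
            events.append({"event": "model_revision", "dim": dim})
        return events
```

In a full system the residuals would also feed back into the model reference's calibration; this sketch shows only the event-emission side.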
The trajectory record is treated as immutable once emitted. If a plan needs to be revised in light of new information, a fresh forecast is produced and the new trajectory is recorded alongside the old one with an explicit supersession reference. This append-only discipline is what permits faithful replay: a reviewer reconstructing a sequence of decisions can see not only the trajectory that justified the final action but also the earlier trajectories that were considered and superseded, the events that triggered each supersession, and the model-reference versions that were active at each step.
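An append-only log with explicit supersession references might be sketched as follows; record fields and method names are assumptions for illustration.

```python
class TrajectoryLog:
    """Append-only trajectory log with supersession references (sketch).

    Records are never mutated. A revision appends a fresh record citing
    the hash of the record it supersedes and the event that triggered the
    supersession, so a reviewer can replay the full decision history.
    """
    def __init__(self):
        self.records = []  # append-only; no record is ever edited

    def emit(self, trajectory_hash, supersedes=None, trigger=None):
        rec = {"hash": trajectory_hash,
               "supersedes": supersedes,
               "trigger": trigger}
        self.records.append(rec)
        return rec

    def chain(self, trajectory_hash):
        """Walk back through supersession references, newest first."""
        by_hash = {r["hash"]: r for r in self.records}
        out, h = [], trajectory_hash
        while h is not None:
            rec = by_hash[h]
            out.append(rec)
            h = rec["supersedes"]
        return out
```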
Forecast generation is itself a recorded computation. The forecaster cites the model-reference version, the envelope version, the action sequence hash, and the policy parameters in effect; given these inputs the forecast is reproducible. A reviewer who suspects a forecast was wrong can re-run the forecaster against the recorded inputs and verify that the same trajectory is produced, isolating the question of model fidelity from the question of system integrity.
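The reproducibility property rests on a canonical, order-stable serialization of the cited inputs. A minimal sketch, with hypothetical field names:

```python
import hashlib
import json

def forecast_citation(envelope_version, model_version, actions, policy):
    """Hash a canonical serialization of the forecast's cited inputs.

    Sorting keys and fixing separators makes the serialization order-stable,
    so re-running the forecaster against the recorded inputs must reproduce
    the same citation hash; a mismatch isolates a model-fidelity question
    from a system-integrity question.
    """
    payload = json.dumps({
        "envelope_version": envelope_version,
        "model_version": model_version,
        "action_sequence": actions,
        "policy": policy,
    }, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```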
Operating Parameters
The horizon parameter sets how far into the future the forecast extends. Short horizons reduce uncertainty accumulation but may admit plans whose late-stage infeasibility is invisible. Long horizons expose more failure modes but require more confident models. The cognition patent treats horizon as a policy-declared value tied to the task class, not a global constant.
The confidence parameter determines the required coverage of the uncertainty band. A safety-critical task may require that the predicted envelope contain the required capability at the 99th percentile of the band; a low-stakes task may admit at the 80th percentile. Confidence is dimension-specific: a plan may be admitted at high confidence on reach and lower confidence on duty-cycle thermals if policy permits.
The granularity parameter controls how finely the trajectory is sampled. Coarse trajectories are cheaper but can miss transient infeasibility between sampled points; fine trajectories cost more compute but expose narrow windows in which the envelope dips below the required capability. The model reference declares which dimensions are smooth (and tolerate coarse sampling) and which can change discontinuously (and demand fine sampling near events).
A model-trust parameter governs how aggressively the forecaster down-weights its predictions when the residual log shows recent miscalibration. A persistently miscalibrated model is treated as less authoritative until a recalibration event closes the residuals; this prevents an over-confident model from continuing to admit plans that observed reality is rejecting.
A fallback parameter declares what the cognitive layer is to do when the forecaster fails to admit any candidate plan within a deadline. Options include declaring the task currently infeasible, deferring to an operator, opening a negotiation with a peer for additional capability, or emitting a request for preconditioning. Each fallback is itself a structured action recorded in lineage, and the policy declaration of which fallback applies in which context is versioned and auditable.
A revision-cadence parameter governs how often the forecaster re-runs while a plan is in flight. High cadence catches in-flight infeasibility quickly at the cost of compute; low cadence is cheaper but may admit windows in which the agent operates under an obsolete trajectory. The cadence is policy-declared per task class, recognizing that a long-horizon logistics task and a fast-cycle manipulation task have different optimal revision intervals.
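The six parameters above form a policy declaration tied to a task class. One way to represent that declaration, with all names and values purely illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ForecastPolicy:
    """Policy-declared forecasting parameters for one task class (sketch)."""
    task_class: str
    horizon_steps: int        # how far the trajectory extends
    confidence: dict          # dimension -> required band coverage (dimension-specific)
    granularity: dict         # dimension -> sampling interval (fine near events)
    model_trust_floor: float  # down-weight a model whose trust falls below this
    fallback: str             # "infeasible" | "operator" | "negotiate" | "precondition"
    revision_cadence_s: float # in-flight re-forecast interval, seconds

# A hypothetical safety-critical declaration: tight confidence on reach,
# fine sampling, operator fallback, high revision cadence.
SAFETY_CRITICAL = ForecastPolicy(
    task_class="manipulation/high-stakes",
    horizon_steps=200,
    confidence={"reach": 0.99, "thermal_duty": 0.95},
    granularity={"reach": 1, "thermal_duty": 5},
    model_trust_floor=0.9,
    fallback="operator",
    revision_cadence_s=0.5,
)
```

Making the declaration an immutable, versioned value is what keeps it auditable: a reviewer can cite exactly which policy was in force when a plan was admitted.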
Alternative Embodiments
In an embodied robotic agent the forecaster predicts battery, thermal, joint-wear, and tool-degradation trajectories. In a software agent it predicts API quota consumption, credential expiry windows, rate-limit headroom, and downstream-service latency drift. In a vehicle agent it predicts fuel or charge state, tire wear, and weather-driven sensor degradation. The structural form of the forecaster is identical across embodiments; only the model reference and the dimension set differ.
In therapeutic and companion-AI agents the forecaster predicts dimensions that are not physical at all: the projected coherence of a long conversation under a specified context budget, the projected drift of an agent's persona under a planned interaction style, or the projected divergence of an agent's recommendations from a declared safety policy. These dimensions follow the same admissibility procedure as physical capabilities, with the model reference supplying the predictions and the policy supplying the confidence requirements. Treating non-physical capabilities under the same forecaster is a deliberate design choice: it ensures that all pre-action self-prediction obeys the same audit and lineage discipline regardless of domain.
Forecasts can be probabilistic rather than scalar. A probabilistic forecast emits a distribution over envelope trajectories, and the admissibility check becomes a probability-of-admission computation. Probabilistic forecasts compose naturally with risk-aware policies that allow some non-zero probability of in-flight infeasibility provided a structured fallback is in place.
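A probability-of-admission computation can be sketched with Monte Carlo sampling over the trajectory distribution. The sampler interface and the plan representation here are assumptions made for the example.

```python
import random

def probability_of_admission(sample_trajectory, plan, n=2000, seed=0):
    """Monte Carlo estimate of the probability that a plan is admissible.

    sample_trajectory(rng) draws one envelope trajectory as a list of
    dimension -> capability dicts, one per step. The estimate is the
    fraction of sampled trajectories in which every step's requirements
    fit inside the sampled envelope.
    """
    rng = random.Random(seed)
    admitted = 0
    for _ in range(n):
        traj = sample_trajectory(rng)
        ok = all(req[d] <= dims[d]
                 for dims, req in zip(traj, plan) for d in req)
        admitted += ok
    return admitted / n
```

A risk-aware policy can then admit a plan whenever this probability clears a declared threshold, provided a structured fallback is in place for the residual risk.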
Forecasts can be conditional on negotiations. A predicted envelope trajectory may presume a grant of contested scope from a peer; if the negotiation does not produce that grant, the conditional trajectory is invalidated and a different trajectory must be forecast. The conditional dependence is represented in the trajectory record, so reviewers can see exactly which negotiations a given plan depended on.
Forecasts can be shared. An agent may publish projections of its forecast to peers, supporting predictive negotiation and cell-level coordination. The published projection carries the trajectory hash, the model-reference version, and a calibrated uncertainty, enough for a peer to reason about the forecast without needing the underlying model.
Forecasts can be ensembles. Multiple model references can be evaluated in parallel against the same plan, and the admissibility verdict can be computed under a declared aggregation rule, such as unanimity, majority, or worst-case envelope intersection. Ensemble forecasting is useful when a single model class is not trusted to capture all the relevant decay or growth dynamics, and it is structurally identical to single-model forecasting except for the aggregation step, which is itself a policy-declared and recorded computation.
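The aggregation step for boolean per-model verdicts is small enough to sketch directly. The rule names below are illustrative; the worst-case envelope-intersection rule mentioned above would instead intersect the predicted envelopes before running a single admissibility check, so it is noted but not implemented here.

```python
def aggregate_verdicts(verdicts, rule):
    """Combine per-model admissibility verdicts under a declared rule (sketch).

    verdicts: list of booleans, one per model reference in the ensemble.
    rule:     "unanimity", "majority", or "any" (hypothetical rule names).
    An undeclared rule is an error, not a silent default: the aggregation
    rule is itself policy-declared and recorded.
    """
    if rule == "unanimity":
        return all(verdicts)
    if rule == "majority":
        return sum(verdicts) * 2 > len(verdicts)
    if rule == "any":
        return any(verdicts)
    raise ValueError(f"undeclared aggregation rule: {rule}")
```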
Forecasts can be counterfactual. The same forecaster can be invoked against a hypothetical action sequence the agent does not intend to execute, producing a comparative trajectory used to evaluate alternatives at planning time. Counterfactual forecasts are tagged as such in their records so they cannot be confused with admissibility forecasts attached to actual plans, but they share the schema, the model references, and the audit properties of their committed counterparts.
Composition with Other Mechanisms
Temporal forecasting depends on embodied envelopes for the snapshot it extends and on negotiation for the conditional grants its trajectories may presume. It composes with the lineage system by emitting trajectory records that plans cite, model-version records that trajectories cite, and residual records that calibration events cite. A complete decision audit traces from outcome through plan, through trajectory, through model, through envelope, all under a single canonical schema.
Forecasting also closes the loop on regulatory accountability. A regulator examining a deployment can verify that no plan was committed without an admitted forecast, that no admitted forecast was produced by a model whose recent residuals exceeded declared trust, and that every observed in-flight infeasibility corresponds to a logged residual that triggered the appropriate response. These are structural properties of the lineage rather than statistical claims about behavior.
Forecasting composes with policy governance more broadly. The policy reference declares not only the parameters of forecasting but also the conditions under which a forecast's authority can be overridden, by whom, and with what recorded justification. An operator override of a rejected plan, for instance, is admitted only when policy permits and produces a structured override record citing the rejected forecast, the overriding authority, and the rationale. The override does not modify the forecast; it is a parallel record that takes responsibility for the deviation, preserving the integrity of the forecast log itself.
Distinction from Prior Art
Predictive maintenance systems forecast component health but do not feed the forecast into pre-action admissibility. Model-predictive controllers forecast state trajectories but operate at the control layer, below the plan-commitment boundary, and do not produce structured admissibility records the cognitive layer can cite. Reinforcement-learning agents may learn to anticipate failure but do so implicitly, in policy weights, and cannot expose a named, versioned, auditable trajectory for review.
The forecaster disclosed here is structurally different. It is a pre-commitment instrument that emits versioned trajectory records, cites a versioned model reference, evaluates admissibility under a declared confidence policy, and feeds residuals back into model trust through a logged process. Forecasts are objects in the cognitive layer, not behaviors of a controller, and their properties are checkable from the lineage rather than inferred from outcomes.
Disclosure Scope
This article discloses the inputs, outputs, evaluation procedure, parameter set, monitoring loop, and composition properties of temporal executability forecasting as defined in Chapter 6 of the cognition patent. It covers scalar, probabilistic, conditional, shared, ensemble, and counterfactual forecast forms, and identifies the integration points with envelopes, negotiation, and lineage. Specific model classes, calibration schedules, and operator-tunable confidence schedules are reserved for licensee implementation guidance and are not part of this public disclosure.