Branch Classification System

by Nick Clark | Published March 27, 2026

Forecast branches produced by the planning graph are classified at construction by their epistemic character — counterfactual, extrapolation, corroborating, or contradictory — and the assigned class deterministically governs each branch's admissibility for downstream policy decisions. Classification is structural, recorded in the branch's canonical fields, and immutable across the branch's lifecycle.


Mechanism

The Branch Classification System is defined in Chapter 4 of the cognition patent as a structural component of the forecasting engine. Every branch admitted to the planning graph carries a classification tag drawn from a closed taxonomy. The four primary classes are counterfactual, extrapolation, corroborating, and contradictory, and each classification reflects the relationship between the branch's premise and the agent's currently verified state at the moment the branch was constructed.

A counterfactual branch is one whose root premise inverts or substitutes a verified observation: it asks what would follow if a known fact were otherwise. An extrapolation branch projects forward from verified state along a hypothesized trajectory whose generative model is itself uncertain; its premise is consistent with verified state but its forward dynamics are not. A corroborating branch is constructed to test whether an existing belief or commitment continues to hold under newly observed conditions; its purpose is verification, and its outcome adds confidence to or removes confidence from the parent belief. A contradictory branch is one constructed because an observation has surfaced that conflicts with verified state; its purpose is to localize and resolve the contradiction.
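The closed taxonomy described above can be sketched as a discrete enumeration. This is a minimal illustration, assuming the four named classes; the class names and comments paraphrase the definitions given here, and nothing about the actual canonical field encoding is specified by the source.

```python
from enum import Enum

class BranchClass(Enum):
    """Closed taxonomy of epistemic branch classes (illustrative sketch)."""
    COUNTERFACTUAL = "counterfactual"  # root premise inverts or substitutes a verified observation
    EXTRAPOLATION = "extrapolation"    # premise consistent with verified state; forward dynamics uncertain
    CORROBORATING = "corroborating"    # constructed to test whether an existing belief still holds
    CONTRADICTORY = "contradictory"    # constructed to localize and resolve a conflict with verified state
```

Because the taxonomy is closed, any value outside these four (or their policy-defined subclasses) is not a valid classification.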

Classification is performed deterministically at branch construction time by an evaluation function that inspects the branch's premise, its inputs, and its relationship to the verified memory region. The classifier does not infer the class from later content; the class is fixed at construction and recorded in the branch's canonical fields. Subsequent operations on the branch — expansion, evaluation, scoring, promotion, pruning, dormancy — read the class to determine admissibility, but they cannot mutate it. If the epistemic character of a line of reasoning changes, a new branch is constructed with the appropriate class and the original is closed; the classification itself is never overwritten.
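The immutability property can be illustrated with a frozen record: the class is set once at construction and any later mutation attempt fails, forcing callers to construct a new branch instead. This is a hypothetical sketch; the field names (`branch_id`, `branch_class`, `premise`) are illustrative, not drawn from the source.

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Branch:
    """A branch whose classification is fixed at construction time."""
    branch_id: str
    branch_class: str  # recorded in canonical fields; never overwritten
    premise: str

b = Branch("b-001", "corroborating", "belief X still holds under observation O")

try:
    b.branch_class = "extrapolation"  # downstream operations read the class but cannot mutate it
except FrozenInstanceError:
    # If the epistemic character changes, a new branch is constructed
    # with the appropriate class and the original is closed.
    pass
```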

The class governs admissibility through policy-defined rules that are evaluated whenever a downstream subsystem requests branches for a particular purpose. A request from the action policy for branches to inform an imminent commitment will admit corroborating branches and, depending on configuration, extrapolation branches above a confidence floor, while excluding counterfactual branches outright. A request from the inquiry subsystem for branches to formulate a clarifying question may invert this admissibility profile, preferring counterfactual and contradictory branches. The same branch graph is therefore used differently by different consumers, with the classification serving as the structural key that controls each consumer's view.

Because admissibility is a function of class and consumer rather than of branch identity, the system supports per-purpose views over a single underlying graph without copying or partitioning the graph itself. Two consumers viewing the same graph at the same moment may see disjoint subsets of branches; the consumers do not need to be aware of one another, and the policy that governs each view is independently specifiable. This separation between graph structure and consumer view is the structural property that allows speculative reasoning to coexist with commitment-grade reasoning in the same forecasting engine without leakage from one to the other.
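The per-consumer view mechanism can be sketched as class-keyed filters over one shared graph. The consumer names, confidence floor, and predicate shapes below are assumptions chosen to mirror the examples in the text; they are not a specification of the policy language.

```python
# One underlying graph, never copied or partitioned.
GRAPH = [
    {"id": "b1", "class": "corroborating"},
    {"id": "b2", "class": "counterfactual"},
    {"id": "b3", "class": "extrapolation", "confidence": 0.9},
]

# Hypothetical per-consumer admissibility predicates, independently specifiable.
VIEWS = {
    "action_policy": lambda b: b["class"] == "corroborating"
        or (b["class"] == "extrapolation" and b.get("confidence", 0.0) >= 0.8),
    "inquiry": lambda b: b["class"] in {"counterfactual", "contradictory"},
}

def view(consumer: str) -> list[str]:
    """Return the class-filtered view of the shared graph for one consumer."""
    return [b["id"] for b in GRAPH if VIEWS[consumer](b)]
```

Note that the two consumers' views here are disjoint, and neither consumer needs to know the other exists.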

Operating Parameters

The classifier accepts inputs drawn from the branch's construction context: the premise expression, the references it makes into verified memory and observation memory, and the construction reason supplied by the caller. The classifier emits a class label, a justification record citing the canonical fields it consulted, and a confidence-of-classification scalar that is distinct from the branch's content confidence and is used to flag ambiguous constructions for review.

Admissibility rules are expressed in the policy reference and are keyed by the requesting subsystem and the request purpose. A policy may, for example, admit only corroborating branches into the action policy's view by default, admit extrapolation branches when their content confidence exceeds a configurable threshold, and admit counterfactuals only when the inquiry subsystem is the consumer. The thresholds and per-class admissibility flags are deployment-tunable without modification of the classifier or the planning graph.

Lifecycle parameters per class control how branches age. Counterfactual branches typically have shorter retention windows because their value is highest near the moment of construction. Corroborating branches are retained until the parent belief is either reaffirmed or demoted. Contradictory branches are retained until the contradiction is resolved by either accepting the new observation and revising verified state or rejecting it and recording the rejection rationale. Extrapolation branches are subject to a freshness clock relative to the trajectory's origin observation, and become dormant when the clock expires.

The classifier's confidence-of-classification scalar is itself an operating parameter. Branches whose classification confidence falls below a configured threshold are routed to a review queue rather than admitted directly into the planning graph; they may be re-examined by a higher-resolution classifier, escalated to inquiry, or rejected with a recorded justification. This second-order parameter prevents ambiguous epistemic characters from being silently committed to a class label that subsequent consumers will treat as authoritative. The threshold is tunable per deployment and per consumer, allowing strict environments to demand high classification confidence while permissive environments accept lower-confidence labels with the understanding that admissibility rules downstream will provide additional filtering.

Alternative Embodiments

The taxonomy may be extended in deployment-specific embodiments to include subclasses, for example partitioning extrapolation into short-horizon and long-horizon variants, or partitioning counterfactual into intervention and observation variants. Subclasses inherit the admissibility profile of their parent class by default and may override it through policy. Alternative classifier embodiments include rule-based classifiers that pattern-match on the premise structure, learned classifiers that operate over featurized premises with calibrated confidence, and hybrid classifiers that use rules to assign the class and a learned model to assign the classification confidence. In every embodiment, the class itself remains a discrete value drawn from the closed taxonomy, recorded immutably in canonical fields.

Alternative consumer embodiments include not only the action policy and inquiry subsystem but also explanation generators, training-data curators, and external auditors. Each consumer presents an admissibility request, and the same branch graph yields a class-filtered view appropriate to that consumer's role. In multi-agent deployments, the classification travels with the branch when shared between agents, allowing peer agents to apply their own admissibility policies without re-classifying.

Further alternative embodiments include classification at multiple resolutions: a coarse class assigned synchronously at construction and a refined subclass assigned asynchronously by a downstream classifier. In such embodiments, admissibility rules may key on either the coarse class or the refined subclass, with the asynchronous refinement producing a recorded reclassification event that is itself part of lineage. The branch's original class remains immutable; the refinement is a separate canonical field that supplements rather than replaces the construction-time label.

Composition With Other Mechanisms

Branch classification composes directly with memory separation: counterfactual and extrapolation branches reside in forecast memory and never cross into verified memory until promotion, while corroborating outcomes are the canonical input to belief reaffirmation in verified memory, and contradictory outcomes are the canonical trigger for revision. The class is therefore the structural variable that drives cross-region traffic, and the memory separation mechanism enforces the constraints that the class implies.

Classification also composes with confidence governance. A confidence collapse trigger may be defined over the rate at which contradictory branches are generated, or over the disparity between corroborating and contradictory volume. Non-Executing Mode may restrict the forecasting engine to corroborating and contradictory branches only, suppressing counterfactual and extrapolation production while trust is being restored. The classification is the field that makes such policies expressible. Because classes are immutable canonical fields, policies that key on class are themselves replayable: given a recorded lineage of branch construction events, the set of branches admissible to any consumer at any moment is reconstructable without re-running the original computations, which is the property that enables after-the-fact audit of admissibility decisions.

Distinction From Prior Art

Prior planning systems generally treat all forward simulations as homogeneous nodes in a search tree, distinguished by score but not by epistemic character. Prior counterfactual reasoning systems treat counterfactuals as a separate computation, not as one class among several within a unified planning graph. Prior contradiction-handling systems are reactive: they detect a contradiction and trigger belief revision, but do not represent the contradictory line of reasoning as a first-class branch with admissibility constraints. Prior corroboration systems are typically embedded inside specific verifiers and are not exposed as a structural class in a general planning substrate.

The Branch Classification System is distinct because it unifies these characters into a single closed taxonomy applied at construction, makes the class immutable and canonical, and delegates admissibility to policy rather than burying it in consumer code. The result is that every consumer of the planning graph sees a class-filtered view consistent with its role, and the system as a whole exhibits provable separation between the kinds of reasoning that may inform commitment and the kinds that may not.

Disclosure Scope

The disclosure encompasses any forecasting engine that classifies branches at construction by epistemic character, records the classification immutably in canonical fields, and uses the classification to drive policy-defined admissibility for downstream consumers. The disclosure covers the four named classes and any subclass refinement, any classifier embodiment that produces a discrete class label and a justification record, and any policy language that expresses per-class admissibility rules.

Because classification is policy-governed and deterministic, it can be formally analyzed, audited, and certified. Different domains tune the per-class admissibility and lifecycle parameters through policy configuration without architectural change, making the same structural capability applicable to autonomous vehicles, companion AI, therapeutic agents, and enterprise systems.

The disclosure additionally encompasses methods of certifying class-correct admissibility through replay of recorded lineage, methods of detecting classifier drift over time by measuring the distribution of classification-confidence scores, and methods of evolving the taxonomy in a backward-compatible manner by introducing subclasses that inherit admissibility from their parent class. The disclosure includes the use of class-keyed admissibility as a structural barrier between speculative and commitment-grade reasoning in a unified planning substrate, which is the property from which the system's safety and auditability properties follow. The disclosure also covers embodiments in which classification labels are exposed to external auditors as part of the agent's published lineage, allowing third parties to verify that commitments were drawn only from admissible classes without requiring access to the agent's internal planning state.
