Execution Graph Manager: Structured Lineage of Agent Reasoning and Transformation

by Nick Clark | Published March 27, 2026

The execution graph manager structures every agent computation as a directed acyclic graph (DAG) of governed steps, where each node represents a discrete, admissibility-checked transformation and each edge encodes a lineage-bound dependency between transformations. Within the cognition-native execution platform of US 19/230,933, the manager is not a logging facility appended after the fact; it is the substrate within which execution occurs. A computation that is not expressible as a cycle-free, lineage-bound graph of admissible steps cannot, by construction, run on this platform. This article describes the mechanism in white-paper depth, enumerates its operating parameters, surveys alternative embodiments, situates it relative to prior orchestration art, and defines the disclosed claim scope.


Mechanism

The execution graph manager realizes a structural invariant: every unit of agent work corresponds to exactly one node in a directed acyclic graph, and every causal relationship between units corresponds to exactly one edge. Nodes are typed by the class of transformation they perform: prompt evaluation, tool invocation, delegation to a sub-agent, mutation of a shared field, or transition between trust zones. Each node carries an admissibility predicate that must evaluate to true against the current platform state before the node is permitted to execute. The predicate is not advisory; the runtime refuses to schedule a node whose predicate fails, and it records the refusal as an immutable graph annotation so that downstream verifiers can confirm the refusal occurred for the stated reason.
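
As a concrete illustration of the typing and refusal-recording behavior, consider the following Python sketch. The NodeType enumeration, the predicate signature, and the annotation fields are illustrative assumptions, not the platform's actual interfaces.

    from dataclasses import dataclass, field
    from enum import Enum, auto
    from typing import Callable

    class NodeType(Enum):
        PROMPT_EVALUATION = auto()
        TOOL_INVOCATION = auto()
        DELEGATION = auto()
        FIELD_MUTATION = auto()
        ZONE_TRANSITION = auto()

    @dataclass
    class Node:
        node_type: NodeType
        payload: dict
        # Admissibility predicate: (node, platform_state) -> bool
        predicate: Callable[["Node", dict], bool]

    @dataclass
    class GraphManager:
        refusals: list = field(default_factory=list)

        def schedule(self, node: Node, platform_state: dict) -> bool:
            """Refuse to schedule any node whose predicate fails, and record
            the refusal as an immutable annotation for downstream verifiers."""
            if not node.predicate(node, platform_state):
                self.refusals.append({
                    "event": "refusal",
                    "node_type": node.node_type.name,
                    "reason": "admissibility predicate evaluated false",
                })
                return False
            return True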

Construction of the graph proceeds incrementally as the agent reasons. When an agent decides to invoke a tool, the runtime first allocates a candidate node, binds it to the lineage of the parent reasoning step through a content-addressed hash reference, and submits the candidate to the admissibility evaluator. The evaluator inspects the parent lineage, the requested transformation type, the trust zone of both the parent and the proposed target, and any policies that travel with the data the node would touch. Only after the evaluator returns an affirmative decision is the node committed to the graph and made eligible for scheduling. The commit operation is itself recorded as a graph event, so that the decision boundary between candidate and committed is auditable.
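
A minimal sketch of the candidate-to-commit path, under assumed field names and a toy zone-policy table (none of which are specified by the disclosure), might read:

    import hashlib
    import json

    def allocate_candidate(parent_hash, transform_type, zone, body):
        """Bind a candidate node to its parent lineage by content-addressed
        hash reference before any admissibility decision is made."""
        return {"parent": parent_hash, "type": transform_type,
                "zone": zone, "body": body}

    def evaluate(candidate, committed, zone_policies):
        """Toy admissibility evaluator: checks that the parent lineage exists
        and that the requested transformation type is allowed in the zone."""
        if candidate["parent"] is not None and candidate["parent"] not in committed:
            return False                       # unknown lineage: inadmissible
        allowed = zone_policies.get(candidate["zone"], {}).get("allowed", ())
        return candidate["type"] in allowed

    def commit(candidate, committed, events):
        """Commit an admissible candidate; the commit itself is a graph event,
        so the candidate/committed boundary stays auditable."""
        h = hashlib.sha256(json.dumps(candidate, sort_keys=True).encode()).hexdigest()
        committed[h] = candidate
        events.append({"event": "commit", "node": h})
        return h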

Cycle freedom is preserved structurally rather than by post hoc detection. Each node references its predecessors by hash, and a hash cannot be computed for a node until all predecessors are committed. Because hashes are content-addressed and predecessors are immutable once committed, the graph cannot acquire a back-edge: a node attempting to reference a descendant would need that descendant's hash, which does not exist until the descendant itself is committed, by which time the present node's own hash is already fixed. The DAG property is therefore not a runtime check but a consequence of the commit ordering, comparable to the way a Merkle DAG in a content-addressed store cannot, by construction, contain cycles.
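
The commit-ordering argument can be made concrete in a few lines. In this sketch (field names assumed), a node's hash covers its predecessors' hashes, so a reference to a not-yet-committed descendant is simply unexpressible:

    import hashlib
    import json

    def node_hash(predecessor_hashes, body):
        """A node's identity covers its predecessors' identities."""
        material = json.dumps({"preds": sorted(predecessor_hashes), "body": body},
                              sort_keys=True)
        return hashlib.sha256(material.encode()).hexdigest()

    root = node_hash([], {"op": "prompt_evaluation"})
    child = node_hash([root], {"op": "tool_invocation"})
    # A back-edge from `root` to `child` cannot be formed: `child` did not
    # exist when `root` was hashed, and `root` is immutable once committed.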

Lineage binding extends beyond simple parent-child references. Each node carries a lineage envelope summarizing the full ancestral set of mutations, delegations, and zone transitions that contributed to its inputs. The envelope is computed deterministically from the predecessor envelopes and the node's own transformation, so that two independent verifiers presented with the same graph fragment will compute identical envelopes. This deterministic envelope is what allows downstream consumers, including auditors, replay engines, and fallback rehydration logic, to reason about the provenance of any node without re-executing the upstream computation.
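
One way to realize a deterministic envelope, sketched below under assumed field names, is to union the predecessor envelopes' ancestral sets, fold in the node's own transformation, and digest a canonical encoding, so that independent verifiers necessarily converge:

    import hashlib
    import json

    KINDS = ("mutations", "delegations", "zone_transitions")

    def compute_envelope(pred_envelopes, transform):
        """Deterministically derive a node's lineage envelope from its
        predecessors' envelopes and its own transformation."""
        merged = {k: set() for k in KINDS}
        for env in pred_envelopes:
            for k in KINDS:
                merged[k] |= set(env.get(k, ()))
        # transform["kind"] is one of KINDS, e.g. "mutations"; pure compute
        # nodes carry a kind outside KINDS and add nothing to the summary.
        if transform["kind"] in KINDS:
            merged[transform["kind"]].add(transform["id"])
        envelope = {k: sorted(merged[k]) for k in KINDS}
        envelope["digest"] = hashlib.sha256(
            json.dumps({k: envelope[k] for k in KINDS}, sort_keys=True).encode()
        ).hexdigest()
        return envelope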

Mutation events, delegation records, and zone transitions are first-class node types rather than side effects of other node types. A mutation node names the field being mutated, the prior committed value (by hash), the proposed new value, and the policy that authorized the change. A delegation node names the delegating agent, the delegate, the scope of authority transferred, and the expiry condition. A zone transition node names the source zone, the destination zone, and the gating policy that admitted the transition. Because these events are nodes rather than annotations, they participate in the same admissibility, cycle-freedom, and lineage-binding guarantees as compute nodes.
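
Rendered as typed records, the three event node classes might look like the following; the field names track the paragraph above, while the value encodings are assumptions:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MutationNode:
        field_name: str
        prior_value_hash: str          # last committed value, by hash
        proposed_value: bytes
        authorizing_policy_hash: str

    @dataclass(frozen=True)
    class DelegationNode:
        delegating_agent: str
        delegate: str
        authority_scope: tuple         # e.g. ("read:ledger", "invoke:search")
        expiry_condition: str          # e.g. "after:2026-04-01T00:00:00Z"

    @dataclass(frozen=True)
    class ZoneTransitionNode:
        source_zone: str
        destination_zone: str
        gating_policy_hash: str

    # Because these are nodes rather than annotations, they flow through the
    # same commit path as compute nodes and inherit admissibility checking,
    # cycle freedom, and lineage binding.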

The manager exposes the graph through a query interface that returns sub-graphs scoped by lineage, by zone, by time range, or by transformation type. Queries are evaluated against the immutable committed graph, so query results are reproducible: the same query against the same graph state yields identical results regardless of which node executes the query. This reproducibility is the foundation on which higher-level capabilities, including replay-based verification and fallback rehydration, are built.
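
A sketch of the scoped query interface, assuming committed nodes carry zone, type, and committed_at fields:

    def query(committed, *, zone=None, node_type=None,
              time_range=None, lineage_prefix=None):
        """Return the sub-graph of committed nodes matching every supplied
        filter. Evaluation is over the immutable committed graph, so the
        same query against the same graph state is reproducible anywhere."""
        def matches(h, node):
            if zone is not None and node["zone"] != zone:
                return False
            if node_type is not None and node["type"] != node_type:
                return False
            if time_range is not None and not (
                    time_range[0] <= node["committed_at"] <= time_range[1]):
                return False
            if lineage_prefix is not None and not h.startswith(lineage_prefix):
                return False
            return True

        return {h: n for h, n in committed.items() if matches(h, n)}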

Operating Parameters

The graph manager is parameterized along several axes that an implementer may tune without altering the structural guarantees. The hash function used for content addressing is configurable; reference embodiments use SHA-256 and SHA3-512, but any collision-resistant function whose output length makes accidental collisions negligible at the expected graph size is acceptable. The admissibility predicate language is similarly configurable: embodiments include a restricted Datalog dialect, a pure-functional expression language with bounded evaluation cost, and a WebAssembly module interface for predicates supplied by trust-zone owners.

Node retention is a tunable parameter. In default operation the committed graph is retained in full; in resource-constrained embodiments, a lineage-preserving compaction may elide the bodies of nodes older than a configured horizon while retaining their hashes and envelopes. Compaction is itself a graph event, so the elision is auditable, and the elided bodies may be retrieved from cold storage if a downstream query requires them. The compaction policy is bounded so that it cannot remove a node whose hash is referenced by any uncompacted descendant.
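
A lineage-preserving compaction pass, sketched under the assumption that nodes record their predecessor hashes and commit timestamps, might look like this:

    def compact(committed, horizon, events):
        """Elide the bodies of nodes older than the horizon while retaining
        their hashes and envelopes. A node is skipped if any uncompacted
        descendant still references its hash, and every elision is recorded
        as a graph event so the compaction itself is auditable."""
        referenced = set()
        for node in committed.values():
            if node.get("body") is not None:        # uncompacted descendant
                referenced.update(node["preds"])
        for h, node in committed.items():
            if (node["committed_at"] < horizon
                    and node.get("body") is not None
                    and h not in referenced):
                node["body"] = None                 # retrievable from cold storage
                events.append({"event": "compaction", "node": h})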

Scheduling latency between node commit and node execution is bounded by a configurable parameter expressing the maximum time the runtime may hold an admissible committed node before dispatching it. The bound exists so that liveness can be reasoned about: an operator presented with a graph showing a committed admissible node that has not executed within the bound knows that the runtime itself is faulty rather than merely slow. The bound does not affect correctness; it affects only the operator's ability to distinguish faults from delays.
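
The operator-side check is then trivial, as in this sketch (the bound value is illustrative):

    DISPATCH_BOUND_SECONDS = 5.0    # configurable per deployment; value illustrative

    def classify(node, now):
        """Distinguish a faulty runtime from a slow one: an admissible,
        committed node held past the dispatch bound signals a runtime fault
        rather than mere delay. Correctness is unaffected either way."""
        if (node["state"] == "committed"
                and now - node["committed_at"] > DISPATCH_BOUND_SECONDS):
            return "runtime-fault"
        return "within-bound"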

The fan-out of delegation nodes is bounded so that an agent cannot instantiate an unbounded set of delegates in a single step. The bound is per-zone configurable and is itself recorded in the graph as a policy node, so that the historical bound applicable to any past delegation is recoverable. Fan-in is unbounded by design, because consolidation of many ancestral lineages is a structural feature of agent reasoning rather than a resource hazard.
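
The fan-out check itself reduces to a per-zone lookup, as in the sketch below; in the platform the applicable bound is recorded as a policy node in the graph, which here is abbreviated to a dictionary:

    def delegation_fanout_admissible(delegations_this_step, zone_policy):
        """Reject a delegation that would exceed the zone's fan-out bound.
        Fan-in needs no counterpart check: consolidating many ancestral
        lineages is unbounded by design."""
        return delegations_this_step < zone_policy["max_delegation_fanout"]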

Alternative Embodiments

The graph manager admits several embodiments that vary in storage topology while preserving the structural guarantees. A single-process embodiment maintains the committed graph in a local append-only log with an in-memory index; this embodiment is suitable for edge devices and for development environments. A federated embodiment partitions the graph across nodes by lineage prefix, with cross-partition edges recorded as references that resolve through a content-addressed exchange protocol; this embodiment is suitable for multi-party deployments where no single party is trusted to hold the entire graph. A fully replicated embodiment maintains the graph on a quorum of replicas, with commits ordered by a Byzantine-fault-tolerant consensus protocol; this embodiment is suitable for adversarial settings where individual replicas may be compromised.

The admissibility evaluator may be embodied as a centralized service called by the runtime, as a sidecar co-located with each runtime instance, or as a library linked into the runtime process. The structural guarantees do not depend on the embodiment chosen, because the evaluator is treated as a pure function of the predicate, the candidate node, and the committed graph state. An evaluator that returns inconsistent results for the same inputs is itself a fault, detectable by replay against the committed graph.

Graph traversal may be embodied as eager materialization of sub-graphs into a relational store, as lazy traversal over the append-only log, or as a streaming interface that emits nodes as they match a query. Implementers select among these embodiments based on the query workload of the surrounding system, with no impact on the manager's correctness.

Embodiments differ in how they handle node bodies that exceed inline storage thresholds. Small bodies are stored inline in the graph; large bodies are stored in an external content-addressed object store, with the graph node holding only the body's hash. The structural guarantees are preserved in both cases because the hash, not the body, is what binds the node into the lineage.
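
A sketch of the inline/external split, with an illustrative threshold:

    import hashlib

    INLINE_THRESHOLD_BYTES = 64 * 1024      # illustrative threshold

    def store_body(body, object_store):
        """Keep small bodies inline; spill large bodies to a content-addressed
        object store, with the node holding only the body's hash. Either way
        the hash, not the body, binds the node into the lineage."""
        digest = hashlib.sha256(body).hexdigest()
        if len(body) <= INLINE_THRESHOLD_BYTES:
            return {"body_hash": digest, "body": body}
        object_store[digest] = body
        return {"body_hash": digest, "body": None}   # resolvable by hash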

Composition with Other Platform Primitives

The execution graph manager composes with the trust-zone subsystem so that every zone transition is recorded as a graph node and every admissibility check consults the zone of both the parent and the proposed transformation. A zone-violating transformation cannot be committed because its admissibility predicate fails; the failure is itself recorded as a graph annotation, so that a participant attempting to perform the violation cannot deny the attempt.

The manager composes with the policy-bound field subsystem so that mutations to governed fields appear in the graph as mutation nodes carrying the authorizing policy hash. A field mutation that is not accompanied by a valid policy reference fails admissibility; a mutation that references a revoked policy fails admissibility against the current policy state. The graph thus records not only what was mutated but under what policy, and the policy itself is referenced by hash so that historical mutations remain interpretable even after the policy has been retired.
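
The two failure modes reduce to a single lookup against the current policy state, as in this sketch (field names assumed):

    def mutation_admissible(mutation, policy_store):
        """A mutation must reference an authorizing policy by hash, the policy
        must exist, and it must not be revoked in the current policy state.
        The hash reference keeps historical mutations interpretable even
        after the policy is retired."""
        policy = policy_store.get(mutation.get("authorizing_policy_hash"))
        return policy is not None and not policy["revoked"]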

The manager composes with the fallback rehydration subsystem so that recovery from a primary failure proceeds by traversing the committed graph from the last known-good node forward, replaying admissible transformations against the rehydrated state. Because the graph is content-addressed and the admissibility predicates are pure functions of graph state, the rehydrated execution is bit-for-bit identical to the original up to the point of divergence, and the divergence itself is recorded as a graph event.
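
A replay-driven rehydration sketch, assuming committed nodes record predecessor hashes and that apply is a pure function of state and node:

    def rehydrate(committed, last_good, apply):
        """Replay committed transformations forward from the last known-good
        node in dependency order. Because nodes are content-addressed and
        `apply` is pure, the replay is deterministic up to any divergence."""
        succ = {}
        for h, node in committed.items():
            for p in node["preds"]:
                succ.setdefault(p, []).append(h)

        # Collect the descendants of the last known-good node (inclusive).
        descendants, stack = {last_good}, [last_good]
        while stack:
            for s in succ.get(stack.pop(), []):
                if s not in descendants:
                    descendants.add(s)
                    stack.append(s)

        # Kahn-style topological replay restricted to the descendant set.
        indegree = {h: sum(p in descendants for p in committed[h]["preds"])
                    for h in descendants}
        ready, state = [last_good], {}
        while ready:
            ready.sort()                     # canonical order for determinism
            h = ready.pop(0)
            state = apply(state, committed[h])
            for s in succ.get(h, []):
                if s in descendants:
                    indegree[s] -= 1
                    if indegree[s] == 0:
                        ready.append(s)
        return state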

The manager composes with the cryptographic commit subsystem so that the graph head is periodically anchored to an external durable medium. The anchor is a hash of the graph head; an external observer presented with the anchor and any sub-graph can verify that the sub-graph was part of the graph at the time of anchoring without seeing the rest of the graph. This selective disclosure property is the basis on which the platform supports audit by parties who are not entitled to see the full execution history.
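
Verification against an anchor reduces to recomputing hashes along a disclosed predecessor chain, as in this sketch (the chain encoding is an assumption):

    import hashlib
    import json

    def node_hash(preds, body):
        return hashlib.sha256(json.dumps({"preds": sorted(preds), "body": body},
                                         sort_keys=True).encode()).hexdigest()

    def verify_inclusion(anchor, chain):
        """`chain` runs from the disclosed node up to the graph head; each
        entry is {"preds": [...], "body": ...}. The verifier recomputes each
        hash, checks it appears among its successor's predecessors, and checks
        the head's hash against the external anchor. Nodes off the chain are
        never seen: selective disclosure."""
        for lower, upper in zip(chain, chain[1:]):
            if node_hash(lower["preds"], lower["body"]) not in upper["preds"]:
                return False
        head = chain[-1]
        return node_hash(head["preds"], head["body"]) == anchor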

Prior Art and Distinction

Conventional workflow engines, including Apache Airflow, Prefect, and Dagster, represent computations as DAGs of tasks. The execution graph manager differs from these systems in three structural respects. First, the DAG is constructed at agent reasoning time rather than authored by a developer; nodes are created as the agent decides what to compute, not declared in advance. Second, each node carries an admissibility predicate evaluated against the runtime state, where conventional workflow engines schedule any task whose dependencies are satisfied. Third, the graph itself is the durable artifact of execution; conventional engines treat the DAG as a specification and emit logs or metrics as the artifact.

Provenance systems such as W3C PROV-DM and the PROV-O ontology describe completed computations as directed graphs of activities, entities, and agents. The graph manager differs in that the graph is authoritative, not descriptive: there is no separate execution that the graph documents, because the graph is the execution. A computation that does not appear in the graph did not occur, and a computation that appears in the graph is, by construction, admissible.

Distributed ledger systems use content-addressed append-only structures to record state transitions. The graph manager borrows the content-addressed and append-only properties but differs in that nodes are not totally ordered into a chain; they form a partial order reflecting actual causal dependencies, which permits parallel commit of independent nodes and avoids the throughput ceiling imposed by global serialization.

Disclosure Scope

This disclosure describes and claims a method and apparatus for managing agent execution as a directed acyclic graph of admissibility-checked nodes, wherein each node is bound by content-addressed reference to its lineage, wherein cycle freedom is a consequence of commit ordering rather than a runtime check, and wherein mutation events, delegation records, and zone transitions are first-class node types subject to the same admissibility and lineage-binding guarantees as compute nodes. The disclosure further claims embodiments in which the graph is partitioned, replicated, or compacted; in which the admissibility evaluator is embodied as a service, sidecar, or library; and in which the graph head is cryptographically anchored to an external durable medium for selective disclosure to non-privileged auditors. The scope of the disclosure is defined by the claims of US 19/230,933 and is not limited by any specific embodiment, language, hash function, predicate dialect, or storage topology described herein.
