Cross-Topology Substrate Deployment: Identical Agent Structure Across All Substrates

by Nick Clark | Published March 27, 2026

Cross-topology substrate deployment is the property by which workloads expressed in the cognition-native execution platform run across distinct topology classes — clusters, fleets, meshes, and hybrids — without rewriting agent code, redefining state contracts, or surrendering audit and governance guarantees at topology boundaries. Each topology class imposes its own physical and operational constraints: clusters are tightly coupled groups of nodes under a single scheduler; fleets are loosely coordinated populations of like nodes with intermittent connectivity; meshes are peer-to-peer overlays with no central scheduler; hybrids combine the above. The execution platform abstracts these differences into a uniform field structure that every agent presents identically on every substrate, while routing decisions that cross topology boundaries are mediated by semantic routing rather than address-bound propagation, and per-class governance ensures that each substrate's policy regime is honored where the workload runs. The result is a workload that is simultaneously portable, auditable, and governed, as described in US 19/230,933.


Mechanism

The mechanism rests on three architectural commitments. First, every agent in the execution platform exposes an identical field structure regardless of the substrate on which it runs. The field structure consists of a stable set of named, typed slots — identity, capability envelope, state record, lineage chain, policy descriptor, and trust-zone assertion — that downstream protocol components rely on. Substrate-specific code is confined to a narrow adapter layer beneath the field structure, so that an agent migrated from a cluster substrate to an edge substrate retains the same slots with the same semantics, even though the underlying transport, scheduling, and storage are wholly different.
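As a concrete illustration, the field structure can be pictured as a fixed record of typed slots. The sketch below is a minimal Python rendering; the slot names follow the prose above, but every type and field shown is an illustrative assumption rather than the platform's actual schema.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FieldStructure:
        identity: str                   # stable agent identity, substrate-independent
        capability_envelope: frozenset  # capability descriptions the agent advertises
        state_record: bytes             # opaque serialized agent state
        lineage_chain: tuple            # hashes of prior states, oldest first
        policy_descriptor: str          # reference to the workload's own policies
        trust_zone: str                 # trust-zone assertion checked at boundaries
        schema_version: str = "canonical-v1"  # slot schema version (see Operating Parameters)

Because the record is identical on every substrate, only the adapter beneath it changes when the agent moves.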

Second, routing between agents is semantic rather than address-bound. A request originating in one agent and destined for another does not name a network address or a physical location; it names a semantic target — a capability description, a policy constraint, a trust-zone requirement — and the platform resolves the target to a concrete agent at delivery time. Resolution consults a distributed semantic directory that aggregates agent capability advertisements across all participating substrates. When a target spans topologies, the directory returns a path that traverses the appropriate boundary gateways, each of which performs a topology-class-specific routing operation.
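A minimal sketch of delivery-time resolution, assuming the hypothetical FieldStructure above and an in-memory list standing in for the distributed directory; the illustrative point is that the request names capabilities and constraints, never an address.

    def resolve(directory, required_caps, trust_zone, policy_ok):
        """Return agents whose advertisements satisfy the semantic target."""
        return [
            agent for agent in directory
            if required_caps <= agent.capability_envelope  # capability description
            and agent.trust_zone == trust_zone             # trust-zone requirement
            and policy_ok(agent.policy_descriptor)         # policy constraint
        ]

In a cross-topology resolution, the directory would additionally return the boundary-gateway path to each matching agent.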

Third, every cross-topology hop is recorded in a tamper-evident audit chain. The chain includes the originating substrate descriptor, the target substrate descriptor, the boundary gateway identity, the policies evaluated at the boundary, and the cryptographic chain head from each side. A workload that touches multiple topology classes during its execution produces a chain that records every boundary crossing, supporting after-the-fact audit of which substrates participated in which steps of the workload.
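A minimal sketch of one hop record, assuming SHA-256 hash chaining; the field names mirror the prose, and the JSON encoding is an illustrative choice, not the platform's wire format.

    import hashlib, json

    def record_hop(prev_head, origin, target, gateway_id, policies, origin_head, target_head):
        """Append one boundary crossing and return the new audit-chain head."""
        entry = json.dumps({
            "origin_substrate": origin,
            "target_substrate": target,
            "gateway": gateway_id,
            "policies_evaluated": sorted(policies),
            "origin_chain_head": origin_head,
            "target_chain_head": target_head,
        }, sort_keys=True)
        # Folding the previous head into the hash makes any rewrite of
        # earlier hops detectable from the current head alone.
        return hashlib.sha256((prev_head + entry).encode()).hexdigest()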

Per-class governance is enforced by attaching a substrate-class policy regime to each topology and requiring every agent execution and every boundary crossing to evaluate the relevant regime in addition to the workload's own policies. Cluster-class regimes typically express tenancy, quota, and scheduler-priority rules; fleet-class regimes express version-skew and connectivity-loss tolerance rules; mesh-class regimes express peer-trust and gossip-rate rules; hybrid regimes are compositions. Evaluation outcomes are recorded in the audit chain, so a workload that runs successfully under cluster governance but is denied at a mesh boundary produces an audit-visible denial that the workload owner and the substrate operators can both inspect.
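A minimal sketch of boundary evaluation, assuming hypothetical regime objects that expose a name and an evaluate(action) predicate. The substrate-class regime is checked alongside the workload's own policies, and the outcome is recorded whether or not the action proceeds, which is what makes a denial audit-visible.

    def evaluate_boundary(action, workload_policies, class_regime, audit_log):
        outcomes = {p.name: p.evaluate(action) for p in workload_policies}
        outcomes[class_regime.name] = class_regime.evaluate(action)
        permitted = all(outcomes.values())
        # Record the evaluation even when the action is denied.
        audit_log.append({"action": action, "outcomes": outcomes, "permitted": permitted})
        return permitted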

Operating Parameters

The field structure is parameterized by the slot schema version. Embodiments use a single canonical schema with reserved extension points for future slots, or admit per-deployment schema variants with explicit migration paths between versions. The platform refuses to admit an agent whose slot schema is not registered with a translator chain back to the canonical schema, ensuring that semantic routing remains well-typed across topology boundaries.
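A minimal sketch of the admission check, assuming a hypothetical registry that maps each schema version to the version its translator targets; an agent is admitted only if its version reaches the canonical schema by following registered translators.

    TRANSLATORS = {"tenant-v2": "tenant-v1", "tenant-v1": "canonical-v1"}  # illustrative registry

    def admissible(schema_version, canonical="canonical-v1"):
        seen = set()
        while schema_version != canonical:
            if schema_version in seen or schema_version not in TRANSLATORS:
                return False  # no translator chain back to the canonical schema
            seen.add(schema_version)
            schema_version = TRANSLATORS[schema_version]
        return True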

The semantic routing directory is parameterized by aggregation scope and consistency model. Embodiments include strongly consistent global directories backed by Raft or Paxos, eventually consistent gossip-based directories suited to fleet and mesh deployments, hierarchical directories that aggregate per-cluster views into a global view, and federated directories that maintain sovereign per-zone views with cross-zone resolution at boundary gateways. The choice trades resolution latency, partition tolerance, and operator sovereignty.
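The choice can be pictured as a single configuration axis; the names and trade-off notes below are illustrative, not an exhaustive taxonomy.

    from enum import Enum

    class DirectoryModel(Enum):
        STRONG_GLOBAL = "raft-or-paxos"      # lowest staleness, weakest partition tolerance
        GOSSIP = "eventually-consistent"     # suits fleets and meshes, stale reads possible
        HIERARCHICAL = "per-cluster-rollup"  # cluster views aggregated into a global view
        FEDERATED = "sovereign-zones"        # cross-zone resolution at boundary gateways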

Boundary gateways are parameterized by their gateway class and their policy regime. Cluster-to-fleet gateways typically perform batching, version negotiation, and connectivity-loss buffering. Cluster-to-mesh gateways typically perform peer authentication, gossip-rate shaping, and mesh-side credential issuance. Mesh-to-mesh gateways perform inter-zone trust translation. Hybrid gateways compose the above. Each gateway exposes a policy evaluation point at which the per-class governance regime is applied.
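A minimal sketch of the shared gateway shape, assuming hypothetical method names; each concrete gateway performs its topology-pair-specific routing work and then applies the per-class regime at a single evaluation point.

    class BoundaryGateway:
        def __init__(self, gateway_class, class_regime):
            self.gateway_class = gateway_class  # e.g. "cluster-to-fleet"
            self.class_regime = class_regime

        def cross(self, message):
            routed = self.route(message)  # batching, version negotiation, etc.
            if not self.class_regime.evaluate(routed):
                raise PermissionError("denied by per-class regime")  # recorded in the audit chain
            return routed

        def route(self, message):
            raise NotImplementedError  # topology-class-specific in concrete gateways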

The audit chain is parameterized by chain construction and aggregation cadence. Embodiments include Merkle-style chains rooted at the agent's home substrate, accumulator-style constructions that admit constant-size membership proofs, and federated chains that maintain per-substrate roots with periodic checkpoint exchange. The cadence at which roots are exchanged across substrates is a tunable parameter that trades audit freshness against cross-substrate coordination overhead.
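A minimal sketch of one checkpoint round, assuming hypothetical substrate objects that expose their current chain root; running the round on a shorter interval buys audit freshness at the cost of coordination traffic.

    def exchange_checkpoints(substrates):
        roots = {s.name: s.chain_root() for s in substrates}  # gather every per-substrate root
        for s in substrates:
            s.record_checkpoint(roots)  # each substrate commits all peer roots to its own chain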

Governance regime composition is parameterized by composition policy. Embodiments include strict-conjunction composition in which every applicable regime must permit an action for it to proceed, priority-ordered composition in which a higher-class regime overrides a lower-class regime where they conflict, and explicit-veto composition in which any regime may unilaterally deny but only a designated regime may unilaterally permit. The composition policy is itself a versioned object bound to the workload at scheduling time.
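A minimal sketch of the three composition policies, assuming each regime's decision has already been reduced to "permit" or "deny"; the parameter names are illustrative.

    def compose(decisions, policy, priority_order=(), designated=None):
        """decisions maps regime name -> "permit" or "deny"."""
        if policy == "strict-conjunction":
            return all(d == "permit" for d in decisions.values())
        if policy == "priority-ordered":
            for name in priority_order:                     # highest-priority regime first
                if name in decisions:
                    return decisions[name] == "permit"      # it overrides lower regimes
            return False
        if policy == "explicit-veto":
            if any(d == "deny" for d in decisions.values()):
                return False                                # any regime may unilaterally deny
            return decisions.get(designated) == "permit"    # only the designated regime may permit
        raise ValueError(f"unknown composition policy: {policy}")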

Migration parameters control how an agent moves between substrates. Embodiments include hot migration with state checkpointing, warm migration with state snapshot and brief downtime, and cold migration with explicit teardown and reinitialization. The migration policy specifies which slots travel with the agent unchanged, which are recomputed on the destination substrate, and which are revoked across the boundary.
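A minimal sketch of slot disposition at migration time, assuming a hypothetical per-slot policy and a recompute callback supplied by the destination substrate's adapter.

    def migrate_slots(slots, policy, recompute):
        migrated = {}
        for name, value in slots.items():
            disposition = policy[name]            # "carry", "recompute", or "revoke"
            if disposition == "carry":
                migrated[name] = value            # travels with the agent unchanged
            elif disposition == "recompute":
                migrated[name] = recompute(name)  # rebuilt on the destination substrate
            # "revoke": the slot does not cross the boundary at all
        return migrated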

Alternative Embodiments

A first alternative embodiment deploys a workload across a cluster of GPU-equipped training nodes (cluster topology), a fleet of edge inference devices (fleet topology), and a peer-to-peer mesh of partner organizations (mesh topology). Training agents on the cluster produce model artifacts that are anchored and routed semantically to inference agents on the fleet, where they are cached locally and consulted under fleet-class governance; partner-organization agents on the mesh consume inference results under mesh-class peer-trust rules. The same agent definitions run on all three substrates without rewriting, and the audit chain records every cross-topology hop.

A second alternative embodiment is a multi-tenant SaaS deployment in which tenant workloads are statically partitioned across cluster substrates by region but draw on shared mesh-resident knowledge graphs and federated data substrates. Each tenant's agents present the canonical field structure regardless of which substrate they currently occupy; semantic routing carries each request through the appropriate boundary gateway to enforce regional, tenant, and shared-resource policy regimes in composition.

A third alternative embodiment is a regulated multi-jurisdictional deployment in which each jurisdiction operates its own substrate under its own governance regime. The platform's sovereignty primitives ensure that data and computations subject to jurisdictional residency requirements never traverse a boundary that the residency regime forbids; semantic routing returns a path that respects residency, and where no compliant path exists the request fails closed with an audit-visible denial.

A fourth alternative embodiment is a mobile or vehicular fleet in which agents migrate between clusters, fleets, and meshes as connectivity changes. A vehicle in a depot is on the cluster substrate; in transit it is on the fleet substrate; when it joins an ad-hoc peer group of nearby vehicles it is on the mesh substrate. The same agent persists across all transitions; only the substrate adapter beneath the field structure changes.

A fifth alternative embodiment integrates the cross-topology mechanism with content anchoring such that artifacts consulted by agents are themselves portable across topology boundaries with structural identity preserved. Anchored artifacts produced on the cluster substrate are consulted on the mesh substrate without re-anchoring and without trusting the mesh substrate's bookkeeping; the chain provides cross-topology audit for both the agent execution and the artifact consultation.

A sixth alternative embodiment is a development-to-production progression in which the same workload runs unchanged on a developer laptop (degenerate single-node cluster), in a staging cluster (cluster topology), in a canary fleet (fleet topology), and in a production hybrid (hybrid topology). The progression preserves every guarantee — identity, audit, governance — across each promotion without requiring environment-specific rewriting.

Composition

The mechanism is composed of seven cooperating components. The first is the field-structure schema and its translator chain, which defines the canonical agent slot set and admits versioned variants. The second is the substrate adapter layer, a per-topology shim that maps the field-structure operations onto the topology's native primitives — scheduler API for clusters, gossip and connectivity primitives for fleets, peer protocol for meshes, and composition for hybrids.
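A minimal sketch of the adapter's substrate-agnostic upper interface, assuming hypothetical method names; concrete adapters map these calls onto the topology's native primitives without leaking substrate detail upward.

    from abc import ABC, abstractmethod

    class SubstrateAdapter(ABC):
        @abstractmethod
        def place(self, agent): ...              # scheduler API, fleet enrollment, or peer join
        @abstractmethod
        def deliver(self, message, target): ...  # in-cluster RPC, store-and-forward, or gossip
        @abstractmethod
        def persist(self, state_record): ...     # cluster volume, device storage, or replicated peers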

The third component is the semantic routing directory, which aggregates agent capability advertisements and resolves semantic targets to concrete agents. The fourth is the boundary gateway, which implements topology-class-specific cross-substrate routing and exposes a policy evaluation point. The fifth is the per-class governance regime engine, which evaluates the applicable regime against an action and produces an outcome.

The sixth component is the audit chain manager, which maintains the per-substrate accumulator and exchanges checkpoints across boundaries. The seventh is the migration controller, which executes hot, warm, or cold migration of agents between substrates while preserving slot semantics and producing a migration record in the audit chain.

These components are connected by stable, content-addressed interfaces. The field-structure schema is referenced by every component; the adapter, gateway, and migration controller are substrate-specific in their lower halves but expose substrate-agnostic upper interfaces; the directory, regime engine, and chain manager are substrate-agnostic. This decomposition admits substrate-specific optimization beneath the field structure without altering the cross-topology guarantees the system as a whole provides.

Prior Art

Container orchestration systems such as Kubernetes provide a uniform scheduling abstraction over a cluster of nodes but do not extend across topology classes. Federations of clusters require additional layers (cluster federation, multicluster service mesh) that re-introduce substrate-specific concerns at the federation seams. The cross-topology mechanism described here is not an orchestrator but an agent-structure abstraction that is preserved by semantic routing across heterogeneous substrates regardless of which orchestrator manages each one.

Service meshes such as Istio and Linkerd provide uniform traffic management within a cluster and, with extensions, across federated clusters. They route by network identity (service name, mTLS identity) rather than by semantic capability, and they do not natively address fleet or mesh topologies in which scheduler-controlled service identity is not available. The semantic routing primitive here resolves capability-described targets to concrete agents at delivery time without requiring scheduler-controlled identity.

Edge computing frameworks such as KubeEdge or AWS Greengrass extend cluster orchestration toward the edge but typically partition the workload between cloud and edge halves whose definitions diverge. The cross-topology mechanism preserves identical agent definitions across cloud, edge, and intermediate substrates with the substrate-specific concerns confined to the adapter layer.

Peer-to-peer overlay systems such as IPFS and libp2p provide mesh topology primitives but do not natively integrate with cluster or fleet substrates and do not provide per-class governance regime composition. The mesh-class governance described here treats mesh substrates as first-class participants in a multi-class composition.

Federated learning frameworks address cross-organization training under privacy constraints but typically assume a fixed topology class (a coordinator and a fleet of clients) and do not generalize to arbitrary cross-topology workloads. The cross-topology mechanism subsumes federated learning as one of many possible deployment patterns expressible within the same agent structure.

Workflow orchestration systems such as Airflow or Argo orchestrate task graphs across heterogeneous infrastructure but require per-task substrate-specific implementations and do not provide tamper-evident cross-topology audit. The mechanism here moves the substrate concern beneath the agent structure rather than treating each task as a substrate-specific implementation.

Disclosure Scope

The disclosure within US 19/230,933 covers cross-topology substrate deployment as a structural property of the cognition-native execution platform, including the canonical field-structure schema and its translator chain, the substrate adapter layer with its substrate-agnostic upper interface, semantic routing as the inter-agent addressing primitive, boundary gateways with topology-class-specific routing and policy evaluation, per-class governance regimes with versioned composition policies, tamper-evident audit chains for cross-topology hops, and migration controllers that preserve slot semantics across substrate transitions.

The scope extends to systems in which the mechanism is applied to cluster, fleet, mesh, and hybrid topologies in any combination, to systems in which agents migrate among these topologies during their execution lifetime, to systems in which workloads compose multiple governance regimes under explicit composition policies, and to systems in which artifacts anchored under separate disclosures are consulted across topology boundaries by agents structured under this disclosure.

The scope extends further to deployments in regulated, multi-jurisdictional, and privacy-sensitive domains, to development-to-production progressions in which the same agent definitions move unchanged across substrate classes, and to integration patterns in which the platform interoperates with existing orchestrators, service meshes, edge frameworks, and overlay networks via the substrate adapter layer.

The scope does not extend to systems that require substrate-specific agent rewriting at topology boundaries, that route by network identity rather than by semantic capability, that lack per-class governance composition, or that lack tamper-evident audit of cross-substrate hops.

Invented by Nick Clark
Founding Investors: Anonymous, Devin Wilkie