Horizontally Composable Protocol Stack: Independent Layers Operating in Parallel

by Nick Clark | Published March 27, 2026

The memory-native protocol stack defined in Provisional Application 64/050,895 organizes its functions as horizontally composable layers rather than the strictly stratified layering inherited from the OSI lineage. Each layer attaches to and detaches from the agent-resident memory substrate as an independent unit, consumes the same canonical data view in parallel, appends its own trace into a shared lineage record, and remains subject to its own per-layer governance policy. A node may omit any layer for which it lacks the required capability or authority and still participate in the protocol; the remaining layers do not break, do not stall waiting for absent peers, and do not silently fall back to weaker behavior. Composability is therefore a structural property of the stack itself, not a deployment convention.


Mechanism

The horizontally composable stack treats every protocol function as a discrete layer module that binds to a uniform agent-resident memory interface. The interface exposes the current canonical content of the agent's working memory, a typed lineage log that supports append-only writes, and a capability descriptor that identifies which functions the host node is authorized and equipped to perform. Each layer module declares, on attachment, the schema of the memory views it intends to read, the schema of the lineage entries it intends to write, and the per-layer governance predicate under which its writes are admissible. The agent runtime validates these declarations against the capability descriptor before the layer is permitted to attach, and re-validates whenever the descriptor changes.
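The attachment handshake described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the class and field names (`CapabilityDescriptor`, `LayerDeclaration`, `AgentRuntime`) are assumptions chosen for clarity, and schemas are modeled as plain string sets rather than typed schema objects.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class CapabilityDescriptor:
    """Functions the host node is authorized and equipped to perform."""
    allowed_read_schemas: frozenset   # memory views the node may expose
    allowed_write_schemas: frozenset  # lineage entry types the node may emit

@dataclass
class LayerDeclaration:
    """What a layer module declares on attachment (illustrative fields)."""
    layer_id: str
    read_schemas: frozenset               # memory views it intends to read
    write_schemas: frozenset              # lineage entries it intends to write
    governance_predicate: Callable        # admissibility check for its writes

class AgentRuntime:
    def __init__(self, descriptor: CapabilityDescriptor):
        self.descriptor = descriptor
        self.attached: dict[str, LayerDeclaration] = {}

    def attach(self, decl: LayerDeclaration) -> bool:
        """Validate declared schemas against the capability descriptor
        before the layer is permitted to attach."""
        ok = (decl.read_schemas <= self.descriptor.allowed_read_schemas
              and decl.write_schemas <= self.descriptor.allowed_write_schemas)
        if ok:
            self.attached[decl.layer_id] = decl
        return ok

    def detach(self, layer_id: str) -> None:
        """Detachment is local: no coordination with remaining layers."""
        self.attached.pop(layer_id, None)
```

A re-validation on descriptor change would simply re-run `attach` for each currently attached layer and detach any that no longer pass.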

Once attached, layers run as parallel consumers. They do not form a pipeline in which the output of one layer becomes the input of the next; instead, each layer reads from the same memory view and writes to the same lineage record concurrently. The lineage log is the only shared mutable state, and it is structured so that concurrent appends from independent layers do not conflict: each entry is keyed by the originating layer identifier, the monotonic memory version it was computed against, and a content hash of the layer's output. Verifiers traversing the lineage can therefore reconstruct, for any moment in the agent's history, exactly which layers were attached, which memory version each consulted, and what each contributed.
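The conflict-free append keying can be illustrated with a small sketch, assuming SHA-256 as the content hash and an in-process lock to serialize the physical append (the disclosure does not fix either choice; both are assumptions here):

```python
import hashlib
import threading

class LineageLog:
    """Append-only log; each entry is keyed by originating layer,
    the memory version it was computed against, and a content hash,
    so concurrent appends from independent layers never conflict."""

    def __init__(self):
        self._entries = []
        self._lock = threading.Lock()  # serializes only the append itself

    def append(self, layer_id: str, memory_version: int, payload: bytes) -> dict:
        entry = {
            "layer": layer_id,
            "version": memory_version,  # monotonic memory version consulted
            "hash": hashlib.sha256(payload).hexdigest(),
            "payload": payload,
        }
        with self._lock:
            self._entries.append(entry)
        return entry

    def layers_active_at(self, version: int) -> set:
        """A verifier reconstructs which layers contributed at a version."""
        return {e["layer"] for e in self._entries if e["version"] == version}
```

Because entries carry their own key, the order in which concurrent appends land in the log is immaterial to verification.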

Detachment is symmetric. A layer may be removed at any time without coordination with the layers that remain. Because no layer depends on another's output as input, removal does not invalidate work in progress. The lineage entries written by the detached layer remain in the log as historical record; the absence of new entries from that layer is itself information that downstream verifiers can interpret. The stack does not attempt to mask absence or to substitute default behavior for a layer that has been removed; the verifier sees the omission directly and applies its own policy to it.

Per-layer governance is enforced by the layer module itself, which carries its governance predicate alongside its functional logic. The predicate evaluates the proposed lineage entry against policy state, which is itself memory-resident and signed. If the predicate rejects the entry, the layer's write is refused and the rejection is logged. There is no central admission controller; each layer adjudicates its own writes against its own policy, and the parallelism of the stack means that one layer's policy violation does not stall the others.
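A layer-local admission check might look like the following sketch. The names (`Layer`, `admit_write`) and the dict-based entry and policy representations are illustrative assumptions; the point is that the predicate travels with the layer and rejections are recorded rather than silently dropped:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Layer:
    layer_id: str
    # governance predicate: (proposed entry, signed policy state) -> admissible?
    predicate: Callable[[dict, dict], bool]

def admit_write(layer: Layer, entry: dict, policy_state: dict, log: list) -> bool:
    """Layer-local admission: the layer adjudicates its own write.
    There is no central controller, so a rejection here never stalls
    any other layer."""
    if layer.predicate(entry, policy_state):
        log.append({"layer": layer.layer_id, "kind": "write", **entry})
        return True
    # The rejection itself is logged as lineage.
    log.append({"layer": layer.layer_id, "kind": "rejected",
                "entry_hash": hash(frozenset(entry.items()))})
    return False
```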

Operating Parameters

A working implementation operates within a defined envelope. The number of concurrently attached layers is bounded by the agent runtime's capability descriptor, which in reference deployments admits between four and sixteen layers; the upper bound reflects the cost of validating concurrent appends against the lineage log rather than any logical limit on composition. The memory view exposed to layers is versioned at a granularity coarse enough to amortize hashing cost across multiple reads but fine enough to keep the visible staleness below the round-trip latency between cooperating agents; reference parameters place the version interval between one and ten milliseconds.
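The reference envelope can be captured as a small validated configuration object. This is a sketch of the stated parameters only; the class name and the choice to raise `ValueError` on violation are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OperatingEnvelope:
    """Reference-deployment bounds from the text (illustrative defaults)."""
    min_layers: int = 4
    max_layers: int = 16           # bound reflects append-validation cost
    version_interval_ms: float = 5.0  # reference range: 1 to 10 ms

    def validate(self) -> None:
        if not (4 <= self.min_layers <= self.max_layers <= 16):
            raise ValueError("layer count outside reference envelope (4..16)")
        if not (1.0 <= self.version_interval_ms <= 10.0):
            raise ValueError("version interval outside 1..10 ms")
```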

Lineage entries are sized to a fixed envelope of typed fields plus a variable-length opaque payload, with the payload bounded so that a full lineage record can be transmitted within a single network frame for the most common deployment topologies. When a layer needs to record more than the payload bound permits, it records a content hash and stores the underlying material in a side log that the lineage entry references. This keeps the lineage record itself small and verifiable without limiting the expressive power of any individual layer.
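The overflow-to-side-log rule can be sketched as follows. The payload bound of 1024 bytes is an illustrative stand-in for the frame-derived bound, and the dict-keyed side log is an assumption; only the hash-and-reference pattern comes from the text:

```python
import hashlib

PAYLOAD_BOUND = 1024  # illustrative: keep a full record within one frame

def make_entry(layer_id: str, version: int, payload: bytes, side_log: dict) -> dict:
    """Keep the lineage record small and verifiable: oversize payloads
    move to a side log, and the entry carries only their content hash."""
    digest = hashlib.sha256(payload).hexdigest()
    if len(payload) <= PAYLOAD_BOUND:
        return {"layer": layer_id, "version": version,
                "hash": digest, "payload": payload}
    side_log[digest] = payload  # referenced by hash, never inlined
    return {"layer": layer_id, "version": version,
            "hash": digest, "payload": None, "side_ref": digest}
```

A verifier checks an out-of-band payload by re-hashing the side-log material and comparing it to the hash carried in the lineage entry.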

Per-layer governance predicates are required to be total functions over the declared input schema and to terminate within a bounded time. The runtime measures predicate execution and disables layers that exceed the bound, recording the disablement in the lineage so that verifiers can distinguish a layer that chose not to write from a layer that was forcibly silenced. Capability descriptors are signed by the operator of the host node and are themselves subject to a meta-policy that prevents a node from advertising capabilities it does not possess.
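The timeout discipline might be enforced as in the sketch below. For simplicity it measures wall-clock time after the call rather than preempting a runaway predicate, which a production runtime would need to do; the bound value and all names are assumptions:

```python
import time
from dataclasses import dataclass
from typing import Callable

PREDICATE_BOUND_S = 0.005  # illustrative execution bound

@dataclass
class GovernedLayer:
    layer_id: str
    predicate: Callable[[dict, dict], bool]
    disabled: bool = False

def run_predicate(layer: GovernedLayer, entry: dict,
                  policy_state: dict, lineage: list) -> bool:
    """Measure predicate execution and disable layers that exceed the
    bound, recording the disablement in lineage so verifiers can tell
    a layer that chose not to write from one that was silenced."""
    start = time.monotonic()
    verdict = layer.predicate(entry, policy_state)
    elapsed = time.monotonic() - start
    if elapsed > PREDICATE_BOUND_S:
        layer.disabled = True
        lineage.append({"layer": layer.layer_id, "kind": "disabled",
                        "elapsed_s": elapsed})
        return False
    return verdict
```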

Attachment and detachment are designed to be inexpensive. The cost of attaching a layer is dominated by validation of its declared schemas against the capability descriptor and registration of its governance predicate; both are bounded operations that do not require quiescing the rest of the stack. Detachment is similarly local: the layer module stops consuming the memory view and stops appending to the lineage, and the runtime records the detachment event. No global barrier is required.

Alternative Embodiments

The horizontally composable stack admits embodiments at several scales. In a single-process embodiment, layer modules are loaded as in-process plug-ins sharing a memory-mapped view of the agent's working memory and a lock-free lineage append structure. This embodiment minimizes overhead and is suited to tightly coupled agents that nonetheless benefit from the auditability of per-layer lineage. The same logical structure scales to a multi-process embodiment in which each layer runs as an isolated process, the memory view is exposed via a shared mapping with read-only semantics for layer processes, and the lineage append is mediated by a small kernel module that enforces ordering.

A distributed embodiment places layers on separate physical nodes that share a replicated memory view via a consensus or CRDT substrate. The lineage record is correspondingly replicated, and per-layer governance predicates are evaluated locally at each node. This embodiment is appropriate for federations in which different organizations own different layers and do not wish to grant each other code-execution authority. The composability property is preserved because layers still consume the same canonical view in parallel and still append independently; the only difference is that the substrate carrying the view and the lineage is itself distributed.

Edge embodiments contemplate nodes whose capability descriptors advertise only a subset of layers. A constrained device may attach a minimal set sufficient for its role and rely on cooperating peers to operate the remaining layers. The stack's composability ensures that the constrained node remains a first-class participant: the layers it operates produce verifiable lineage, and the layers it omits are visibly absent rather than silently faked. This permits heterogeneous deployments without bifurcating the protocol.

Embodiments may also vary the memory substrate itself. Working memory may be backed by a content-addressed store, a tuple space, a typed event log, or a hybrid arrangement combining several. The layer interface abstracts these substrate choices so that layer modules written against the canonical memory view operate unchanged across substrates. Lineage records may likewise be carried in an in-band log appended to the memory substrate or in an out-of-band ledger reachable via a substrate-resident pointer; both arrangements preserve the verifiability property required of the composable stack.

Finally, embodiments differ in how layers negotiate attachment. A static embodiment fixes the layer set at agent start and disallows runtime reconfiguration; a dynamic embodiment permits attachment and detachment at any time subject to capability validation. Hybrid embodiments fix a core set of mandatory layers and permit optional layers to attach and detach freely. The patent claims encompass all such arrangements provided that the layers remain independent consumers of the memory view, append independently to a shared lineage, and operate under per-layer governance.

Composition with Other Mechanisms

The horizontally composable stack composes with other mechanisms of the memory-native protocol because it is built on the same agent-resident memory substrate they require. Lineage-bearing transport, structural addressing, and policy-resident routing all read and write the same memory view that the composable layers consume; their lineage entries appear in the same shared record, and their governance predicates participate in the same per-layer enforcement regime. A composable layer may delegate its substantive work to one of these mechanisms while contributing its own trace, or it may operate alongside them as a peer.

Composition with policy and governance frameworks is direct: the per-layer governance predicate accepts policy expressions of arbitrary complexity provided they evaluate within the predicate's time bound. Organizations may attach a layer whose sole purpose is to enforce a regulatory predicate over the writes of other layers, or they may distribute regulatory enforcement across multiple specialized layers. Either arrangement preserves the composability property because the policy layers are themselves independent consumers of the memory view.
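A policy-only layer of the kind described can be sketched as an ordinary parallel consumer of the lineage record. The `"pii.export"` schema and the violation-entry shape are hypothetical examples, not part of the disclosure:

```python
def regulatory_layer(lineage_entries: list, forbidden_schema: str) -> list:
    """A layer whose sole function is policy: it reads other layers'
    lineage entries and emits a violation entry for each write that
    matches a forbidden schema (illustrative rule)."""
    violations = []
    for e in lineage_entries:
        if e.get("schema") == forbidden_schema:
            violations.append({"layer": "regulatory", "kind": "violation",
                               "offender": e["layer"],
                               "version": e["version"]})
    return violations
```

Because the policy layer only reads the shared view and appends its own entries, adding or removing it leaves every other layer untouched, which is the composability property at work.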

Composition with verification and audit tooling is supported through the lineage record itself. External auditors do not need access to the running agent; they need only the lineage record and the capability descriptors that were in force during the period of interest. The composable stack thereby separates operation from audit, allowing the audit surface to be exposed without exposing the operational substrate.

Prior-Art Distinctions

Conventional protocol stacks built on the OSI or TCP/IP lineage are vertically layered: each layer consumes the output of the layer below and produces input for the layer above. Removing or substituting a layer requires that adjacent layers tolerate the change, and in practice this tolerance is limited. The horizontally composable stack departs from this arrangement by removing the input-output dependency between layers entirely; layers consume from a shared memory view and contribute to a shared lineage, never to one another directly.

Plug-in and middleware architectures permit functions to be added and removed at runtime but typically rely on a host process that mediates all data flow and enforces a single governance regime. The composable stack distributes governance across layers, eliminates the central mediator, and exposes the absence of a layer to verifiers rather than masking it. Service-mesh sidecars share some of the parallel-consumer character but operate over network traffic rather than agent-resident memory and lack the typed lineage structure that makes per-layer audit tractable.

Capability-based systems anticipate the use of capability descriptors to gate the attachment of functional modules, but they address neither the coordination of parallel writes to a shared verifiable record nor the per-layer governance predicate. Lineage-tracking systems anticipate the recording of provenance but typically treat lineage as a secondary artifact produced by a primary pipeline. The composable stack inverts this relationship: lineage is the shared structure through which layers coordinate, and functional output is a property of the lineage rather than of a separate data path.

Disclosure Scope

This article describes the horizontally composable protocol stack as disclosed in Provisional Application 64/050,895 covering the memory-native protocol for cognition-compatible networking. The disclosure encompasses the layer attachment and detachment mechanism, the parallel consumption of an agent-resident memory view, the shared append-only lineage record into which layers contribute independently, the per-layer governance predicate carried by each layer module, and the capability descriptor that gates attachment.

The scope extends to embodiments that vary the memory substrate, the lineage carrier, the layer execution environment, the attachment lifecycle, and the distribution of layers across processes or nodes, provided that the layers remain independent consumers of a shared canonical view, append independently to a shared lineage, and operate under per-layer governance. Particular implementations of any single layer's substantive function are out of scope for this disclosure except insofar as they exemplify the composability property.

Claims arising from the disclosure cover the structural arrangement and its operational consequences, including the verifiable absence of omitted layers, the auditability of per-layer contributions, and the resilience of the stack to runtime reconfiguration. Implementations practicing one or more of these features in combination fall within the claim scope regardless of the specific networking, cognition, or governance domain in which they are deployed.

Invented by Nick Clark
Founding Investors: Anonymous, Devin Wilkie