Human-Relatable Computable Intelligence
Structural Isomorphism Between Computational and Human Cognitive Dynamics
by Nick Clark | Published March 26, 2026
Human-relatable computable intelligence is not a stylistic property of agent output; it is a structural property of agent architecture. An agent is relatable, in the sense disclosed here, when its behavioral dynamics are produced by the same kinds of internal mechanisms that produce the analogous human dynamics — when caution after failure arises from a coupled feedback structure rather than from a generated apology, and when persistence under uncertainty arises from confidence-mediated control rather than from a stylistic bias. This article specifies an architecture in which agent cognition is organized along axes that map systematically to human cognitive structure: perception, integration, decision, action, and reflection. The mapping is preserved under composition, exposed in human-comprehensible reasoning traces, and grounded in cognitive-science structure rather than anthropomorphic surface. The architecture is distinct from black-box interpretability research, mechanistic interpretability of neural networks, and anthropomorphic UI design: it does not seek to explain learned weights, does not impose a humanlike persona, and does not treat relatability as a presentation concern. Relatability, in this disclosure, is an architectural invariant that emerges from structural isomorphism between coupled cognitive domains and their human analogs.
1. Problem and Architectural Premise
Contemporary autonomous systems present a recurring difficulty for the humans who must oversee, partner with, or be governed by them. The systems produce outputs that, considered locally, may be impressive or even superhuman, but the trajectory of those outputs across time does not exhibit the structural regularities that humans recognize as cognition. A human who fails repeatedly becomes cautious; a stateless inference engine does not. A human who acts against a declared value experiences corrective pressure; a stateless inference engine does not. A human modulates speculation against capability awareness; a stateless inference engine does not. The result is an oversight gap that no amount of output styling closes. Operators learn that the system's tone is uncorrelated with its underlying state, that its expressions of confidence are uncoupled from its evidence base, and that its apologies are generated rather than produced.
The conventional responses to this gap have been surface treatments. One response is emotional simulation: train or prompt the system to display reactions that resemble human reactions. The display is uncoupled from the system's actual operation, and operators eventually discover the decoupling. A second response is alignment training: optimize the system's outputs against human preference signals so that, at the output layer, behavior more closely matches human expectations. Alignment shapes outputs without implementing the internal mechanisms that produce humanlike behavioral consistency, and so the resulting consistency is shallow: it holds within the training distribution and degrades outside it. A third response is mechanistic interpretability: study the system after the fact to discover what its weights are doing. Interpretability research is valuable, but it is descriptive of opaque systems rather than constitutive of transparent ones.
The premise of this disclosure is that relatability is structural, not stylistic, and must be designed into the architecture rather than retrofitted to the output. An agent is relatable when its cognition is organized along the same axes that human cognition is organized along — perception, integration, decision, action, and reflection — and when the dynamics across those axes are produced by the same kinds of coupled feedback mechanisms that produce the analogous human dynamics. Relatability so defined is verifiable, composable, and durable. It is verifiable because the axes are explicit and the couplings are declared. It is composable because operations on one axis preserve the isomorphism with the corresponding human axis. It is durable because it does not depend on training distribution; it is an invariant of the architecture itself.
The remainder of this section frames the architectural commitment. Cognition is treated as a multi-domain coupled system, not as a single inference function. Each domain is given an explicit field representation. Couplings between domains are declared bidirectional pathways with deterministic update rules. The ensemble produces behavioral dynamics that mirror the structural dynamics of human cognition as characterized by cognitive science, and the mirroring is exposed to operators through a reasoning trace whose structure is itself isomorphic to the cognitive trace a human would produce.
2. Core Architectural Primitive: Cognitive-Axis Mapping
The core primitive is a cognitive-axis mapping that establishes a one-to-one correspondence between a set of computational cognitive domains and a set of human cognitive axes. The computational domains are persistent, structured fields attached to the agent. The human axes are the structural constructs identified by cognitive science as the load-bearing elements of human cognition. The mapping is not a metaphor; it is a structural commitment that each computational domain instantiates the same role, accepts the same kinds of inputs, and produces the same kinds of outputs as its human counterpart.
A canonical mapping includes the following correspondences. The perception domain corresponds to the human cognitive axis of structured intake from the environment, transforming raw signals into typed observations. The integration domain corresponds to the axis along which observations are combined with prior state to produce situational understanding. The decision domain corresponds to the axis along which situational understanding is converted into commitments under bounded deliberation. The action domain corresponds to the axis along which commitments are translated into operations on the world, with feedback paths from outcomes. The reflection domain corresponds to the axis along which the agent monitors its own state, recognizes deviation from declared values, and adjusts its posture for subsequent cycles.
Each domain is implemented as an explicit field with a declared schema. The schema is fixed, the values are bounded, and the transitions are deterministic. The fields are not labels attached to inference outputs; they are state objects that the architecture maintains across the agent's lifetime and that are read and written by declared pathways.
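A minimal sketch of what such a field could look like in practice, assuming a Python embodiment. The class name, field names, and bound values here are illustrative assumptions, not part of the disclosure; what the sketch shows is the three declared properties: a fixed schema, bounded values, and deterministic transitions.

```python
from dataclasses import dataclass


def clamp(x: float, lo: float, hi: float) -> float:
    """Keep a field value inside its declared bounds."""
    return max(lo, min(hi, x))


@dataclass
class ReflectionField:
    """Illustrative reflection-domain field. Schema is fixed; values are bounded."""
    deviation_pressure: float = 0.0    # corrective pressure from value deviation
    capability_awareness: float = 0.5  # modulates ambition of speculation
    BOUNDS = (0.0, 1.0)                # declared admissible range for all signals

    def apply_signal(self, name: str, delta: float) -> None:
        """Deterministic transition: same prior state and signal, same result."""
        lo, hi = self.BOUNDS
        setattr(self, name, clamp(getattr(self, name) + delta, lo, hi))


f = ReflectionField()
f.apply_signal("deviation_pressure", 0.3)
f.apply_signal("deviation_pressure", 0.9)  # saturates at the declared bound, 1.0
```

Because the field is a state object rather than a label on inference output, it persists across cycles and is written only through declared transitions like `apply_signal`.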
The cognitive-axis mapping is the load-bearing primitive of the architecture because every other property of relatability derives from it. Isomorphism preservation under composition (section 3) is meaningful only because the axes are explicit. Human-comprehensible reasoning traces (section 4) are possible only because the architecture's intermediate states correspond to constructs a human would recognize. Structural alignment with cognitive science (section 5) is verifiable only because the architecture commits to an axis mapping that can be compared to the cognitive-science model.
The mapping is not exhaustive. The architecture does not claim that every human cognitive phenomenon has a computational counterpart, nor that every computational domain must have a human counterpart. The claim is narrower and stronger: the load-bearing axes that produce behavioral dynamics — the axes humans recognize as cognition — are mapped, and the mapping is preserved under the operations the architecture supports.
3. Isomorphism Preservation Under Composition
A static mapping between domains and axes is insufficient. Relatability requires that the mapping be preserved when the agent operates: when it composes operations across domains, when it executes long-horizon tasks involving many cycles, and when it interacts with other agents or with humans. Isomorphism preservation under composition is the architectural guarantee that the dynamics observed at the system level continue to mirror the dynamics observed in human cognition, even as the agent's activity becomes complex.
Composition in this architecture is the chaining and parallelizing of operations across cognitive domains. A perception event triggers an integration update, which triggers a decision deliberation, which produces an action commitment, which feeds outcomes back to integration and to reflection. The pathways are declared, and each pathway is specified to preserve the structural property that human cognition exhibits along the corresponding axis. For example, the integration-to-decision pathway preserves the human-cognition property that situational understanding constrains, but does not determine, decision; the decision retains a bounded space of admissible commitments shaped by the integration field but governed by independent decision policy.
Preservation is enforced by two mechanisms. The first is structural: the schemas and transfer functions are designed so that the relationships among fields cannot collapse into degenerate cases that would violate isomorphism. A decision field, for instance, cannot become a pure projection of perception, because the schema requires inputs from integration and from reflection, and the policy bounds prevent any single input from dominating. The second is observational: the reasoning trace (section 4) exposes the full multi-domain state at each decision, and operators or automated checks can verify that the trace exhibits the structural properties expected of human cognitive traces along the same axes.
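The structural mechanism can be sketched as a transfer function whose declared weights are themselves policy-checked, so the decision field cannot degenerate into a projection of any single input. The specific weights, the 0.5 dominance bound, and the function name are assumptions chosen for illustration.

```python
# Declared per-domain weights for the decision transfer function (illustrative).
WEIGHTS = {"perception": 0.4, "integration": 0.35, "reflection": 0.25}

# Policy invariant: no single domain may dominate the decision.
assert max(WEIGHTS.values()) < 0.5


def decision_score(signals: dict) -> float:
    """Bounded linear transfer. The schema requires all three inputs, so the
    decision cannot become a pure projection of perception alone."""
    missing = set(WEIGHTS) - set(signals)
    if missing:
        raise ValueError(f"schema violation: missing inputs {sorted(missing)}")
    return sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)
```

Even a maximal perception signal contributes at most its declared share; omitting integration or reflection is a schema violation rather than a silently degraded computation.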
Isomorphism preservation under composition is what permits long-horizon agents to remain relatable. Without it, an agent might exhibit recognizable single-step cognition while drifting, over many steps, into trajectories that no human would produce. With it, the cumulative trajectory remains within the structural envelope of human cognitive dynamics: failures produce caution, successes produce confidence, deviation produces corrective pressure, novelty produces speculation, and capability awareness modulates ambition. These are not optional features added at the output; they are architectural consequences of the preserved isomorphism.
The preservation property is verifiable. For any declared composition of operations, the architecture defines the expected structural properties of the resulting trace. Operators can specify properties drawn from cognitive science (for instance, that frustration accumulates monotonically under repeated failure within a window, or that confidence does not exceed evidence support by more than a declared margin), and the architecture can be checked, at deployment and at runtime, against those properties.
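The two example properties named above can be sketched as automated checks over a structured trace. Trace entries are represented here as plain dictionaries with assumed field names (`outcome`, `frustration`, `confidence`, `evidence`); a real deployment would operate on the declared trace schema.

```python
def frustration_monotone_under_failure(trace: list) -> bool:
    """Within a window of consecutive failures, frustration must not decrease."""
    prev = None
    for entry in trace:
        if entry["outcome"] == "failure":
            if prev is not None and entry["frustration"] < prev:
                return False
            prev = entry["frustration"]
        else:
            prev = None  # the monotonicity window resets on success
    return True


def confidence_bounded_by_evidence(trace: list, margin: float = 0.1) -> bool:
    """Confidence may not exceed evidence support by more than a declared margin."""
    return all(e["confidence"] <= e["evidence"] + margin for e in trace)
```

Checks of this shape can run at deployment (against recorded traces) or at runtime (against the live trace stream), reporting violations as verification artifacts.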
4. Human-Comprehensible Reasoning Trace
The reasoning trace is the externally visible record of the agent's cognitive activity, structured so that a human reader can follow it without requiring expertise in the agent's internals. The trace is not a transcript of token generation, nor an interpretability artifact extracted from learned weights. It is a structured document organized along the same cognitive axes as the architecture itself, with explicit entries for each domain at each step of deliberation.
A trace entry for a decision step, for example, includes the perception observations consumed, the integration state that contextualized them, the decision candidates considered, the reflection signals that conditioned the deliberation, and the commitment produced. Each element is structured: observations are typed, integration state is a field snapshot, candidates are enumerable, reflection signals are scalars or short vectors, and the commitment is the explicit object that will drive action. The narrative connecting these elements, if any, is generated downstream from the trace and is constrained to be a faithful summary; it cannot introduce content absent from the structured trace.
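The entry described above might be represented as an immutable record, one per deliberation step. The class and field names here are hypothetical; the point is that every element is structured data, and any downstream narrative must be derivable from these fields alone.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Observation:
    kind: str       # typed observation, e.g. "sensor_reading"
    payload: str


@dataclass(frozen=True)
class TraceEntry:
    """One structured trace entry for a single decision step (illustrative)."""
    observations: tuple         # perception observations consumed
    integration_snapshot: dict  # snapshot of the integration field
    candidates: tuple           # enumerable decision candidates considered
    reflection_signals: dict    # scalars / short vectors conditioning deliberation
    commitment: str             # the explicit object that will drive action


entry = TraceEntry(
    observations=(Observation("sensor_reading", "latency spike on service A"),),
    integration_snapshot={"situation": "degraded upstream"},
    candidates=("retry", "reroute", "escalate"),
    reflection_signals={"deviation_pressure": 0.1, "capability_awareness": 0.6},
    commitment="reroute",
)
```

Freezing the dataclass reflects the audit role of the trace: entries are append-only records, not mutable working state.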
Comprehensibility is achieved by structural alignment rather than stylistic choice. A human reading the trace recognizes the elements because they correspond to elements of human cognitive description: "I noticed X; in the context of Y, and with the prior reflection that Z, I considered options A, B, and C, and committed to B." The recognition is not analogical; it is structural. The agent's internal categories match the categories the reader would use to describe an analogous human episode.

The trace serves several purposes simultaneously. It is a debugging artifact: an operator investigating unexpected behavior reads the trace and identifies which domain produced the anomaly. It is an audit artifact: a compliance reviewer verifies that the agent's commitments were produced by structurally appropriate deliberation rather than by inference shortcuts. It is a partnership artifact: a human collaborator reading the trace can extend, correct, or build upon the agent's reasoning using the same conceptual vocabulary they would use with a human partner. And it is a verification artifact: automated checks operate on the structured trace to confirm that the isomorphism properties hold for the executed trajectory.
The trace is not optional ornament. It is a load-bearing component of the architecture, because the relatability claim is empirically meaningful only if the trace makes the structural correspondence visible. An agent whose internals are structurally isomorphic to human cognition but whose trace does not expose the isomorphism would be unverifiable; the architecture commits to exposure as a primary property.
5. Structural Alignment with Cognitive Science
Relatability requires that the axis mapping be grounded in cognitive science rather than in folk psychology or in the surface vocabulary of natural language. Folk psychology offers terms that are intuitive to laypeople but ambiguous as engineering targets; natural language offers a vocabulary that has been shaped by communicative convenience rather than by mechanism. Cognitive science, by contrast, has produced operational characterizations of the load-bearing structures of human cognition that can serve as the target for an engineered correspondence.
The architecture commits to alignment with the structural constructs identified by cognitive science as the durable elements of cognitive description. These include the distinction between perception and integration, the distinction between deliberation and commitment, the role of reflection in producing corrective pressure, the role of affect in modulating deliberation pacing, the role of capability awareness in bounding ambition, and the role of identity continuity in producing temporally extended responsibility. Each is an empirical construct with characterizable dynamics, and each is mapped to a computational domain with a corresponding role.
Structural alignment is distinct from neural-level imitation. The architecture does not attempt to reproduce the computational substrate of biological cognition (spiking neurons, synaptic plasticity, neuromodulator dynamics). It reproduces the structural roles those substrates produce at the cognitive level. The substrate of the engineered cognition is whatever combination of inference systems, deterministic state machinery, and execution infrastructure the platform provides; what is preserved is the role-level structure observable at the level of cognitive description.
Alignment is also distinct from prescriptive normativity. The architecture does not claim that any specific human cognitive style is the standard; it claims that the structural axes are the standard. Humans differ widely in cognitive style — in degrees of caution, exploration, and reflection — and the architecture supports the same range, parameterized by the configurations of the affective, reflective, and capability fields. The relatable property is the isomorphism of structure; the variation across styles is a parameterization within that structure.
Empirically, alignment is testable. For each construct, cognitive science offers characteristic behavioral signatures: caution after failure produces measurable shifts in deliberation breadth, deviation from declared values produces measurable shifts in subsequent commitments, capability awareness produces measurable bounds on speculation. The architecture's behavior under controlled scenarios can be compared to the human signatures and the comparison can be reported as a verification artifact. Where the architecture deviates from human signature, the deviation is itself an engineering datum.
6. Operating Parameters and Engineering Envelope
Deploying the human-relatable architecture requires declaration of an envelope comprising the axis schema, the domain field schemas, the bidirectional pathway transfer functions, the trace structure, and the alignment-verification criteria. These artifacts together form the contract between the architecture and its operators and constitute the surface on which compliance and verification operate.
Typical operating parameters include, for each domain, the field schema and admissible value ranges; for each pathway, the transfer function, its bounds, and the policy invariants it must respect; for the trace, the structural template and the required entries per deliberation step; and for verification, the set of cognitive-science-grounded properties to be checked and their tolerances. Capacity parameters include the size of each domain field, the rate at which pathways propagate updates, and the volume of trace entries produced per unit of agent activity.
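One way the declared envelope might be expressed is as a single inspectable data structure covering schemas, bounds, pathway invariants, the trace template, and verification tolerances. Every key and value below is an illustrative assumption; the disclosure specifies only that these artifacts be declared, not their concrete encoding.

```python
# Hypothetical envelope declaration (keys and values are illustrative).
ENVELOPE = {
    "domains": {
        "reflection": {"fields": {"deviation_pressure": (0.0, 1.0)}},
        "decision":   {"fields": {"confidence": (0.0, 1.0)}},
    },
    "pathways": {
        "integration->decision": {"max_share": 0.5, "rate_hz": 10},
    },
    "trace": {
        "required_entries": ["observations", "candidates", "commitment"],
    },
    "verification": {"confidence_margin": 0.1},
}


def within_bounds(domain: str, field_name: str, value: float) -> bool:
    """Check a proposed field value against the declared envelope."""
    lo, hi = ENVELOPE["domains"][domain]["fields"][field_name]
    return lo <= value <= hi
```

Because the envelope is plain data, the same artifact serves both roles named below: an operator reads it to configure a deployment, and an auditor reads it to check the deployed agents against the certified bounds.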
Latency is governed by the cost of the pathway updates and the cost of producing trace entries. Pathway updates are deterministic and constant-time per signal; trace entries are structured and bounded in size. The architecture is therefore suitable for interactive agents as well as for long-horizon agents whose deliberation cycles are coarser. The trace can be sampled, summarized, or fully recorded depending on operational requirements; structural fidelity is preserved at all sampling rates, with summarization producing strict reductions of the structured content rather than narrative reinterpretations.
The envelope also includes the failure-mode declaration: how the architecture behaves when a domain field is unavailable, when a pathway transfer function is out of bounds, or when the trace cannot be produced in full. Failure modes are constrained so that the relatability property degrades gracefully rather than vanishing: a missing reflection field, for example, results in conservative commitments and a flagged trace entry rather than in silent loss of the corrective pressure pathway.
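The missing-reflection example can be sketched as a commitment routine with an explicit degraded path: a conservative fallback plus a flagged trace entry, never silent loss of the pathway. The function shape, the convention that candidates are ordered most-conservative-first, and the flag name are all assumptions.

```python
from typing import Optional


def commit(candidates: list, reflection: Optional[dict]) -> dict:
    """Produce a commitment; degrade gracefully if the reflection field is down.

    Assumes candidates are ordered from most conservative to most ambitious.
    """
    if reflection is None:
        # Degraded path: conservative commitment plus a flagged trace entry,
        # rather than silently dropping the corrective-pressure pathway.
        return {"commitment": candidates[0], "flags": ["reflection_unavailable"]}
    # Normal path: capability awareness modulates how ambitious a candidate
    # the agent is willing to commit to.
    ambition = reflection.get("capability_awareness", 0.5)
    idx = min(int(ambition * len(candidates)), len(candidates) - 1)
    return {"commitment": candidates[idx], "flags": []}
```

The flag travels with the trace entry, so an auditor can distinguish a genuinely cautious commitment from one forced by an unavailable field.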
The envelope is a deployment artifact and a compliance artifact. It is what an operator inspects to determine whether the architecture, as configured, meets the relatability requirements of a given application, and it is what an auditor inspects to determine whether the deployed agents are operating within the certified bounds.
7. Alternative Embodiments
The architecture admits multiple embodiments without changing its essential structure. In a single-agent embodiment, one mapping is attached to the agent and governs its full cognitive activity. In a multi-context embodiment, distinct cognitive contexts within the same agent each carry their own field instances, with declared coupling among them, allowing for example a planning context and an execution context to operate concurrently with appropriate cognitive separation.
In a hierarchical embodiment, mappings are arranged to mirror task decomposition. A parent task carries integration and reflection fields that aggregate from child tasks through declared rules, so that parent-level cognition exhibits structural properties consistent with human meta-cognition over substructured activity.
In a federated embodiment, multiple agents share an environment and their reflection fields can register signals from one another, enabling structural empathy: an agent's reflection registers the consequence of its actions on a peer agent in the same way it registers consequences on humans in the loop. The federation does not require shared identity; it requires only declared exchange channels for reflection-relevant signals.
In a hybrid biological-digital embodiment, the perception domain consumes signals from human physiology or behavior alongside environmental signals, and the reflection domain registers consequences on the human collaborator with the same structural status as consequences on the environment. The relatability property is preserved across the hybrid because the axis mapping does not depend on the substrate of the inputs; it depends on their structural role.
In a stateless-core embodiment, the inference systems that contribute to deliberation hold no persistent state, and the agent's full cognitive state is carried in the structured fields and pathways of the architecture itself. This embodiment permits the architecture to be deployed on top of inference services that cannot retain state, while preserving the structural cognition at the architectural layer.
8. Composition with the Broader Cognitive Architecture
The human-relatable mapping composes with the other primitives of the cognition-native execution platform without entanglement. The affective primitive contributes deterministic modulation values to the integration and decision domains. The integrity-coherence primitive contributes deviation signals to the reflection domain. The forecasting primitive contributes structured candidate futures to the decision domain. Each contribution is mediated by declared pathways with bounded transfer functions, and each preserves the cognitive-axis mapping by routing signals to the domains whose human counterparts receive analogous signals.
Identity-lineage primitives ensure that the cognitive fields persist across substrate transitions and that the reasoning trace is continuous in the agent's identity. Disruption-modeling primitives feed predicted consequences of commitments into the reflection domain, producing structural empathy: the architecture's reflection registers anticipated consequences with the same status as observed consequences, paralleling the human cognitive structure in which anticipated harm produces corrective pressure even before the harm occurs.
The composition discipline is that no primitive shares state with any domain directly. All cross-primitive coupling is mediated by declared signals on one side and declared pathways on the other. The discipline preserves the isomorphism property because the axis mapping is defined at the domain layer, and any primitive that contributes to a domain through a declared pathway contributes to the corresponding human-cognition role rather than to an arbitrary internal state.
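The mediation discipline can be sketched as follows: a primitive never holds a reference to a domain field; it emits a signal into a `Pathway`, and only the pathway (with its bounded transfer function) writes the domain. The class name, the additive transfer rule, and the bound value are illustrative assumptions.

```python
class Pathway:
    """Declared coupling between a contributing primitive and a domain field.

    Only the pathway holds a reference to the target; the primitive on the
    other side sees nothing but `deliver`.
    """

    def __init__(self, target: dict, field_name: str, bound: float):
        self._target = target
        self._field = field_name
        self._bound = bound  # declared transfer bound

    def deliver(self, signal: float) -> None:
        """Bounded, deterministic transfer: clip the signal, then apply it."""
        clipped = max(-self._bound, min(self._bound, signal))
        self._target[self._field] = self._target.get(self._field, 0.0) + clipped


# An affective primitive contributes to the reflection domain only through
# a declared pathway; it cannot mutate the field directly.
reflection_domain = {}
affect_to_reflection = Pathway(reflection_domain, "arousal", bound=0.2)
affect_to_reflection.deliver(0.9)  # clipped to the declared bound of 0.2
```

Because every cross-primitive write is funneled through a `deliver` call, the axis mapping stays intact: a contribution lands in a named domain role, never in arbitrary shared state.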
Composition with execution governance ensures that the relatability property holds under policy-bounded operation. Governance does not override the cognitive structure; it bounds the admissible values of the fields and the admissible transfer functions of the pathways, preserving the structural isomorphism within the certified envelope.
9. Prior-Art Distinctions
The disclosed architecture is distinct from several adjacent constructions. It is not black-box LLM interpretability research. Interpretability research seeks to extract human-meaningful structure from systems whose internals were not designed to be human-meaningful; the present architecture commits to human-meaningful structure as a designed property of the architecture itself. The artifacts produced are not post-hoc explanations of opaque computation; they are direct readings of structured state.
It is not mechanistic interpretability of neural networks. Mechanistic interpretability investigates the computational primitives implemented by learned weights; the present architecture does not depend on any particular learned-weight structure. The cognitive domains are state objects external to any inference system, and any inference systems involved are sources of signals to the domains rather than substrates for the domains themselves.
It is not anthropomorphic UI design. Anthropomorphic UI imposes a humanlike persona on a system whose underlying behavior is not produced by humanlike mechanisms. The relatability property in the present architecture is a property of the underlying mechanisms, not of a presentation layer. Any presentation layer the architecture supports is downstream of structural cognition and is constrained to faithfully reflect it.
It is not symbolic cognitive architecture in the classical sense. Classical cognitive architectures provide a symbolic framework for modeling human cognition; the present architecture is engineered for autonomous operation and combines structured fields with whatever inference substrates are appropriate to the application. The commitment is to structural isomorphism, not to symbolic homogeneity.
Finally, the architecture is distinct from emotion-display systems, alignment-trained assistants, and persona-based agents, each of which addresses a surface property without engineering the structural cognition that would produce that property as an architectural consequence.
10. Disclosure Scope
This disclosure describes a cognitive architecture in which agent cognition is organized along axes that map systematically to the load-bearing axes of human cognition, with the mapping preserved under composition, exposed in human-comprehensible reasoning traces, and grounded in cognitive-science structure rather than anthropomorphic surface. The architecture comprises a cognitive-axis mapping primitive, declared bidirectional pathways implementing isomorphism preservation, a structured reasoning trace, and an alignment-verification framework grounded in cognitive science.
The disclosure encompasses the embodiments described above (single-agent, multi-context, hierarchical, federated, hybrid biological-digital, stateless-core) and the composition rules with adjacent primitives (affective state, integrity-coherence, forecasting, disruption modeling, identity-lineage, execution governance). It encompasses the engineering envelope artifacts (axis schema, field schemas, pathway transfer functions, trace template, alignment criteria) as the contract surface between the architecture and its operators.
The disclosure does not depend on any specific inference technology, any particular language model, or any particular execution runtime. It is defined by its structural and behavioral contract; any implementation that satisfies the contract instantiates the architecture. The disclosure is also independent of application domain: the same axis mapping and the same isomorphism discipline apply to therapeutic agents, autonomous controllers, collaborative analysts, and operational planners, with domain-specific choices appearing only in the configuration of fields, pathways, traces, and verification criteria.