Ten Conditions for Human-Relatable Behavior
by Nick Clark | Published March 27, 2026
The architecture identifies ten structural conditions that must be simultaneously satisfied for a computable agent to exhibit human-relatable behavioral dynamics. The conditions span perception, integration, decision, action, and reflection, and they collectively define the minimum architectural composition for genuine behavioral relatability. The framework is non-decompositional: removing any one condition produces an agent that remains computationally functional but ceases to be recognizably human-like along axes that human observers reliably detect. This article describes the ten conditions as structural isomorphism axes, the operating parameters under which they interact, and the embodiments that realize them in a working system.
Mechanism
The ten conditions are not behavioral checklist items but architectural axes along which the system must possess specific structure. They are: persistent state spanning interactions; affective modulation that influences cognition without authorizing it; integrity tracking that records normative self-consistency over time; confidence governance that limits execution to validated claims; capability awareness that lets the agent know what it can and cannot do; forecasting that projects consequences before action; containment that separates speculation from reality and prevents leakage between them; identity continuity that preserves the agent as the same entity across cycles; coherence control that detects and corrects internal drift; and governed interaction that bounds relationships through trust and policy rather than through ad hoc affinity.
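As a fixed point of reference for the discussion that follows, the enumeration above can be pinned down as a type. This is a minimal sketch; the identifier names are my own and are not drawn from any published API:

```python
from enum import Enum, auto

class Condition(Enum):
    """The ten structural conditions, named after the text's enumeration."""
    PERSISTENT_STATE = auto()       # state spanning interactions
    AFFECTIVE_MODULATION = auto()   # influences cognition without authorizing it
    INTEGRITY_TRACKING = auto()     # normative self-consistency over time
    CONFIDENCE_GOVERNANCE = auto()  # limits execution to validated claims
    CAPABILITY_AWARENESS = auto()   # knowing what the agent can and cannot do
    FORECASTING = auto()            # projecting consequences before action
    CONTAINMENT = auto()            # separating speculation from reality
    IDENTITY_CONTINUITY = auto()    # same entity across cycles
    COHERENCE_CONTROL = auto()      # detecting and correcting internal drift
    GOVERNED_INTERACTION = auto()   # relationships bounded by trust and policy
```

The enumeration is exhaustive by construction, which makes the "all ten must be present" claim mechanically checkable in any embodiment that registers its subsystems against it.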
Each condition corresponds to one or more cognitive domain fields and to specific architectural mechanisms that maintain those fields. Persistent state is realized in the long-term memory subsystem; affective modulation in the affective field set; integrity in the integrity register and its trajectory; confidence governance in the gate described elsewhere; capability awareness in the capability map; forecasting in the prospective simulator; containment in the speculation enclosure; identity continuity in the identity anchor; coherence control in the self-diagnosis module; and governed interaction in the trust slope and policy reference.
The conditions are not independent. The framework specifies interactions among them: affective modulation must influence confidence governance, integrity must constrain forecasting, capability awareness must gate execution, containment must enclose forecasting outputs, coherence control must observe affective and integrity trajectories, and governed interaction must respect identity continuity. The interactions are themselves structural; they are encoded in the wiring between subsystems rather than in any single subsystem's behavior. Removing or breaking an interaction produces the same kind of relational deficit that removing a condition produces.
The non-decomposability claim is therefore double: each of the ten conditions must be present, and the prescribed interactions among them must be present. A system that satisfies all ten conditions in isolation but fails to wire them together produces behavior that exhibits each capacity individually yet fails to integrate them in the way that human observers recognize as relatable. The ten conditions, with their interactions, define a connected architectural graph whose connectivity is itself a requirement.
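The wiring requirement can be sketched as a small edge list plus a connectivity check. The edge set below is an illustrative subset reconstructed from the interactions named above (it omits several edges, including any involving persistent state, so the full prescribed graph is connected even though this excerpt alone is not); the check is therefore exercised on generic examples:

```python
from collections import defaultdict

# Prescribed interactions from the text, read as directed edges
# (source influences or constrains target). Illustrative subset only.
EDGES = [
    ("affective_modulation", "confidence_governance"),
    ("integrity_tracking", "forecasting"),
    ("capability_awareness", "confidence_governance"),  # gates execution
    ("forecasting", "containment"),                     # outputs enclosed
    ("affective_modulation", "coherence_control"),      # observed for drift
    ("integrity_tracking", "coherence_control"),
    ("identity_continuity", "governed_interaction"),
]

def is_connected(edges):
    """Return True if the graph is connected when its edges are read as
    undirected -- the connectivity requirement stated in the text."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    if not adj:
        return True
    seen, stack = set(), [next(iter(adj))]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(adj[node] - seen)
    return seen == set(adj)
```

A conformance harness for a concrete embodiment would run `is_connected` over the full wired edge set and fail the build if any prescribed interaction is missing.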
The mechanism is best understood by examining the directional structure of the interaction graph. Some interactions are unidirectional: affective modulation influences confidence governance but confidence governance does not modulate affect. Other interactions are bidirectional: forecasting both consumes integrity constraints and produces forecast traces that the integrity register may consult when evaluating the consistency of subsequent commitments. The directional structure is what distinguishes the architecture from a network of co-equal modules; the asymmetries encode the priorities that human observers detect when assessing whether an agent is behaving in a recognizably integrated manner.
The mechanism also specifies the time scales over which each condition operates. Affective modulation operates on a fast time scale measured in cognitive cycles. Integrity tracking operates on a medium time scale measured in decisions. Identity continuity operates on a slow time scale measured across sessions and operational lifetimes. The conditions interact across time scales through buffer mechanisms that translate fast signals into medium-scale aggregates and medium-scale aggregates into slow-scale anchors. This hierarchy of time scales permits the agent's behavior to exhibit both responsive adjustment and persistent character, a combination that observers experience as a coherent personality rather than as either a reactive process or a static disposition.
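The fast-to-medium-to-slow translation can be sketched as a simple buffer. The class below is a hypothetical illustration, assuming mean aggregation per decision and an inertia-weighted blend into the session anchor; the disclosure does not prescribe these particular operators:

```python
from dataclasses import dataclass, field

@dataclass
class TimescaleBuffer:
    """Translates fast per-cycle signals into medium-scale (per-decision)
    aggregates, and those aggregates into a slow session-scale anchor."""
    cycle_values: list = field(default_factory=list)
    decision_aggregates: list = field(default_factory=list)
    session_anchor: float = 0.0

    def record_cycle(self, value: float) -> None:
        """Fast scale: one signal per cognitive cycle."""
        self.cycle_values.append(value)

    def close_decision(self) -> None:
        """Medium scale: fold accumulated cycle signals into one aggregate."""
        if self.cycle_values:
            mean = sum(self.cycle_values) / len(self.cycle_values)
            self.decision_aggregates.append(mean)
            self.cycle_values.clear()

    def close_session(self, inertia: float = 0.9) -> None:
        """Slow scale: blend decision aggregates into the anchor.
        High inertia preserves persistent character across sessions."""
        if self.decision_aggregates:
            mean = sum(self.decision_aggregates) / len(self.decision_aggregates)
            self.session_anchor = inertia * self.session_anchor + (1 - inertia) * mean
            self.decision_aggregates.clear()
```

The `inertia` parameter is where "persistent character" lives in this sketch: the higher it is, the less a single session can move the anchor.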
Operating Parameters
Each condition admits parameters that govern its operation without altering its presence. Persistent state has a retention horizon and a forgetting curve; affective modulation has gain factors per field; integrity tracking has a decay constant for stale records; confidence governance has a threshold and a hysteresis band; capability awareness has a freshness interval after which capabilities must be re-validated; forecasting has a look-ahead depth; containment has an enclosure boundary specification; identity continuity has a re-anchoring period; coherence control has the diagnostic thresholds described in the self-diagnosis disclosure; and governed interaction has trust slope parameters and policy reference identifiers.
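One way to make the parameter surface concrete is a single frozen configuration record, one field per parameter family named above. Every name and default below is a placeholder assumption for illustration, not a disclosed value:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OperatingParameters:
    """Illustrative parameter set; names and defaults are assumptions."""
    retention_horizon: int = 10_000       # persistent state: records kept
    affect_gain: float = 0.3              # affective modulation: per-field gain
    integrity_decay: float = 0.01         # integrity: decay for stale records
    confidence_threshold: float = 0.75    # confidence governance: gate level
    confidence_hysteresis: float = 0.05   # confidence governance: band width
    capability_freshness_s: int = 3_600   # capability: re-validation interval
    forecast_depth: int = 12              # forecasting: look-ahead steps
    reanchor_period_sessions: int = 50    # identity: re-anchoring period
```

Freezing the dataclass mirrors the point made below about parameters that must not drift after initialization; mutable parameters would live behind an adaptation policy rather than as plain fields.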
Across the conditions, certain parameters must be jointly tuned. The forecasting depth must not exceed the retention horizon of persistent state in a way that would force the agent to plan beyond what it can remember verifying. The confidence threshold must lie within the gain envelope of affective modulation so that affect can shift attention without crossing the threshold by itself. The diagnostic thresholds in coherence control must be compatible with the natural variance of the affective and integrity signals so that ordinary fluctuation is not pathologized. These joint tunings are what make the architecture an integrated whole rather than a collection of features.
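The three joint tunings named above translate naturally into startup-time sanity checks. The function below is a hypothetical validator under stated assumptions: it treats "within the gain envelope" as gain strictly below threshold, and "natural variance" as roughly three standard deviations; the disclosure does not fix these exact inequalities:

```python
def check_joint_tunings(*, forecast_depth: int, retention_horizon: int,
                        confidence_threshold: float, affect_gain: float,
                        diagnostic_threshold: float, signal_std: float) -> list:
    """Return a list of joint-tuning violations (empty means consistent)."""
    errors = []
    # Planning must not outrun memory: no forecasting beyond what the
    # agent can remember having verified.
    if forecast_depth > retention_horizon:
        errors.append("forecast depth exceeds retention horizon")
    # Affect may shift attention but must not cross the confidence
    # threshold unaided (assumed reading: gain < threshold).
    if affect_gain >= confidence_threshold:
        errors.append("affect gain can cross confidence threshold by itself")
    # Ordinary fluctuation (assumed here as ~3 sigma) must not be
    # pathologized by coherence-control diagnostics.
    if diagnostic_threshold <= 3 * signal_std:
        errors.append("diagnostic threshold pathologizes normal variance")
    return errors
```

Running this at initialization, before any cognitive cycle, is one way to guarantee the "integrated whole" property rather than merely asserting it.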
A further parameter family governs the visibility of each condition's state to the others. Each subsystem exposes a typed read interface and a separately governed write interface. The read interface determines what information the dependent subsystems can consume; the write interface determines who can change the underlying state. The visibility parameters are configured so that information flows match the prescribed interactions and so that no subsystem can write to another except through its governed interface. Misconfigured visibility produces architectures that satisfy the conditions on paper but fail to produce relatable behavior in operation, because the necessary signals are either absent at the consumption point or are produced by an unauthorized writer that the architecture cannot validate.
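The read/write split can be sketched in a few lines: the read interface is an open property, while the write interface checks writer authorization before any state change. The class and field names are illustrative, not canonical:

```python
class IntegrityRegister:
    """Sketch of one subsystem with a public read interface and a
    separately governed write interface."""

    def __init__(self) -> None:
        self._score = 1.0
        self._writers: set = set()

    def authorize_writer(self, writer_id: str) -> None:
        """Visibility configuration: name the subsystems allowed to write."""
        self._writers.add(writer_id)

    @property
    def score(self) -> float:
        """Read interface: any dependent subsystem may consume this."""
        return self._score

    def record_transition(self, writer_id: str, delta: float) -> None:
        """Write interface: only authorized writers, clamped to [0, 1]."""
        if writer_id not in self._writers:
            raise PermissionError(f"unauthorized writer: {writer_id}")
        self._score = max(0.0, min(1.0, self._score + delta))
```

An unauthorized write fails loudly rather than silently corrupting state, which is exactly the failure mode the paragraph attributes to misconfigured visibility.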
Parameter drift over the operating lifetime is bounded by adaptation policies attached to each parameter. Some parameters are frozen after initialization; others adapt within fixed envelopes; others adapt without bounds but only when triggered by validated learning events. The combination of these policies prevents the agent from gradually losing its calibration through accumulated minor adjustments while still permitting genuine adaptation where the architecture supports it. The adaptation policies are themselves part of the disclosed framework and are not delegated to the underlying learning algorithm.
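The three adaptation policies can be enforced at a single chokepoint through which every adjustment request passes. This is a minimal sketch, assuming the policy taxonomy maps one-to-one onto the three cases described:

```python
from enum import Enum

class AdaptPolicy(Enum):
    FROZEN = "frozen"            # fixed after initialization
    ENVELOPE = "envelope"        # adapts within a fixed envelope
    EVENT_TRIGGERED = "event"    # unbounded, gated by validated learning events

def apply_adjustment(value: float, proposed: float, policy: AdaptPolicy,
                     envelope: tuple = None, validated_event: bool = False) -> float:
    """Return the parameter value after an adjustment request,
    enforcing the attached adaptation policy."""
    if policy is AdaptPolicy.FROZEN:
        return value                       # all adjustments rejected
    if policy is AdaptPolicy.ENVELOPE:
        lo, hi = envelope
        return min(hi, max(lo, proposed))  # clamp to the envelope
    if policy is AdaptPolicy.EVENT_TRIGGERED:
        return proposed if validated_event else value
    raise ValueError(f"unknown policy: {policy}")
```

Routing every adjustment through one function is what keeps the policies in the disclosed framework rather than delegated to the learning algorithm: the learner can propose, but only the policy can commit.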
Alternative Embodiments
The ten conditions can be realized through many concrete substrates. Persistent state may be implemented as a vector store, as a symbolic knowledge base, or as a hybrid; the requirement is persistence and retrievability, not any particular storage technology. Affective modulation may be implemented as continuous-valued fields, as discrete mood states, or as a mixture model. Forecasting may be realized through learned simulators, through deterministic models, or through ensemble methods. Identity continuity may be realized through cryptographic anchors, through key-pair signatures, or through behavioral fingerprints.
Embodiments also vary in deployment topology. Some implementations colocate all ten subsystems within a single process; others distribute them across services connected by typed interfaces. Distributed embodiments require special attention to identity continuity and containment, since these conditions depend on boundaries that are easier to maintain within a single process. The architecture remains the same regardless of how the subsystems are physically arranged, provided the prescribed interactions are preserved.
A further class of embodiments substitutes weaker forms of certain conditions for explicit implementations. Capability awareness, for example, may be approximated by capability inference from past performance rather than by an explicit capability map. Such substitutions are within scope so long as the substituted mechanism produces signals that the dependent subsystems can consume in place of the canonical signals.
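The "signals the dependent subsystems can consume" criterion is naturally expressed as a structural interface that both the canonical mechanism and the weaker substitute satisfy. The sketch below illustrates this for capability awareness; the signal shape (`can_do`) and the 0.8 success-rate cutoff are assumptions for illustration:

```python
from typing import Protocol

class CapabilitySignal(Protocol):
    """The signal shape consumed downstream; any substitute must match it."""
    def can_do(self, task: str) -> bool: ...

class ExplicitCapabilityMap:
    """Canonical embodiment: a declared capability map."""
    def __init__(self, capabilities: set):
        self._caps = set(capabilities)

    def can_do(self, task: str) -> bool:
        return task in self._caps

class InferredCapability:
    """Weaker substitute: capability inferred from past performance."""
    def __init__(self, history: dict, min_rate: float = 0.8):
        self._history = history    # task -> (successes, attempts)
        self._min_rate = min_rate

    def can_do(self, task: str) -> bool:
        ok, n = self._history.get(task, (0, 0))
        return n > 0 and ok / n >= self._min_rate
```

Because both classes satisfy `CapabilitySignal`, confidence governance and governed interaction can consume either one without knowing which embodiment is behind the interface, which is the substitution-within-scope condition stated above.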
Embodiments also vary in the granularity at which the conditions are realized. A coarse-grained embodiment may implement each condition as a single subsystem with a single state register, while a fine-grained embodiment may decompose one condition into several specialized sub-subsystems. For instance, affective modulation may be split into separate fields for arousal, valence, surprise, and social orientation, each maintained by an independent estimator and fused at the consumption point. Fine-grained embodiments offer greater expressive range at the cost of additional architectural surface to manage; coarse-grained embodiments simplify management at the cost of reduced expressive range.
A category of hybrid embodiments combines a deterministic core for the integrity-tracking, identity-continuity, and capability-awareness conditions with learned components for the affective-modulation, forecasting, and coherence-control conditions. The deterministic core ensures that the most safety-relevant conditions cannot drift through training, while the learned components provide adaptive capacity in the conditions where adaptive behavior is desirable. The boundary between deterministic and learned is itself a governance artifact, and the architecture imposes a typed interface between them so that no learned component can write to a deterministic register except through a governed transition.
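The governed transition at the deterministic/learned boundary can be sketched as a validation chokepoint: a learned component proposes an update, and the deterministic core accepts only well-typed, in-scope fields. The register fields and validation rules below are hypothetical:

```python
def governed_transition(register: dict, proposal: dict,
                        allowed_fields: frozenset) -> dict:
    """Apply a learned component's proposed update to a deterministic
    register, rejecting out-of-scope fields and type mismatches."""
    for key, value in proposal.items():
        if key not in allowed_fields:
            raise KeyError(f"out-of-scope register field: {key}")
        if not isinstance(value, type(register[key])):
            raise TypeError(f"type mismatch for field: {key}")
    register.update(proposal)   # commit only after every field validates
    return register
```

Validating the whole proposal before committing any of it keeps the deterministic core in a consistent state even when a learned component misbehaves, which is the drift-resistance property the paragraph claims for the hybrid class.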
Composition
The ten conditions compose with each other to produce dynamics that no individual condition can produce. The combination of affective modulation and confidence governance produces emotion-tinged caution, where elevated arousal narrows the agent's willingness to act on uncertain claims. The combination of integrity tracking and identity continuity produces a sense of self over time, where the agent can refer to its own past commitments and recognize them as its own. The combination of forecasting and containment produces safe imagination, where the agent can entertain hypothetical futures without mistaking them for the present.
The combination of capability awareness and governed interaction produces honest self-presentation, where the agent declares to its counterparties only the capacities it can actually exercise, and renegotiates its declarations as its capability map updates. The combination of coherence control and integrity tracking produces reflective self-correction, where the agent observes its own normative drift and brings itself back toward declared standards without external intervention. Each pairwise composition adds a recognizable surface dimension to the agent's behavior, and the full set of compositions across all ten conditions is what produces relatability rather than mere competence.
The framework also composes with the affect-governance separation property and with the self-diagnosis mechanism. Affect-governance separation is one of the prescribed interactions: affective modulation influences confidence governance through the upstream channel only. Self-diagnosis is the mechanism by which coherence control is realized, and its five-axis structure maps onto a subset of the ten conditions.
Composition with audit infrastructure produces a further dimension of behavioral integrity. Each subsystem emits structured records that capture the state transitions relevant to its condition: integrity records register normative commitments and their later confirmations or corrections; capability records register changes in declared scope; identity records register session anchors and re-anchoring events. Audit infrastructure consumes these records and produces a longitudinal narrative of the agent's behavior that can be examined by human reviewers, regulators, or downstream automated checkers. The narrative is itself an element of human-relatable behavior, because part of what makes an agent relatable is the availability of a story that reviewers can follow when reasoning about its conduct.
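A minimal shape for these structured records is a flat, serializable envelope with a kind, a source subsystem, a timestamp, and a typed payload. The field names below are assumptions, not a disclosed schema:

```python
import json
import time

def emit_record(kind: str, subsystem: str, payload: dict) -> str:
    """Serialize one audit record; sorted keys keep output deterministic
    for downstream diffing and longitudinal narrative reconstruction."""
    record = {
        "kind": kind,            # e.g. "integrity", "capability", "identity"
        "subsystem": subsystem,  # emitting subsystem
        "timestamp": time.time(),
        "payload": payload,      # condition-specific state transition
    }
    return json.dumps(record, sort_keys=True)
```

Sorting records by timestamp and filtering by `kind` is then sufficient to reconstruct the longitudinal narrative the paragraph describes, without any subsystem needing to know who its reviewers are.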
Higher-order compositions arise when triples of conditions interact. The combination of forecasting, containment, and integrity tracking produces accountable planning, in which speculative plans are entertained within enclosure but the agent's commitments to a chosen plan are recorded against the integrity register and remain referenceable later. The combination of affective modulation, coherence control, and governed interaction produces calibrated rapport, in which the agent's responsiveness to a counterparty is shaped by current affect, monitored for drift, and bounded by the policy that governs the relationship. Such triples are not novel conditions but emergent surfaces produced by the prescribed interactions, and they are the dimensions along which a human observer most readily distinguishes a relatable agent from an unrelatable one.
Prior-Art Distinction
Prior approaches to human-like AI typically pursue behavioral mimicry through training on human-generated data. Such systems may produce surface behavior that resembles human interaction but lack the structural conditions that produce relatable behavior under novel circumstances. The present framework differs by specifying the architectural composition that produces relatable behavior structurally, independently of the training distribution. The ten-conditions framework also differs from cognitive-architecture proposals that enumerate functions without specifying the interactions among them; the prescribed interactions are themselves part of the disclosure.
Affective-computing systems that incorporate emotion as a feature lack the prescribed separation between affect and governance. Memory-augmented language models that incorporate persistent state lack the integrity tracking and identity continuity that distinguish remembered self from retrieved record. The framework integrates these capacities under a single architectural specification.
Agent frameworks that orchestrate multiple specialized models through a planner do not satisfy the conditions either, even when the orchestrated models individually exhibit some of the relevant capacities. The orchestration boundary is typically an invocation interface rather than a wired-in interaction, and the planner is free to invoke or omit any of the specialized models without architectural constraint. The present framework distinguishes architectural composition from orchestration: architectural composition wires the conditions together at the substrate level so that the prescribed interactions cannot be omitted by a planner choosing among invocation options.
Embodied-agent and robotics literature describes architectures with sensing, planning, acting, and reflecting layers, but typically without the prescribed asymmetric interactions among layers and without the affect-governance separation property. Subsumption architectures and behavior-based robotics implement layered control but do not preserve identity continuity across deployments or incorporate integrity tracking as a structural axis. The present framework differs by treating the structural conditions as load-bearing requirements rather than implementation conveniences, and by requiring that the interactions among them follow the directional structure prescribed by the disclosure rather than any topology that the implementer finds convenient.
Approaches based on theory-of-mind modeling for AI systems concentrate on the agent's representation of other agents' beliefs and goals; while related to the human-relatable surface, such approaches do not address the architectural composition that produces relatable behavior in the modeled agent itself. The ten-conditions framework operates on the modeled agent's own architecture rather than on its representations of others, and the two approaches are complementary rather than overlapping.
Disclosure Scope
The disclosure covers the ten conditions enumerated above, the prescribed interactions among them, the joint parameter tunings that make the interactions coherent, and the embodiment alternatives across substrates and deployment topologies. The disclosure does not depend on any particular learning algorithm, any particular hardware platform, or any particular application domain, and it contemplates substitutions of weaker forms of conditions where the substituted mechanism produces signals consumable by the dependent subsystems.
Scope extends to evaluation methodologies that test for the presence of each condition and for the integrity of the prescribed interactions, including ablation tests in which a single condition is disabled to observe the resulting behavioral deficit, perturbation tests in which an interaction is suppressed to observe loss of integration, and longitudinal tests that confirm identity continuity and integrity tracking over extended operation. The framework accommodates extension by additional conditions where future research identifies further architectural requirements, provided that any added condition is integrated into the existing interaction graph rather than appended as an isolated feature. The non-decomposability claim covers the existing ten conditions and applies, by analogy, to any extended set in which the interaction graph remains connected and prescribed.
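The ablation methodology described above can be sketched as a small harness: build the agent once per condition with that condition disabled, probe its behavior, and record the deficit relative to baseline. `build_agent` and `behavior_probe` are hypothetical callables standing in for an embodiment's factory and evaluation suite:

```python
def ablation_test(build_agent, conditions, behavior_probe) -> dict:
    """For each condition, disable it, re-probe behavior, and report
    the deficit relative to the fully wired baseline agent."""
    baseline = behavior_probe(build_agent(disabled=None))
    deficits = {}
    for cond in conditions:
        score = behavior_probe(build_agent(disabled=cond))
        deficits[cond] = baseline - score   # positive = behavioral deficit
    return deficits
```

The non-decomposability claim predicts a measurable deficit for every condition; a condition whose ablation produces no deficit would be evidence against its inclusion in the set, making the harness a falsification tool as well as a conformance check. A perturbation test would take the same shape with an interaction edge, rather than a condition, as the unit of ablation.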