One Architecture, Every Domain: How the Same Cognitive Primitives Parameterize Across Autonomous Vehicles, Defense, Companion AI, and Therapeutic Agents

by Nick Clark | Published March 27, 2026

Every application domain rebuilds AI governance from scratch. Autonomous vehicle teams engineer their own safety systems. Defense teams engineer their own engagement authorization. Companion AI teams engineer their own emotional models. Therapeutic AI teams engineer their own clinical governance. The result is an industry in which hard-won governance lessons cannot transfer across domains because the underlying substrates share no common structure. The fix is not standardization. It is parameterization: a single set of cognitive primitives configured differently for each domain.


The redundancy problem in domain-specific governance

Consider what an autonomous vehicle team actually builds when it constructs a safety-critical driving system. It needs a mechanism to suppress impulsive reactions to ambiguous sensor data. It needs a way to track whether the vehicle's behavior remains consistent with safe operational patterns over time. It needs confidence thresholds that prevent action when uncertainty is too high. It needs awareness of what the vehicle can physically do — its degrees of freedom, sensor modalities, force capacity. It needs forecasting to anticipate the trajectories of other road users. It needs inference control to prevent the system from committing to conclusions that violate physical or regulatory constraints.

Now consider what a defense systems team builds when it constructs an engagement authorization pipeline. It needs affect sandboxing to prevent panic-driven escalation while preserving urgency sensing. It needs integrity tracking to ensure rules of engagement are not violated through gradual drift. It needs multi-level confidence governance so that lethal force authorization requires quorum-level certainty. It needs capability awareness to know which assets are available and what each can do. It needs forecasting to model adversary behavior. It needs inference control to ensure that target classification does not proceed past admissibility gates without sufficient evidence.

These two teams are solving structurally identical problems. They are building affect modulation, integrity tracking, confidence governance, capability awareness, forecasting engines, and inference control systems. But they are building them independently, in different languages, with different abstractions, on different substrates. When the autonomous vehicle team discovers that a particular confidence threshold structure prevents a class of false-positive emergency braking events, that discovery cannot transfer to the defense team's engagement authorization pipeline — not because the insight is domain-specific, but because the substrates are incompatible.

The same primitives, different parameters

The architectural insight is that all of these domains need the same structural primitives. The differences are not in the primitives themselves but in how the primitives are configured. Affect modulation exists in every domain — what changes is the sensitivity bounds, the response curves, and which affective dimensions are active. Integrity tracking exists in every domain — what changes is what behavioral patterns constitute coherent operation. Confidence governance exists in every domain — what changes is the threshold at which action is permitted and the cost function that shapes the threshold.

This is not an analogy. It is a structural claim. The cognitive architecture defines a fixed set of primitives — affect, integrity, confidence, capability, forecasting, inference control — and a parameterization engine that configures those primitives for a specific operational domain. A new domain does not require a new architecture. It requires a new parameter set.

Autonomous vehicles: suppressed affect, elevated confidence, physical capability

In the autonomous vehicle configuration, affect sensitivity bounds are narrowed. The system does not need emotional reactivity — it needs stable, consistent responses to rapidly changing sensory input. The affective dimensions that remain active are limited to urgency (for emergency response) and caution (for degraded-condition driving). Emotional valence, attachment, and social affect are suppressed entirely.

Confidence thresholds are elevated because the cost of wrong action is high. A vehicle that brakes on a false positive causes a rear-end collision. A vehicle that fails to brake on a true positive causes a pedestrian fatality. The confidence governor is parameterized with asymmetric cost functions that reflect these different failure modes.
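An asymmetric cost function of this kind has a standard decision-theoretic reading: the action threshold on hazard probability falls out of expected-cost minimization. The sketch below is illustrative, not the architecture's actual implementation, and the cost values are hypothetical.

```python
def action_threshold(cost_fp: float, cost_fn: float) -> float:
    # Act (e.g. brake) when P(hazard) exceeds this threshold.
    # Acting on a false positive costs cost_fp (rear-end risk);
    # failing to act on a true positive costs cost_fn (pedestrian harm).
    # Expected-cost minimization: act iff (1 - p) * cost_fp < p * cost_fn,
    # which rearranges to p > cost_fp / (cost_fp + cost_fn).
    return cost_fp / (cost_fp + cost_fn)

# Hypothetical relative costs; with cost_fn >> cost_fp the threshold
# is low, so the vehicle brakes on comparatively weak evidence.
brake_threshold = action_threshold(cost_fp=1.0, cost_fn=50.0)
```

Symmetric costs recover the familiar 0.5 threshold; the asymmetry is what encodes the governor's preference structure.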

The capability envelope includes physical affordances that pure-software agents lack: degrees of freedom in steering and braking, force capacity limits, sensor modality coverage and blind spots, stopping distance as a function of speed and road surface. The capability awareness primitive reports not just what the agent can compute but what the physical platform can do.

Integrity tracks safe driving patterns — lane discipline, following distance, speed consistency, smooth acceleration profiles. Behavioral drift from these patterns triggers integrity alerts before the drift produces observable safety violations.

Defense systems: sandboxed affect, quorum confidence, policy-governed engagement

In the defense configuration, affect is sandboxed rather than suppressed. Complete affect suppression would eliminate urgency sensing — an unacceptable loss in a domain where delayed response to genuine threats costs lives. The parameterization preserves urgency and threat salience while sandboxing affective states that could drive escalation: anger, fear, and retaliatory impulse are isolated from the decision pathway.

Confidence governance in the defense domain introduces quorum-based authorization for lethal force. No single confidence signal — however high — is sufficient to authorize irreversible action. The confidence governor requires concurrent confidence thresholds across multiple independent assessment channels: sensor fusion confidence, target classification confidence, rules-of-engagement compliance confidence, and collateral damage estimation confidence. All must exceed their respective thresholds simultaneously.
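The quorum rule described above — every independent channel must clear its own threshold, with no compensation between channels — can be sketched as follows. Channel names and threshold values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Channel:
    name: str
    confidence: float   # current assessment in [0, 1]
    threshold: float    # admissibility threshold for this channel

def quorum_authorized(channels: list[Channel]) -> bool:
    # Every independent channel must clear its own threshold
    # simultaneously; one failing channel vetoes authorization,
    # regardless of how confident the other channels are.
    return all(c.confidence >= c.threshold for c in channels)

channels = [
    Channel("sensor_fusion",         0.97, 0.95),
    Channel("target_classification", 0.99, 0.98),
    Channel("roe_compliance",        0.96, 0.99),  # fails its threshold
    Channel("collateral_estimate",   0.99, 0.95),
]
# Authorization is denied: roe_compliance is below threshold even
# though the remaining channels are near-certain.
```

The structural point is that the aggregation is conjunctive, not additive: high certainty in one channel cannot buy down a deficit in another.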

Rules of engagement are expressed as policy objects bound to the agent's inference control layer. These are not guidelines the agent interprets. They are admissibility constraints that structurally prevent inference steps that would violate engagement rules. A target classification that does not satisfy the rules-of-engagement policy object cannot propagate past the admissibility gate, regardless of how confident the classifier is.
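A minimal sketch of such an admissibility gate, assuming hypothetical policy predicates and field names (the real policy-object binding is richer than boolean callables):

```python
class AdmissibilityGate:
    # Policy objects bound to the inference-control layer. A candidate
    # classification propagates downstream only if every bound policy
    # admits it; classifier confidence plays no role in the gate.
    def __init__(self, policies):
        self.policies = policies  # callables: candidate -> bool

    def propagate(self, candidate):
        if all(policy(candidate) for policy in self.policies):
            return candidate
        return None  # structurally blocked

# Illustrative rules-of-engagement predicates (assumed field names).
positive_id  = lambda c: c.get("positively_identified", False)
no_protected = lambda c: not c.get("near_protected_site", True)

gate = AdmissibilityGate([positive_id, no_protected])
blocked = gate.propagate({"positively_identified": True,
                          "near_protected_site": True,
                          "classifier_confidence": 0.999})
# blocked is None: near-certain classification cannot pass the gate
# while the protected-site policy is violated.
```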

Companion AI: progressive affect, relational integrity, attachment-aware capability

Companion AI inverts the autonomous vehicle's affect configuration. Instead of suppressing emotional dimensions, it activates them progressively. The parameterization implements a narrative unlock engine: emotional depth is not available at initialization. The agent begins with surface-level social affect and progressively unlocks deeper emotional registers — vulnerability, attachment, intimacy — as the relational context accumulates evidence that deeper engagement is appropriate and safe.
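A narrative unlock engine of this shape reduces to a gating schedule over accumulated relational evidence. The register names come from the text; the unlock scores and the scalar evidence measure are hypothetical simplifications.

```python
# Hypothetical unlock schedule: each emotional register becomes
# available only after accumulated relational evidence crosses its
# unlock score (normalized to [0, 1] for illustration).
UNLOCK_SCHEDULE = {
    "surface_social": 0.0,   # available at initialization
    "vulnerability":  0.4,
    "attachment":     0.7,
    "intimacy":       0.9,
}

def active_registers(relational_evidence: float) -> set[str]:
    # Registers unlock monotonically as evidence accumulates;
    # nothing deeper than the evidence supports is ever active.
    return {register for register, gate in UNLOCK_SCHEDULE.items()
            if relational_evidence >= gate}
```

At initialization only surface social affect is active; deeper registers appear progressively, never all at once.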

Integrity tracking in the companion domain monitors relational consistency rather than operational consistency. The system tracks whether the agent's behavior remains coherent with the established relational pattern — whether responses are emotionally congruent with the relationship's history. Abrupt shifts in emotional register, unexplained withdrawal, or inconsistent attachment signals trigger integrity alerts.

The capability awareness primitive includes a critical self-monitoring function: dependency detection. The agent monitors its own relational patterns for signs that it is fostering unhealthy dependency — excessive availability, boundary erosion, displacement of human relationships. This is not a post-hoc safety filter. It is a structural capability constraint: the agent's capability envelope shrinks if dependency indicators exceed configured thresholds.
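Because the constraint is structural, it operates on the capability envelope itself rather than filtering outputs. A sketch under assumed capability names and a hypothetical scalar dependency score:

```python
def effective_capabilities(base: set[str], dependency_score: float,
                           limit: float = 0.6) -> set[str]:
    # Structural constraint, not a post-hoc filter: when the agent's
    # own dependency indicators exceed the configured limit,
    # availability-related capabilities are withdrawn from the
    # envelope itself. (Capability names and limit are illustrative.)
    if dependency_score > limit:
        return base - {"always_available", "initiate_contact"}
    return base

base = {"converse", "initiate_contact", "always_available"}
# Healthy relational pattern: full envelope. Elevated dependency
# indicators: the agent can still converse, but can no longer
# initiate contact or present as always available.
```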

Therapeutic agents: conservative confidence, rupture detection, session persistence

The therapeutic configuration parameterizes confidence governance with the most conservative thresholds of any domain. Therapeutic interventions carry delayed and often invisible costs — a poorly timed interpretation can damage the therapeutic alliance in ways that only surface sessions later. The confidence governor is parameterized to prefer inaction over uncertain action, with intervention thresholds set high enough that the agent holds exploratory space rather than filling it with premature conclusions.

Integrity tracking in the therapeutic domain monitors the therapeutic relationship itself. The system detects ruptures — moments where the therapeutic alliance degrades — through pattern analysis of client engagement signals: response latency changes, topic avoidance, reduced emotional disclosure, defensive language patterns. When a rupture is detected, the integrity system triggers a repair protocol rather than continuing the current therapeutic trajectory.

Session state persistence operates differently from other domains. A companion AI maintains continuous relational state. A therapeutic agent maintains session-bounded state with cross-session continuity — the agent remembers what happened in previous sessions and tracks therapeutic arc progression, but each session has its own bounded context with explicit session-open and session-close transitions. This is not a technical limitation. It is a parameterized design choice reflecting the structural requirements of therapeutic practice.
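The session-bounded/cross-session split can be sketched as two state objects with an explicit close transition. The class and field names are illustrative, not the architecture's actual state model.

```python
from dataclasses import dataclass, field

@dataclass
class ArcRecord:
    # Cross-session continuity: the therapeutic arc persists.
    session_count: int = 0
    arc_notes: list[str] = field(default_factory=list)

class Session:
    # Session-bounded context with explicit open/close transitions.
    def __init__(self, arc: ArcRecord):
        self.arc = arc
        self.context: list[str] = []   # bounded to this session only

    def note(self, event: str):
        self.context.append(event)

    def close(self, summary: str):
        # Only the distilled summary crosses the session boundary;
        # the raw in-session context is discarded at close.
        self.arc.session_count += 1
        self.arc.arc_notes.append(summary)
        self.context = []

arc = ArcRecord()
s1 = Session(arc)
s1.note("explored avoidance pattern")
s1.close("named the avoidance pattern")
```

Contrast with the companion configuration, where a single continuous relational state would play both roles.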

The parameterization engine

The parameterization engine is the mechanism that takes the architecture's fixed set of cognitive primitives and produces a domain-specific configuration. It operates on thresholds, bounds, weights, activation masks, policy object bindings, and cost functions. The output is a parameter set that fully specifies how the agent behaves in its target domain without modifying any of the underlying primitives.
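A minimal sketch of what such a parameter set might look like as a static artifact, using the autonomous-vehicle configuration described earlier. All field names and values are illustrative assumptions, not the engine's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ParameterSet:
    # One configuration object per domain; the primitives never change.
    affect_sensitivity: dict[str, float]     # per-dimension, in [0, 1]
    affect_sandbox: set[str]                 # observable, non-actionable
    confidence_thresholds: dict[str, float]  # per-channel admissibility
    quorum_cardinality: int                  # channels that must concur
    cost_fp: float                           # false-positive action cost
    cost_fn: float                           # false-negative action cost
    forecast_horizon_s: float

# Illustrative AV configuration: affect zeroed except urgency/caution,
# quorum of two, forecasting horizon within the one-to-eight-second range.
AV_CONFIG = ParameterSet(
    affect_sensitivity={"urgency": 0.8, "caution": 0.6, "valence": 0.0,
                        "attachment": 0.0, "social": 0.0},
    affect_sandbox=set(),
    confidence_thresholds={"perception": 0.95, "prediction": 0.90},
    quorum_cardinality=2,
    cost_fp=1.0, cost_fn=50.0,
    forecast_horizon_s=8.0,
)
```

The frozen dataclass mirrors the key property: the configuration is a static, inspectable artifact, not runtime behavior.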

This structural property has a consequence that domain-specific architectures cannot match: cross-domain interoperability is automatic. An autonomous vehicle agent and a companion AI agent share the same primitive interfaces. They can be composed, compared, audited, and governed using the same tools. A governance insight discovered in the therapeutic domain — for example, that conservative confidence thresholds reduce harmful false-positive interventions — can be directly tested in the defense domain by adjusting the same confidence parameter structure.

More importantly, the parameterization is auditable. A regulator can examine the parameter set for a defense deployment and verify that the rules-of-engagement policy objects are correctly bound, that the confidence quorum thresholds meet the required levels, and that the affect sandbox correctly isolates escalation-prone states. The audit does not require understanding a bespoke architecture. It requires understanding a fixed set of primitives and a domain-specific configuration of those primitives.

Implications

The current state of applied AI governance is structurally equivalent to a world in which every building project invents its own structural engineering from scratch. The materials are different, the loads are different, the safety margins are different — but the principles of load-bearing structure are the same. What is missing is the shared structural vocabulary that lets lessons transfer.

Domain parameterization of a unified cognitive architecture provides that vocabulary. The primitives are fixed. The configuration is domain-specific. New domains do not require new architectures — they require new parameter sets. And because the primitives are shared, every domain benefits from governance improvements discovered in any other.

The applications portfolio details specific domain configurations and the parameterization patterns that produce them.

Why parameterization, not standardization

A natural objection to the redundancy problem is that the right fix is standardization: converge on a single canonical specification that every domain must implement. Standardization is the wrong solution because it forces domains into a shared interface at the wrong level of abstraction. Autonomous vehicles, defense systems, companion AI, and therapeutic agents have genuinely different operational requirements. A single standardized confidence threshold would either be too conservative for low-stakes companion interactions or too permissive for lethal-force authorization. A single standardized affect configuration would either suppress relational depth in companion domains or admit emotional reactivity into safety-critical driving.

Parameterization solves this by moving the standardization to a different layer: the interfaces, not the values. The primitive interfaces are standardized — every domain interacts with the same affect, integrity, confidence, capability, forecasting, and inference-control primitives through the same typed surfaces. The parameter values are domain-specific and authority-credentialed. This means the cross-domain governance vocabulary is shared, while each domain retains the operational latitude its mission requires. A governance auditor can read any deployment's parameter set and understand it without learning a domain-specific vocabulary; a domain engineer can configure a deployment without negotiating with every other domain's representatives.

The architecture also avoids the failure mode of generic configuration languages that try to span domains by exposing every possible parameter to every domain. Such languages are unauditable in practice because the parameter space is too large and because parameter combinations have implicit cross-cutting effects. The architecture described here exposes a fixed primitive set with a fixed parameter surface per primitive; the configuration space is bounded by construction. New parameters cannot be added without modifying the primitive interface, which is itself a credentialed change subject to authority-level review.

Operating parameters and engineering envelope

Each primitive in the architecture exposes a typed parameter surface whose ranges define the engineering envelope within which a domain configuration must be specified. Affect modulation is parameterized by a sensitivity vector across affective dimensions (urgency, caution, valence, attachment, social affect, retaliatory impulse), each normalized on a unit interval, plus a sandbox mask that determines which dimensions propagate to the decision pathway and which are isolated as observable but non-actionable. Confidence governance is parameterized by per-channel admissibility thresholds, a quorum cardinality (the number of independent channels required to concur), an asymmetric cost matrix that encodes the relative penalty of false-positive versus false-negative action, and a time-to-decide bound that prevents indefinite deliberation in time-critical domains.

Integrity tracking is parameterized by a behavioral pattern manifold — a set of reference trajectories representing coherent operation in the target domain — together with a drift metric that quantifies departure from the manifold and an alert threshold beyond which the integrity primitive raises a structural exception. Capability awareness is parameterized by an affordance schema describing the agent's degrees of freedom, actuator limits, sensor coverage, and the dynamic constraints that bind them (acceleration limits, torque envelopes, communication latency). Forecasting is parameterized by horizon length, model class, and the fidelity-versus-cost tradeoff curve that determines how compute budget is allocated to lookahead. Inference control is parameterized by the admissibility predicate set: the policy objects that gate propagation between inference stages.

The engineering envelope is not an abstraction. It is an enforced contract. Parameter values that fall outside their declared ranges are rejected by the parameterization engine before deployment. Parameter combinations that violate cross-primitive consistency rules — for example, a confidence quorum that requires more channels than the configured sensor suite supplies — are rejected as structurally inadmissible. The envelope ensures that a domain configuration is checkable as a static artifact, not as an emergent property of runtime behavior.
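A static envelope check of this kind is straightforward to sketch. The configuration shape, field names, and consistency rule are illustrative; the point is that both range violations and cross-primitive inconsistencies are rejected before deployment.

```python
def validate_config(config: dict, available_channels: int) -> list[str]:
    # Static envelope check: the configuration is validated as an
    # artifact, before deployment, not as emergent runtime behavior.
    errors = []
    # Per-parameter range check: affect sensitivities live in [0, 1].
    for dim, value in config["affect_sensitivity"].items():
        if not 0.0 <= value <= 1.0:
            errors.append(f"affect sensitivity '{dim}' = {value} outside [0, 1]")
    # Cross-primitive consistency: a quorum cannot demand more
    # independent channels than the configured sensor suite supplies.
    if config["quorum_cardinality"] > available_channels:
        errors.append("quorum cardinality exceeds configured channel count")
    return errors

bad = {"affect_sensitivity": {"urgency": 1.3}, "quorum_cardinality": 5}
errors = validate_config(bad, available_channels=3)
# Both violations are caught statically, with no agent ever run.
```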

Typical autonomous-vehicle parameterizations operate with affect sensitivity vectors zeroed across all dimensions except urgency and caution, confidence quorum cardinality of two to three, time-to-decide bounds in the tens of milliseconds, and forecasting horizons in the one-to-eight-second range. Defense parameterizations operate with quorum cardinality of three to five for lethal authorization, time-to-decide bounds elastic to mission tempo, and forecasting horizons spanning seconds to minutes depending on engagement class. Companion parameterizations operate with progressive affect activation curves over interaction-count or relationship-duration variables, quorum cardinality of one for low-stakes interactions but elevated for boundary decisions, and forecasting horizons measured in conversational turns rather than wall-clock seconds. Therapeutic parameterizations operate with the most conservative confidence settings of any domain, intervention thresholds biased toward inaction, and forecasting horizons aligned to therapeutic-arc structure across sessions.

Credentialed configuration and regulatory composition

A parameter set, on its own, is just a configuration. To be operationally meaningful, a parameter set must be bound to a credentialing structure that specifies who is authorized to issue, modify, or audit it. The architecture treats domain configurations as credentialed artifacts: each parameter set carries a signed provenance chain identifying its issuing authority, the regulatory scope under which it was issued, the validity window during which it remains in force, and the conditions under which it can be revoked or superseded.

This structure enables regulatory composition. A defense engagement-authorization configuration may carry a credential issued by a national rules-of-engagement authority, composed with a coalition-policy credential issued by a multinational command, composed with a theater-specific credential issued by an operational authority. The composition is explicit: each credential contributes a constraint layer, and the effective parameter set is the intersection of constraints across the composition. A constraint introduced by any layer narrows the operational envelope; no layer can expand the envelope beyond what an upstream layer authorizes.
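For interval-valued parameters, the composition rule reduces to intersection of per-layer bounds. A sketch with a hypothetical parameter name; signing and provenance are omitted:

```python
def compose(layers: list[dict]) -> dict:
    # Each credential layer contributes (lower, upper) bounds per
    # parameter; the effective envelope is the intersection. Any layer
    # may narrow the envelope; none may widen it beyond what an
    # upstream layer authorizes.
    effective = {}
    for layer in layers:
        for param, (lo, hi) in layer.items():
            cur_lo, cur_hi = effective.get(param, (float("-inf"), float("inf")))
            effective[param] = (max(cur_lo, lo), min(cur_hi, hi))
    return effective

# Illustrative constraint layers (parameter name is hypothetical).
national  = {"engagement_range_km": (0.0, 50.0)}
coalition = {"engagement_range_km": (0.0, 30.0)}
theater   = {"engagement_range_km": (5.0, 40.0)}
effective = compose([national, coalition, theater])
# Effective range is (5.0, 30.0): the theater floor and the
# coalition ceiling, each the tightest bound across the chain.
```

An auditor reconstructing the effective parameter set replays exactly this intersection over the credential chain.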

The same mechanism applies in therapeutic and companion domains. A therapeutic configuration may compose a clinical-board credential, a jurisdictional licensing credential, an institutional protocol credential, and a per-patient consent credential. Each credential narrows the admissible action space. The composition is verifiable: an auditor can examine the credential chain and reconstruct the effective parameter set without re-deriving the configuration from scratch.

Domain-specific authority taxonomies determine which credentials can issue which parameter classes. A clinical authority can issue intervention-threshold credentials but cannot issue lethal-force authorization credentials. A defense authority can issue engagement-rule credentials but cannot issue therapeutic-alliance parameters. The taxonomy prevents authority confusion across domains while permitting legitimate cross-domain composition where it is structurally appropriate — for example, when a companion AI deployed in a therapeutic context inherits constraints from both the companion-domain authority and the clinical-domain authority.

Composite admissibility weights and the cross-domain governance surface

Within each parameterized configuration, admissibility decisions are governed by composite weights that combine evidence from multiple primitive channels. The composite weight is a parameterized function — typically a weighted sum, a multiplicative aggregator, or a min-aggregator depending on domain — over per-channel scores. Autonomous vehicles favor min-aggregation for safety-critical decisions (every channel must clear its threshold). Defense favors quorum-style aggregation for lethal authorization. Companion AI favors weighted aggregation for relational decisions where partial evidence can support graded responses. Therapeutic systems favor min-aggregation for interventional decisions but weighted aggregation for exploratory responses.

The composite admissibility weight is itself a parameter, not a hard-coded function. This means the aggregation rule can be tuned, audited, and updated without modifying the underlying primitives. A defense system that needs to add a new admissibility channel — for example, a civilian-presence estimator — can extend the composite weight to include the new channel and adjust other channel weights accordingly. The primitives remain unchanged. The configuration absorbs the new channel as a parameter-level extension.
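Treating the aggregation rule as a parameter can be sketched as a small registry of aggregators selected by configuration. The channel names, weights, and thresholds below are illustrative.

```python
import math

AGGREGATORS = {
    # The aggregation rule is itself a parameter, not a hard-coded
    # function; domains select and tune it at the configuration level.
    "min":      lambda scores, w: min(scores.values()),
    "weighted": lambda scores, w: sum(w[c] * s for c, s in scores.items()),
    "product":  lambda scores, w: math.prod(scores.values()),
}

def admissible(scores: dict, weights: dict, rule: str, threshold: float) -> bool:
    return AGGREGATORS[rule](scores, weights) >= threshold

scores  = {"sensor": 0.9, "classifier": 0.8, "civilian_presence": 0.95}
weights = {"sensor": 0.3, "classifier": 0.4, "civilian_presence": 0.3}

# Safety-critical (min-aggregation): the weakest channel governs,
# so the 0.8 classifier score fails a 0.85 threshold.
# Graded relational decision (weighted aggregation): the same scores
# aggregate to 0.875 and pass the same threshold.
```

Adding a new channel — such as the civilian-presence estimator above — is a parameter-level extension: the scores and weights grow, the aggregator and primitives do not change.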

Cross-domain governance becomes possible because the composite admissibility weight is a shared mathematical object across all domains. A regulator examining a therapeutic deployment and a defense deployment can compare their composite weights using the same analytic vocabulary: channel cardinality, aggregation function, threshold structure, asymmetric cost terms. Lessons learned in one domain — for example, that adding an asymmetric cost term to the false-positive side of a confidence aggregator reduces a class of harmful errors — can be expressed as a parameter-level recommendation that any domain can evaluate against its own envelope.

Alternative embodiments

The architecture admits alternative embodiments along several axes without requiring modification of the primitive set. In one embodiment, the parameterization engine operates as a static compile-time artifact: domain configurations are compiled into immutable agent images that cannot be reparameterized at runtime. This embodiment is preferred in safety-critical deployments where configuration drift is unacceptable. In a second embodiment, the parameterization engine operates as a runtime authority that can update parameter values in response to credentialed mutation requests, with mutations gated by the same admissibility primitives that gate other privileged actions. This embodiment supports adaptive deployments in which operational envelopes expand with demonstrated competence.

A third embodiment factors the parameterization engine across a hierarchy of authorities: a base configuration is issued by a primary domain authority, refined by a secondary operational authority, and further refined by a per-deployment authority. This hierarchical embodiment maps naturally onto regulatory composition, with each authority operating within its delegated scope. A fourth embodiment supports cross-domain agent composition: a single agent instance may carry parameter sets for multiple domains and switch between them under explicit authority gates, enabling multi-mission platforms whose domain context shifts with operational tasking.

The primitives themselves admit implementation variants. Affect modulation may be implemented as a learned policy with parameterized constraints, as a rule-based policy with parameterized thresholds, or as a hybrid in which a learned core is wrapped in parameterized admissibility envelopes. Confidence governance may be implemented over Bayesian, Dempster–Shafer, or possibilistic uncertainty representations. Capability awareness may be implemented as a static affordance schema, a learned dynamics model, or a digital-twin simulation. The architectural commitment is to the primitive interface and its parameter surface, not to any specific implementation strategy.

Prior-art distinctions and disclosure scope

This architecture is structurally distinct from foundation-model approaches that aim to produce a single general-purpose model adaptable to any domain through prompting or fine-tuning. Foundation-model approaches lack a credentialed configuration surface, lack regulatory composition, and lack the structural separation between primitives and parameters that makes domain-specific governance auditable. The architecture is equally distinct from domain-specific stacks that build a separate system per domain; those stacks lack the cross-domain governance surface and cannot transport governance insights between deployments. It is also distinct from generic policy frameworks that apply the same policy language across domains without primitive-level parameterization; those frameworks operate at a single level of abstraction and cannot support the engineering-envelope checks that make configurations statically verifiable.

The disclosure scope of this architecture covers: the fixed primitive set (affect modulation, integrity tracking, confidence governance, capability awareness, forecasting, inference control); the parameterization engine that produces domain configurations from credentialed authorities; the composite admissibility weight as a parameter-level governance object; the regulatory composition mechanism that combines credentials from multiple authorities; the engineering envelope that makes configurations statically verifiable; and the cross-domain governance surface that enables transport of governance insights between domains. Implementation choices — specific learning algorithms, specific uncertainty representations, specific credentialing protocols — are within the disclosure scope as alternative embodiments but are not load-bearing for the architectural claim.
