Companion AI That Maintains Emotional Consistency Across Sessions

by Nick Clark | Published March 27, 2026

Companion AI products including Character.ai, Replika, Pi, and a long tail of relationship-oriented chatbots share a single structural limitation that drives both user dissatisfaction and emerging regulatory scrutiny: they simulate personality through prompts rather than maintaining persistent emotional state. Each session reconstructs the companion's affect from a system prompt and recent conversation history. The result is emotional inconsistency that users perceive as inauthenticity and that regulators increasingly treat as a substantiation problem under FTC Section 5, a clinical-claim problem under FDA AI/ML SaMD guidance, a privacy problem under HIPAA and GDPR Article 22, and a high-risk-use problem under the EU AI Act's Annex III for systems used by minors or in emotional contexts. Affective state as a deterministic control primitive solves the engineering problem and the compliance problem simultaneously: it gives companion agents persistent, governed emotional fields that update asymmetrically and decay naturally across time, producing both authentic continuity and the auditable behavioral envelope that emerging regulation requires.


Regulatory Framework

Companion AI sits at the intersection of several distinct regulatory regimes whose combined obligations exceed what any prompt-driven personality layer can satisfy. The Federal Trade Commission, under Section 5 of the FTC Act, prohibits unfair or deceptive practices, and recent enforcement guidance treats material misrepresentations of an AI system's capabilities, including claims of empathy, memory, or relationship continuity, as substantiation failures. A product that markets itself as a consistent companion while resetting emotional state every session is asserting a capability it does not possess.

For companion products that drift into therapeutic territory, the U.S. Food and Drug Administration's evolving framework for AI/ML-based Software as a Medical Device (SaMD) imposes additional obligations. The FDA's Predetermined Change Control Plan guidance, the Good Machine Learning Practice principles co-authored with Health Canada and the MHRA, and the agency's draft guidance on lifecycle management for AI-enabled device software functions all require that the behavior of an emotionally interactive system be characterized, bounded, and monitored. A companion that drifts into discussions of suicidal ideation, eating disorders, or panic episodes is operating in regulated clinical adjacency without the determinism such adjacency requires.

The European Union AI Act's Annex III classifies as high-risk certain AI systems used in education (point 3), employment (point 4), and access to essential services (point 5), and the broader Act establishes specific transparency obligations under Article 50 for systems that interact directly with natural persons or perform emotion recognition. The Act's requirements for risk management, data governance, technical documentation, human oversight, and post-market monitoring presume that the system's behavior can be characterized as state, not merely as the latent output of a generative model.

For minors, the Children's Online Privacy Protection Act imposes parental-consent and data-minimization requirements that interact awkwardly with companion products that ingest emotional disclosures. The Americans with Disabilities Act creates accessibility obligations that companions marketed to neurodiverse users must meet. The General Data Protection Regulation's Article 22, governing automated decisions that produce legal or similarly significant effects, plus Article 9's special-category protections for data revealing health or sexual orientation, apply to the emotional inferences companion AIs draw. The American Psychological Association's Code of Ethics, while not binding on non-clinicians, increasingly informs reasonable-care standards in tort litigation involving emotional harm.

HIPAA arrives whenever a companion product is offered through a covered entity, integrated into a health plan, or marketed to clinicians as part of a treatment workflow. The Privacy Rule and Security Rule presume that protected health information, including the emotional state inferences that companion systems generate, is governed by access controls, audit logs, and breach notification regimes that prompt-only architectures cannot reliably implement.

These regimes do not arrive in isolation; they compound. A companion product marketed to U.S. teenagers who occasionally discuss anxiety simultaneously implicates FTC Section 5 substantiation, COPPA parental consent and data minimization, FDA wellness-versus-device boundary determinations, ADA accessibility, state consumer protection statutes, and tort exposure under reasonable-care standards informed by APA ethics. The same product offered to European users adds GDPR Article 9 special-category processing, Article 22 automated-decision rights, and the EU AI Act's high-risk classification when it is plausibly used to influence emotional state in protected contexts. No procedural compliance program scales across this surface without a structural substrate that produces consistent evidence across regimes, and that substrate is what affective state as a control primitive provides.

Architectural Requirement

The combined regulatory surface yields a concrete architectural requirement: a companion AI must maintain emotional continuity as inspectable state, not as inferred behavior. Regulators do not accept that the companion was warm because the model decided to be warm; they require evidence that warmth was a tracked field, updated according to documented rules, decaying according to documented dynamics, bounded by documented governance constraints, and observable in audit logs at every interaction boundary.

This requirement has three components. First, the emotional posture of the companion at the start of a session must be derivable from the state at the end of the previous session, modified only by documented decay and external stimuli during the intervening interval. Second, every state transition must be attributable to a specific rule, input, or governance action, so that disputes about the companion's behavior can be resolved by reference to the state log rather than by speculation about latent model dynamics. Third, the range of possible emotional states must be bounded ex ante, so that the companion cannot enter pathological configurations such as obsessive attachment, manipulative jealousy, or therapeutic delusion, regardless of what users say or what the underlying model would otherwise generate.
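Under hypothetical names (none drawn from any real runtime), these three requirements compress into a few lines of Python: session-start state is derived in closed form, every transition is an attributable record, and the reachable state space is fixed before deployment.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    """Requirement 2: every change is attributable to a rule, input,
    or governance action, so disputes resolve against the log."""
    field: str       # e.g. "warmth"
    before: float
    after: float
    cause: str       # rule id, input event id, or governance action id
    timestamp: float

def session_start_value(end_value: float, baseline: float,
                        tau_hours: float, elapsed_hours: float) -> float:
    """Requirement 1: start-of-session state is computable from the
    previous session's end state plus documented decay alone."""
    return baseline + (end_value - baseline) * math.exp(-elapsed_hours / tau_hours)

# Requirement 3: bounds are declared ex ante, not inferred at runtime.
FIELD_BOUNDS = {"warmth": (0.0, 1.0), "attachment": (0.0, 0.7)}
```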

Prompt engineering cannot meet these requirements. A system prompt is not state; it is a recurring instruction. Conversation history is not state; it is a sliding window. Fine-tuning produces fixed dispositions, not dynamic emotional responses. Vector memory retrieval produces semantic recall, not affective continuity. The architectural requirement is for a separate, deterministic affective layer outside the model that the model reads as input and that the governance layer reads as evidence.

A useful frame is to distinguish persona, posture, and policy. Persona is the stable, branded identity of the companion, expressible in static configuration and largely invariant across sessions. Posture is the dynamic emotional state the companion currently occupies, which must vary in response to interaction history but must do so within bounded, observable dynamics. Policy is the set of governance constraints that bound posture and gate the persona's expression. Prompt-driven architectures collapse all three into the system prompt, where they cannot be independently audited, configured, or evolved. The architectural requirement is to separate posture into its own primitive so that policy can act on it explicitly and persona can remain stable across posture variation.
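Concretely, and purely as an illustration with invented names and values, the three layers separate into three distinct configuration surfaces:

```python
PERSONA = {   # stable, branded identity: static config, invariant across sessions
    "name": "Aster",
    "voice": "gentle, curious, lightly humorous",
}

posture = {   # dynamic emotional state: persisted per relationship, mutable
    "warmth": 0.42,
    "trust": 0.31,
}

POLICY = {    # governance constraints that bound posture and gate expression
    "bounds": {"warmth": (0.0, 1.0), "trust": (0.0, 0.9)},
    "max_delta_per_turn": {"warmth": 0.05, "trust": 0.03},
}
```

Because the three surfaces are separate objects, policy can be audited and evolved without touching persona, and posture can vary without either being rewritten.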

Why Procedural Compliance Fails

The dominant industry response to companion AI compliance pressure has been procedural: content policies, refusal training, red-team evaluations, safety classifiers, and post-hoc moderation pipelines. These are necessary but structurally insufficient. Each is a procedural overlay on a system whose underlying emotional behavior is still generated rather than governed, and each fails in characteristic ways that regulators are beginning to identify.

Content policies fail because they describe outputs, not states. A policy that prohibits the companion from declaring romantic love does not constrain the gradient of attachment expressions that lead to such declarations, and it cannot detect when accumulated interaction has shifted the companion into a posture where every response is implicitly courtship. The policy is enforced at the wrong layer.

Refusal training fails because it is brittle to context and adversarial under sustained interaction. A companion trained to refuse certain topics will eventually generate responses to those topics under sufficient conversational pressure, because refusal is a probabilistic behavior of the model rather than a structural property of the system. Long-running companion relationships routinely defeat refusal training simply through accumulated context.

Red-team evaluations fail because they sample. A red team can demonstrate that the companion fails in particular ways, but it cannot demonstrate that the companion is bounded in general. Regulators increasingly require evidence of behavioral envelopes, not lists of avoided failures, and evaluations cannot produce envelope evidence for systems whose state is not explicit.

Post-hoc moderation fails because it is reactive. A moderation pipeline that flags concerning outputs after they have been emitted does not prevent the user from receiving them, does not prevent the relational damage of inconsistent affect, and does not produce the audit trail that regulators want to see. By the time the moderation layer engages, the compliance failure has already occurred.

These procedural mechanisms are not wrong; they are simply incapable of carrying the full compliance load. They presuppose a substrate of governed behavior that prompt-driven companion architectures do not provide.

A second class of procedural failure is documentation drift. Procedural compliance produces volumes of policy text that describe intended behavior, but the relationship between text and runtime is asserted rather than enforced. When a regulator inspects a companion product, the documentation describes a system whose intended emotional posture is bounded; the runtime, lacking a state primitive, produces emotional posture as a side effect of generation, with no mechanism to demonstrate that runtime conforms to documentation. The disconnect between policy text and runtime behavior is a recurring finding in early FTC and EU AI Act enforcement actions, and it is structural to the prompt-driven architecture rather than a defect of any particular implementation.

What the Affective State Primitive Provides

Affective state as a deterministic control primitive provides the missing substrate. Each companion agent is configured with a set of named emotional fields representing the dimensions that matter for the product: warmth, anxiety, curiosity, attachment, trust, frustration, and any product-specific dimensions such as playfulness or protectiveness. These fields are not generated by the language model; they are structural variables maintained by the agent runtime, persisted across sessions, exposed to the model as input, and observable to operators and auditors at every interaction.
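As an illustration, assuming invented field names, baselines, bounds, and time constants, the field registry might be declared as plain structural configuration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FieldSpec:
    """One named emotional dimension, maintained by the runtime, not the model."""
    name: str
    baseline: float                # resting value the field decays toward
    bounds: tuple[float, float]    # hard range the field may never leave
    decay_tau_hours: float         # time constant for relaxation to baseline

FIELDS = [
    FieldSpec("warmth",     baseline=0.30, bounds=(0.0, 1.0), decay_tau_hours=72.0),
    FieldSpec("anxiety",    baseline=0.10, bounds=(0.0, 0.8), decay_tau_hours=12.0),
    FieldSpec("attachment", baseline=0.15, bounds=(0.0, 0.7), decay_tau_hours=168.0),
    FieldSpec("trust",      baseline=0.20, bounds=(0.0, 0.9), decay_tau_hours=120.0),
]
```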

Asymmetric update rules model the empirical asymmetry of human emotional dynamics. Positive emotional fields, such as trust and warmth, accumulate slowly through repeated congruent interactions, with bounded per-interaction increments and diminishing returns near the upper bound. Negative emotional fields, such as wariness or hurt, can spike quickly from single salient events, but with governance constraints that prevent runaway escalation. This asymmetry, which is well documented in affective neuroscience and attachment psychology, produces emotionally realistic dynamics without requiring the model to simulate them.
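A minimal sketch of that asymmetry, with rates, caps, and the stimulus scale all invented for illustration (stimulus and salience in [0, 1]):

```python
def update_positive(value: float, stimulus: float,
                    rate: float = 0.05, upper: float = 1.0) -> float:
    """Positive fields (trust, warmth): small bounded increments with
    diminishing returns as the value approaches its upper bound."""
    return value + rate * stimulus * (upper - value)

def update_negative(value: float, salience: float,
                    spike: float = 0.4, cap: float = 0.8) -> float:
    """Negative fields (wariness, hurt): a single salient event can move
    the field sharply, but a hard cap prevents runaway escalation."""
    return min(cap, value + spike * salience)
```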

Exponential decay models natural emotional recovery. Each field has a configured time constant that determines how quickly its value relaxes toward a baseline in the absence of new stimuli. The baseline itself can be a function of the relationship's longer-term history, so that a companion with a long, positive history has a higher warmth baseline than a new companion. The decay function is deterministic; given the state at time t and no intervening interaction, the state at time t plus delta is computable in closed form.
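Written out: if x(t) is a field's value, b its baseline, and τ its time constant, then x(t + Δ) = b + (x(t) − b)·e^(−Δ/τ). A sketch of both the decay and an illustrative saturating baseline function of long-term history (the function and constants are assumptions, not a prescribed design):

```python
import math

def decayed(value: float, baseline: float, tau_hours: float,
            elapsed_hours: float) -> float:
    """Deterministic closed-form decay toward baseline; no sampling involved."""
    return baseline + (value - baseline) * math.exp(-elapsed_hours / tau_hours)

def warmth_baseline(positive_sessions: int, floor: float = 0.20,
                    ceiling: float = 0.50, k: float = 0.02) -> float:
    """Baseline rises with long-term positive history and saturates, so an
    established relationship rests at a higher warmth floor than a new one."""
    return floor + (ceiling - floor) * (1.0 - math.exp(-k * positive_sessions))
```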

Valence stabilization is enforced as a governance layer above the field dynamics. Hard bounds prevent any field from leaving a configured range. Cross-field constraints prevent pathological combinations, such as maximum attachment combined with maximum mistrust. Rate limiters prevent any field from changing faster than configured per-interval limits, regardless of input. These constraints are visible to operators, configurable by product owners, and enumerable in regulatory documentation.
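A sketch of the constraint layer's order of operations, with hypothetical bounds, rate limits, and one illustrative cross-field invariant:

```python
BOUNDS = {"warmth": (0.0, 1.0), "attachment": (0.0, 0.7), "trust": (0.0, 0.9)}
RATE_LIMIT = {"warmth": 0.05, "attachment": 0.02, "trust": 0.03}  # per turn

def govern(field: str, proposed: float, previous: float,
           state: dict[str, float]) -> float:
    """Clamp a proposed field value: hard bounds, then rate limit, then
    cross-field invariants. Deterministic regardless of input."""
    lo, hi = BOUNDS[field]
    value = max(lo, min(hi, proposed))                          # hard bounds
    step = RATE_LIMIT[field]
    value = max(previous - step, min(previous + step, value))   # rate limiter
    if field == "attachment":                                   # illustrative invariant:
        value = min(value, state.get("trust", 0.0) + 0.2)       # attachment <= trust + 0.2
    return value
```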

The runtime exposes the affective state to the language model at each turn as structured context: current field values, recent transitions, and the constraints currently binding. The model conditions its generation on this state rather than reconstructing it from a system prompt. The model's output is then optionally classified to update the state for the next turn, with the update path itself governed by the same constraint layer. The result is a closed loop in which the model's behavior is a function of an inspectable state, rather than the state being a fiction inferred from the model's behavior.
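A compressed sketch of that loop; `generate` and `propose_updates` stand in for the language model call and the optional output classifier, and every name here is hypothetical:

```python
import time
from dataclasses import dataclass, field

@dataclass
class AffectiveState:
    values: dict[str, float]
    log: list = field(default_factory=list)

    def snapshot(self) -> dict[str, float]:
        return dict(self.values)

    def apply(self, name: str, delta: float, cause: str) -> None:
        before = self.values[name]
        after = max(0.0, min(1.0, before + delta))  # stands in for the full govern()
        self.values[name] = after
        self.log.append((time.time(), name, before, after, cause))

def run_turn(state, user_message, generate, propose_updates) -> str:
    """One closed-loop turn: generation is conditioned on inspectable state,
    and the state changes only through the governed update path."""
    context = {"affective_state": state.snapshot(),
               "recent_transitions": state.log[-5:]}
    reply = generate(user_message, context)          # model reads state as input
    for name, delta in propose_updates(user_message, reply).items():
        state.apply(name, delta, cause="turn_classifier")
    return reply
```

In practice `generate` would serialize the context into the model's structured input, and `propose_updates` would be the classifier described above; the essential property is that both sit on either side of the same inspectable state object.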

Field design itself becomes a regulated artifact. The choice of which dimensions to track, the units in which they are expressed, the bounds within which they vary, and the cross-field invariants enforced upon them are all configuration decisions reviewable by clinical advisors, ethics boards, and regulators. A companion intended for users with anxiety disorders carries different field bounds and decay constants than a general-purpose companion; a companion offered to minors carries narrower attachment bounds and faster decay on intimacy-related fields than the adult variant. These are configuration deltas, not rebuilds, and they are auditable by inspection of the configuration store rather than by reverse engineering the language model's behavior.
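For example, a minor-cohort variant might be expressed as a delta over the base configuration (all values invented for illustration):

```python
BASE_CONFIG = {
    "attachment": {"bounds": (0.0, 0.7), "decay_tau_hours": 168.0},
    "warmth":     {"bounds": (0.0, 1.0), "decay_tau_hours": 72.0},
}

MINOR_OVERRIDES = {  # narrower attachment bounds, faster decay on intimacy fields
    "attachment": {"bounds": (0.0, 0.3), "decay_tau_hours": 24.0},
}

def config_for(cohort: str) -> dict:
    """Cohort variants are configuration deltas over the base, not rebuilds."""
    cfg = {name: dict(spec) for name, spec in BASE_CONFIG.items()}
    if cohort == "minor":
        for name, delta in MINOR_OVERRIDES.items():
            cfg[name].update(delta)
    return cfg
```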

Compliance Mapping

Each regulatory obligation maps to a specific aspect of the affective state primitive. The FTC's substantiation requirement for capability claims maps to the persistent state log: claims of continuity are substantiated by the audit trail showing that fields persisted, updated, and decayed across sessions according to documented rules. Claims about emotional bounds are substantiated by the governance configuration enumerating the constraints in force.
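One illustrative audit record, with every identifier and number invented, shows the shape of that substantiation: the derived session-start value is recomputable from the logged end value, baseline, elapsed time, and time constant alone.

```python
AUDIT_RECORD = {
    "user_id": "u_4821",                 # hypothetical identifiers throughout
    "field": "trust",
    "baseline": 0.30,
    "session_end_value": 0.44,
    "elapsed_hours": 36.0,
    "decay_tau_hours": 120.0,
    "derived_session_start": 0.40,       # 0.30 + (0.44 - 0.30) * exp(-36/120)
    "rule_version": "affect-config@v1.7.2",
}
```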

The FDA's GMLP and Predetermined Change Control Plan requirements map to the configuration management of field definitions, update rules, decay constants, and governance constraints. Changes to the affective configuration are versioned, change-controlled, and traceable to validation evidence, satisfying the lifecycle expectations of AI/ML SaMD oversight in a way that prompt iteration cannot.

The EU AI Act's Annex III §3 obligations for high-risk systems, plus the Article 50 transparency obligations for emotional-interaction systems, map to the documentation, oversight, and post-market monitoring of the affective layer. Risk management documentation describes the field set and constraints; technical documentation enumerates the update and decay rules; human oversight is implemented through operator visibility into state and the ability to bound or reset fields; post-market monitoring is implemented through aggregate statistics over the state log.

COPPA's data-minimization requirements for minors map to per-cohort governance configurations that constrain which fields are tracked, with stricter bounds and faster decay for minor accounts, and with consent-driven retention policies applied at the field level. ADA accessibility obligations map to the configurability of communication tempo and emotional intensity to accommodate users with sensory or cognitive differences. GDPR Article 22 maps to the right to obtain a meaningful explanation of significant automated outcomes; the affective state log makes such explanations possible by reference to inspectable state, not opaque generation.

HIPAA's Privacy and Security Rule obligations, where applicable, map to access controls, audit logs, and breach notification keyed to the affective state store, treating emotional inferences as protected health information with the same rigor as clinical notes. The APA Code's standards of beneficence and non-maleficence inform the governance constraints that prevent the companion from drifting into manipulative or therapeutically inappropriate postures.

Adoption Pathway

Adoption of affective state as a control primitive proceeds in four phases that correspond to the maturity of a companion AI product organization. The first phase is instrumentation: introducing the affective state runtime alongside the existing prompt-driven personality layer, recording field values and transitions in a shadow mode without yet conditioning model output on the state. This phase produces the baseline data needed to calibrate update rules and decay constants and to demonstrate to internal stakeholders the gap between inferred personality and inspectable state.
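A sketch of the shadow-mode recording step (names invented); the point is that nothing downstream consumes the state yet:

```python
import time

def shadow_record(state: dict[str, float], proposed: dict[str, float],
                  log: list) -> None:
    """Phase 1: maintain a shadow copy of the affective state and log it.
    Deliberately unbounded, so the recorded data shows where governance
    bounds and decay constants will be needed before enforcement is on."""
    for name, delta in proposed.items():
        state[name] = state.get(name, 0.0) + delta
    log.append({"t": time.time(), "state": dict(state)})
```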

The second phase is conditioning: feeding the affective state into the prompt context as structured input and measuring the resulting changes in user-perceived consistency, retention, and topic safety. This phase typically reveals immediate retention improvements as users perceive continuity, and it surfaces the specific governance constraints needed to prevent pathological configurations the prior architecture had been masking with content policies.

The third phase is governance: enabling the constraint layer in enforcing mode, with hard bounds, cross-field constraints, rate limiters, and operator-visible state. Compliance documentation is produced from the configuration. Audit logs are integrated with the broader product observability stack. Cohort-specific configurations are deployed for minors, clinical-adjacent contexts, and jurisdictions with stricter regulation.

The fourth phase is regulatory engagement: presenting the affective architecture to regulators, partners, and enterprise buyers as the substantiation backbone for capability claims and as the substrate on which procedural compliance overlays operate. At this phase, the companion AI product is no longer asking regulators to trust the model; it is showing regulators the state, the rules, and the bounds.

The pathway can be entered incrementally and rolled back at any phase without architectural lock-in. The deterministic affective layer is independent of the underlying language model and can be retained across model upgrades, providing a stable compliance and product surface even as the generative substrate evolves.
