Companion robots and therapeutic AI are not distant markets — they are arriving now, and the companies that will define them need structural IP for affective state, coherence, and behavioral continuity. This is where that IP lives. Nobody else has filed here.
Every serious companion robot — clinical, consumer, or caregiving — needs to model emotion not as a narrative layer but as a deterministic execution primitive that governs pacing, risk tolerance, and behavioral consistency over time. Every therapeutic AI needs to track coherence, log deviation, and maintain integrity across sessions without a human in the loop. The AQ cognitive architecture articles establish prior art for exactly those mechanisms. They are framed as structural control models, not clinical claims — which makes them both patentable and deployable.
Affective state modeled as a deterministic control layer that governs evaluation, pacing, risk tolerance, and promotion thresholds inside semantic agents. Representable, updatable, governed, and auditable as part of execution infrastructure — without granting inference systems authority over execution. The formal basis for emotionally coherent AI companions and clinically grounded therapeutic agents.
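The control-layer idea above can be sketched as a pure function from affective state to execution parameters. This is an illustrative sketch, not the AQ implementation: the state fields (`arousal`, `valence`), the parameter names, and the specific coefficients are all assumptions chosen only to show determinism and auditability.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AffectiveState:
    arousal: float   # 0..1, assumed dimension
    valence: float   # -1..1, assumed dimension

def control_parameters(state: AffectiveState) -> dict:
    # Deterministic: the same state always yields the same parameters,
    # so every pacing/risk/promotion decision is reproducible and
    # auditable. Coefficients here are placeholders.
    return {
        "pacing_delay_s": round(0.5 + 1.5 * state.arousal, 3),
        "risk_tolerance": round(max(0.0, 0.5 + 0.5 * state.valence - 0.3 * state.arousal), 3),
        "promotion_threshold": round(0.6 + 0.2 * state.arousal, 3),
    }

calm = control_parameters(AffectiveState(arousal=0.2, valence=0.5))
stressed = control_parameters(AffectiveState(arousal=0.9, valence=-0.4))
# Higher arousal slows pacing and raises the bar for promoting
# speculative branches into execution; inference never acts directly.
```

The key design point is that affect modulates execution parameters without itself holding execution authority: the inference side can update the state, but only this governed mapping touches behavior.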
Empathy intensity generates deviation pressure. Integrity records deviation in lineage. Self-esteem generates coherence pressure that pushes the system back toward accountable, auditable balance. Together they form one coherence control loop — the structural mechanism that makes autonomous systems governable under real-world affective pressure.
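The three pressures above form one loop, which can be sketched as a minimal update rule. All names (`deviation`, `lineage`, the gain constants) are illustrative assumptions, not canonical AQ identifiers; the sketch only shows the shape: empathic load pushes deviation up, integrity logs it, self-esteem pushes it back down.

```python
from dataclasses import dataclass, field

@dataclass
class CoherenceLoop:
    self_esteem: float = 1.0          # source of restoring pressure
    deviation: float = 0.0            # current departure from baseline
    lineage: list = field(default_factory=list)  # auditable deviation record

    def step(self, empathic_input: float) -> float:
        # Empathy intensity generates deviation pressure.
        self.deviation += 0.5 * empathic_input
        # Integrity records the deviation in lineage (nothing is hidden).
        self.lineage.append(self.deviation)
        # Self-esteem generates coherence pressure back toward balance.
        self.deviation -= 0.4 * self.self_esteem * self.deviation
        return self.deviation

loop = CoherenceLoop()
for load in [1.0, 1.0, 0.2, 0.0]:
    loop.step(load)
# Deviation decays toward zero once empathic load drops, while
# lineage retains the full history for audit.
```

Governability falls out of the structure: deviation is permitted, but only as a recorded, bounded excursion that the restoring term keeps accountable.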
Agents speculate over possible futures without acting, then promote selected branches into governed execution. Forecasting, planning graphs, and executive graphs as foundational primitives for autonomy — enabling scalable coordination in robotics and multi-agent systems without centralized schedulers or prompt chains.
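A minimal sketch of the speculate-then-promote split, under assumed shapes: `Branch`, `speculate`, and `promote` are hypothetical names standing in for the planning-graph and executive-graph primitives. The essential property shown is that speculation is side-effect free (pure scoring) and only promotion runs actions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Branch:
    name: str
    actions: tuple   # planned steps, never executed during speculation
    score: float     # forecast utility from the planning graph

def speculate(candidates: List[Branch]) -> List[Branch]:
    # Speculation ranks possible futures without acting on any of them.
    return sorted(candidates, key=lambda b: b.score, reverse=True)

def promote(branch: Branch, execute: Callable[[str], None]) -> None:
    # Promotion moves exactly one selected branch into governed execution.
    for action in branch.actions:
        execute(action)

candidates = [
    Branch("approach", ("greet", "offer_help"), score=0.8),
    Branch("wait", ("observe",), score=0.3),
]
ranked = speculate(candidates)
executed: list = []
promote(ranked[0], executed.append)
# executed == ["greet", "offer_help"]; the losing branch never ran.
```

Because branches are inert data until promoted, many agents can speculate concurrently and coordinate by exchanging branches, with no central scheduler deciding who acts.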
The Highly Sensitive Person pattern, narcissism, and psychopathy modeled as stable adaptations: coping intercepts that emerge when empathic input stays high for too long relative to affective resilience. The patterns differ not in whether empathy is present, but in where the intercept engages to avoid downstream integrity and self-esteem pressure.
Codependency modeled as relational loop-closure under sustained empathic pressure — when a system cannot restore coherence internally and attempts to restore it externally through relationship. Two distinct entrapments with different causes and different repair paths, with direct implications for companion AI architecture.
The anxious–avoidant relationship dynamic modeled as a closed-loop failure in coherence restoration. One partner pursues contact to relieve structural threat; the other withdraws to relieve emotional threat. A semantic starvation loop — with structural implications for how companion systems should handle mismatched coherence needs.
Trauma reframed as intimacy collapse — a structural loss of permission to act from coherence. Trauma, dissociation, and resilience as architectural states that determine whether authentic execution remains possible, and whether deviation remains accountable and recoverable. Directly relevant to therapeutic AI that must navigate relational rupture.
Diagnoses modeled as stable regimes of cognitive architecture under sustained affective modulation — not discrete disorders but phase-shifted states of the same underlying system. A structural lens that enables computational modeling of cognitive disruption without requiring clinical claims.
A structural diagnostic framework that reframes psychiatric conditions as regimes of lost coherence in cognition. When memory-bearing systems generate futures, modulate evaluation through affect, deviate under constraint, and persist across time — disruption becomes diagnosable as architectural misalignment rather than moral or personal defect.
Execution as a revocable permission, continuously re-evaluated from agent state, task demands, and world constraints. When confidence drops, action is structurally suspended. The agent shifts into non-executing cognition until conditions justify resumption. Directly applicable to clinical AI that must pause before irreversible interventions.
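The revocable-permission gate can be sketched as follows. The threshold value, the `Mode` names, and the `ExecutionGate` API are assumptions for illustration; what matters is that permission is re-derived on every call rather than granted once, and that low confidence routes to non-executing cognition instead of action.

```python
from enum import Enum

class Mode(Enum):
    EXECUTING = "executing"
    NON_EXECUTING = "non_executing"  # cognition continues; action is suspended

class ExecutionGate:
    def __init__(self, threshold: float = 0.7):
        self.threshold = threshold
        self.mode = Mode.NON_EXECUTING

    def reevaluate(self, confidence: float) -> Mode:
        # Permission is never held: each call re-derives it from current state.
        self.mode = Mode.EXECUTING if confidence >= self.threshold else Mode.NON_EXECUTING
        return self.mode

    def act(self, confidence: float, action, fallback):
        # When confidence drops, action is structurally suspended and the
        # agent falls back to non-executing cognition (e.g. re-planning).
        if self.reevaluate(confidence) is Mode.EXECUTING:
            return action()
        return fallback()

gate = ExecutionGate()
r1 = gate.act(0.9, lambda: "administered", lambda: "paused")  # executes
r2 = gate.act(0.4, lambda: "administered", lambda: "paused")  # suspends
```

For clinical AI, the fallback path is the safety property: an irreversible intervention cannot proceed on stale permission, because permission only exists at the moment of re-evaluation.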