Elomia's Empathy Resets Every Session
by Nick Clark | Published March 27, 2026
Elomia addressed a real access problem: millions of people need mental health support and cannot get it. The platform provides empathetic, CBT-informed conversation through an AI agent that is available around the clock. But Elomia's empathetic model of each user is reconstructed from prior conversation data rather than maintained as persistent affective state. The agent remembers what was said. It does not remember how it felt about what was said. Resolving this requires affective state as a deterministic control primitive with governed temporal dynamics — the structural shape disclosed under provisional 64/049,409 — that can be composed into Elomia's existing CBT-informed conversational stack without displacing the access surface that is the company's core value.
1. Vendor and Product Reality
Elomia Health, founded in 2019 and operating consumer-facing iOS and Android applications under the Elomia brand, is one of the most widely adopted independent mental wellness chatbots, sitting in the same competitive band as Wysa, Woebot Health, and Youper. The product positions itself as a 24/7 emotional support companion drawing on cognitive behavioral therapy (CBT), dialectical behavior therapy (DBT), and acceptance and commitment therapy (ACT) frameworks, with structured exercises (thought records, mood tracking, gratitude logging), guided meditations, and free-form conversation. The user proposition is access: someone struggling at 2 a.m., or someone unable to afford a therapist, or someone in a region without local mental health infrastructure, can have a supportive conversation immediately.
The architecture is the now-conventional pattern for therapeutic chatbots: a foundation language model conditioned on therapeutic training data and safety guardrails, augmented with a retrieval system over the user's prior sessions, mood-tracking inputs, and structured exercise outputs. Each session begins with a context-construction step that retrieves recent interactions, the latest self-reported mood ratings, and any flagged content from prior sessions, then conditions generation on that retrieved context. Crisis-detection classifiers run alongside generation, triggering escalation to crisis hotline resources when self-harm indicators appear. The product handles real volume — millions of users across the consumer base — and user reviews report it as genuinely helpful for everyday emotional support.
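The session pipeline described above can be sketched as follows. This is a minimal illustration of the conventional pattern, not Elomia's actual code; every name (`SessionContext`, `build_context`, `crisis_check`) and the keyword stand-in for a trained crisis classifier are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SessionContext:
    recent_turns: list   # retrieved prior utterances
    mood_ratings: list   # latest self-reported mood scores
    flagged: list        # content flagged in prior sessions

def build_context(store: dict, user_id: str, k: int = 5) -> SessionContext:
    """Context-construction step run at session start (illustrative):
    pull recent turns, recent mood ratings, and previously flagged content,
    then condition generation on all three."""
    history = store.get(user_id, {})
    turns = history.get("turns", [])
    return SessionContext(
        recent_turns=turns[-k:],
        mood_ratings=history.get("moods", [])[-3:],
        flagged=[t for t in turns if t.get("flag")],
    )

# Stand-in for a trained classifier; a real system runs a model, not keywords.
CRISIS_TERMS = {"self-harm", "hurt myself"}

def crisis_check(utterance: str) -> bool:
    """Runs alongside generation; a positive result triggers escalation
    to crisis hotline resources."""
    return any(term in utterance.lower() for term in CRISIS_TERMS)
```

The key property to notice is that everything the agent "knows" at session start is whatever this retrieval step happens to return; nothing persists between calls except the raw text store.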
Elomia's strengths within this scope are real. The CBT-informed conversational quality is high. The escalation paths are responsibly designed. The mood-tracking and journaling features give users a self-reflection surface that has independent therapeutic value regardless of the chatbot interaction. The product is a sound implementation of the conversational-mental-wellness category as currently understood.
2. The Architectural Gap
The structural gap appears in the texture of longitudinal care. Elomia's model of the user between sessions is a corpus of retrieved facts: utterances, mood ratings, exercise outputs, timestamps. The model of the user during a session is whatever the language model reconstructs from that retrieved corpus inside the prompt context. There is no persistent affective representation — no quantitative emotional state vector that exists between sessions, evolves over time according to specified dynamics, and conditions the agent's posture independently of whatever happened to be retrieved.
This matters because emotional trajectories have dynamics that factual retrieval cannot capture. A user working through a difficult breakup over six weeks generates an emotional arc with specific shape: initial shock, cycling between anger and sadness, gradual acceptance interrupted by setbacks, slow rebuilding of emotional baseline. A human therapist tracks this arc with a felt sense of where the patient is now, which is informed by but not reducible to what the patient last reported. Elomia enters each session with whatever the retrieval pulled in. If the retrieved subset under-represents the underlying trend, the agent's posture is miscalibrated to the actual emotional trajectory.
Mood tracking does not close this gap. The user's self-reported mood is an input to a persistent affective model; it is not the model itself. A persistent affective model integrates self-report with conversational signal density, session-frequency patterns, response-latency shifts, lexical affect indicators, and the interaction between multiple emotional dimensions. Without the model as a first-class object, these signals can only enter the system through whatever the retrieval-and-prompt pipeline happens to surface, which is contingent rather than structural.
Elomia cannot patch this from inside its current architecture because the architecture is built around stateless generation conditioned on retrieved text. Adding more retrieval, longer context windows, or summarization passes does not produce a persistent affective state any more than longer EHR notes produce a clinical relationship. The state has to exist as a deterministic computational object with named fields, governed dynamics, and runtime read-write semantics that the conversational layer queries and updates rather than reconstructs.
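What "a deterministic computational object with named fields and runtime read-write semantics" means in the smallest possible terms is something like the sketch below. The field names and clamping rule are illustrative assumptions, not the AQ specification; the point is only the contrast with prompt-time reconstruction: the object exists independently of any retrieval, and reads and writes go through defined methods.

```python
from dataclasses import dataclass

@dataclass
class AffectiveState:
    """A first-class state object: named, quantified fields that persist
    between sessions, rather than text reconstructed into a prompt."""
    distress: float = 0.0
    vulnerability: float = 0.0
    trust: float = 0.5

    def read(self) -> dict:
        """Queried by the conversational layer at session start."""
        return {"distress": self.distress,
                "vulnerability": self.vulnerability,
                "trust": self.trust}

    def update(self, field_name: str, delta: float) -> None:
        """Deterministic write semantics: each field is clamped to [0, 1]."""
        value = getattr(self, field_name) + delta
        setattr(self, field_name, min(1.0, max(0.0, value)))
```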
3. What the AQ Affective-State Primitive Provides
The Adaptive Query affective-state primitive specifies persistent emotional fields per user with deterministic update rules, asymmetric decay constants, governed cross-field coupling, and lineage-recorded transitions. The primitive is technology-neutral: any storage substrate, any update implementation, any modeling choice for the field set. What it fixes is the structural treatment of affect as a control primitive rather than a generation-conditioning artifact.
Each user maintains a small set of named, quantified fields chosen for the therapeutic domain — for example distress, vulnerability, engagement, trust, hopefulness, agitation. Each field has its own update rule (which signals raise it, which lower it, with what gain), its own decay constant (acute distress decays in days; underlying vulnerability decays in months; trust is built slowly and damaged rapidly), and defined coupling to other fields (rising vulnerability lowers the threshold at which engagement drops trigger concern; rising trust expands the range of interventions the agent will offer). These dynamics run continuously, not only during active sessions: time elapsed between sessions advances the state through its decay terms, so the agent on session re-entry inherits a model that has evolved rather than one that has merely been retrieved.
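The dynamics described above can be sketched as follows, assuming exponential decay per field. All half-lives, gains, and coupling weights here are illustrative assumptions, not values from the AQ disclosure; what matters is the structure: per-field decay constants, asymmetric update rules, and a threshold that one field modulates for another.

```python
import math

# Illustrative half-lives in days: acute distress decays in days, underlying
# vulnerability in months, trust over roughly a year.
HALF_LIFE_DAYS = {"distress": 3.0, "vulnerability": 90.0, "trust": 365.0}

def decay(fields: dict, days_elapsed: float) -> dict:
    """Advance the state between sessions: each field decays exponentially
    toward baseline at its own rate, so time itself updates the model."""
    out = {}
    for name, value in fields.items():
        k = math.log(2) / HALF_LIFE_DAYS[name]
        out[name] = value * math.exp(-k * days_elapsed)
    return out

def apply_signal(fields: dict, signal: str) -> dict:
    """Asymmetric update rule: trust is built slowly, damaged rapidly."""
    out = dict(fields)
    if signal == "supportive_exchange":
        out["trust"] = min(1.0, out["trust"] + 0.02)   # small gain per event
    elif signal == "rupture":
        out["trust"] = max(0.0, out["trust"] - 0.30)   # large loss per event
    return out

def concern_threshold(fields: dict, base: float = 0.5) -> float:
    """Cross-field coupling: elevated vulnerability lowers the threshold
    at which an engagement drop should trigger concern."""
    return base * (1.0 - 0.6 * fields["vulnerability"])
```

Note that `decay` is what gives session re-entry its character: the agent reads a state that has evolved through elapsed time, not a snapshot frozen at the last session.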
The fields are read by the conversational layer as a structured input alongside the retrieved corpus, and the conversational layer's outputs and observed user signals write back into the fields through the published update rules. Crisis-detection thresholds, intervention selection, and conversational posture are conditioned on field state, not solely on retrieved text. Every field transition is signed and lineage-recorded, producing a tamper-evident emotional trajectory that the user, a supervising clinician (where the deployment is clinical), or a future human therapist receiving a referral can review. The primitive is disclosed under USPTO provisional 64/049,409 as a structural condition for affect-aware agents.
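A signed, lineage-recorded transition log can be realized as a hash chain, sketched below. The HMAC key handling and entry schema are assumptions for illustration (a real deployment would use managed keys and a durable store); the property demonstrated is tamper evidence: any retroactive edit to a recorded transition breaks verification.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in; real deployments use managed key material

def record_transition(log: list, field_name: str, old: float, new: float,
                      cause: str) -> None:
    """Append a signed, hash-chained entry: each signature covers the entry
    body plus the previous signature, forming a tamper-evident trajectory."""
    prev = log[-1]["sig"] if log else ""
    entry = {"field": field_name, "old": old, "new": new,
             "cause": cause, "prev": prev}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    log.append(entry)

def verify(log: list) -> bool:
    """A user, supervising clinician, or receiving therapist can replay the
    chain; any edited or reordered entry fails verification."""
    prev = ""
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "sig"}
        if body["prev"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest() != entry["sig"]:
            return False
        prev = entry["sig"]
    return True
```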
4. Composition Pathway
Composition with Elomia preserves the company's differentiated layer entirely. The CBT/DBT/ACT-informed conversational quality, the consumer mobile experience, the exercise library, the mood-tracking UX, the crisis-detection classifiers, the safety guardrails, and the brand-and-distribution relationship with users all stay at Elomia. What is added underneath is the affective-state primitive as substrate. Each user's persistent emotional fields live in a field store that the Elomia application reads at session start, writes to during the session through published update rules, and continues to evolve between sessions through scheduled decay updates.
The integration points are well-defined. Session-start context construction reads field state alongside the existing retrieved corpus and conditions generation on both. Conversational signals — utterance affect classification, response latency, session frequency, exercise completion — feed update-rule inputs. The crisis-detection layer is augmented rather than replaced: existing classifiers continue to run on utterances, but threshold sensitivity is conditioned on field state so that a crisis indicator under elevated vulnerability is treated more aggressively than the same indicator under stable baseline. Mood tracking becomes one signal among several feeding the field store, with self-report appropriately weighted.
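The augmented crisis-detection path can be sketched as follows. The sensitivity weighting and threshold are illustrative assumptions; the structure is the point: the existing classifier's score is preserved, and field state only modulates how aggressively a given score is treated.

```python
def crisis_escalation_score(classifier_score: float, fields: dict) -> float:
    """Augment, don't replace: the existing classifier's output is sharpened
    when the vulnerability field is elevated. Weights are illustrative."""
    sensitivity = 1.0 + 0.8 * fields["vulnerability"]
    return min(1.0, classifier_score * sensitivity)

def should_escalate(classifier_score: float, fields: dict,
                    threshold: float = 0.7) -> bool:
    """The same utterance-level indicator can escalate under elevated
    vulnerability and not under a stable baseline."""
    return crisis_escalation_score(classifier_score, fields) >= threshold
```

For example, a classifier score of 0.6 stays below a 0.7 threshold when vulnerability is near baseline, but crosses it when vulnerability is high, which is exactly the conditioned-sensitivity behavior described above.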
The user-visible result is an agent that re-enters each conversation with felt continuity. A user who cancels two sessions while their vulnerability field is elevated returns to find the agent already attentive to that gap — not because of a retrieved log entry that mentioned cancellation, but because the engagement field dropped while vulnerability was elevated, triggering a state transition in the agent's therapeutic posture. A user whose distress has been slowly intensifying over three weeks despite positive self-reports finds that the agent is tracking the trend the self-reports are obscuring. The product feels different at the longitudinal seam, which is precisely where current therapeutic-chatbot products feel thin.
5. Commercial and Licensing Implication
The fitting commercial arrangement is an embedded substrate license. Elomia embeds the AQ affective-state primitive into its consumer applications and, importantly, into a clinical-grade SKU it does not currently address. The consumer SKU benefits from improved retention and engagement metrics — the felt-continuity property is exactly what drives re-engagement in longitudinal wellness products. The clinical SKU, addressable to employee-assistance programs, payer wellness benefits, and integrated behavioral health offerings, becomes credible because the lineage-recorded affective trajectory is the artifact that clinical deployments require for supervision, referral handoff, and outcome measurement.
Pricing aligns with the substrate role: a per-active-user license on the field store and update engine, with a separate enterprise tier for clinical deployments where the lineage record carries supervision and audit obligations. What Elomia gains is a structural answer to the longitudinal-thinness critique that affects the entire therapeutic-chatbot category, plus an addressable wedge into clinical and payer markets that the consumer SKU alone cannot reach. What the user gains is therapeutic continuity that survives the next model upgrade, the next retrieval-system change, and (in the clinical case) a transition from chatbot-only support to a hybrid relationship with a human clinician who can review the same affective trajectory the agent has been operating on. Honest framing — the AQ primitive does not replace Elomia's conversational layer; it gives that layer the persistent emotional model it has been reconstructing each session and never quite getting right.