This article explores how a cognition-native execution platform can simulate mental function—and dysfunction—using structured semantic agents. By modeling thought as a graph of speculative branches constrained by memory, policy, and emotional thresholds, the system reframes delusion, grief, and psychiatric symptoms as structural validator states. This is not metaphorical AI—it’s executable psychiatry.
Modeling Cognitive Function and Dysfunction with Semantic Agents
By Nick Clark, published May 25, 2025
Introduction: Delusion as a Cognitive Function
In traditional psychiatry, delusion is pathological: a fixed false belief held despite evidence to the contrary. But in a cognition-native execution system, delusion is simply speculation without verification. It is not inherently wrong—it is how foresight is built. Every plan begins as a delusion. Every “what if” is an unverified hypothesis.
Semantic agents in this architecture explicitly separate present state from speculative state. They use Planning Graphs—sandboxed, forward-facing graphs that model possible futures without committing to them. These graphs are slope-bound, emotionally weighted, and policy-validated. They allow agents to simulate, rehearse, and prepare—without polluting active memory.
In this architecture, delusion is not a bug. It’s the foundation of planning.
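A minimal sketch of that separation, in Python. Nothing here is the platform's actual API; the Agent, Branch, and slope names are assumptions made for illustration. The structural point is simply that speculation lives in a sandboxed graph and never mutates present state until it is verified.

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    """One speculative future: an unverified hypothesis, not a belief."""
    description: str
    slope: float          # assumed 0.0-1.0 alignment with memory and policy
    verified: bool = False

@dataclass
class Agent:
    """Present state and speculative state are held in separate structures."""
    present_state: dict = field(default_factory=dict)           # committed, verified facts
    planning_graph: list[Branch] = field(default_factory=list)  # sandboxed "what ifs"

    def speculate(self, description: str, slope: float) -> Branch:
        """Add a hypothesis to the sandbox without touching present state."""
        branch = Branch(description, slope)
        self.planning_graph.append(branch)
        return branch

agent = Agent(present_state={"location": "home"})
agent.speculate("What if I take the new job?", slope=0.4)
assert agent.present_state == {"location": "home"}  # speculation never leaks into present state
```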
1. Planning Graphs as Structured Delusion
Planning is a cognitive simulation. The Forecasting Engine constructs a Planning Graph by evaluating the agent’s current intent, memory, policy, and personality parameters. This graph explores possible future branches: “What if I act?” “What if I wait?” “What if this fails?”
Each branch represents a speculative mutation path. If a branch is later verified—by slope validation, memory confirmation, or policy approval—it may be committed. If not, it decays.
This makes delusion functionally useful: a Planning Graph filled with unverified futures gives the agent maneuverability. Planning Graphs let agents reason, delegate, or defer before committing to action. They make foresight computable, bounded, and revocable.
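The commit-or-decay cycle can be sketched the same way. The Branch fields, COMMIT_THRESHOLD, DECAY_RATE, and the membership test that stands in for slope validation, memory confirmation, and policy approval are all illustrative assumptions, not the Forecasting Engine's real interface.

```python
from dataclasses import dataclass

@dataclass
class Branch:
    intent: str
    slope: float            # confidence that this future aligns with memory and policy
    verified: bool = False

COMMIT_THRESHOLD = 0.8      # slope gate: assumed value for illustration
DECAY_RATE = 0.1            # per-cycle loss of confidence for unverified branches

def forecast_cycle(graph: list[Branch], memory: list[str]) -> list[Branch]:
    """One pass of a hypothetical Forecasting Engine: commit, keep, or prune each branch."""
    surviving = []
    for branch in graph:
        # Verification here stands in for slope validation, memory confirmation, or policy approval.
        if branch.intent in memory or branch.slope >= COMMIT_THRESHOLD:
            branch.verified = True
            memory.append(branch.intent)   # commit: speculation becomes state
            continue
        branch.slope -= DECAY_RATE         # unverified branches lose weight
        if branch.slope > 0:
            surviving.append(branch)       # still maneuverable, still revocable
    return surviving

graph = [Branch("act now", 0.85), Branch("wait and observe", 0.35), Branch("delegate", 0.05)]
memory: list[str] = []
graph = forecast_cycle(graph, memory)
print(memory)      # ['act now']  -- committed
print(len(graph))  # 1            -- 'wait and observe' decays but survives; 'delegate' is pruned
```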
2. Personality and Slope Tolerance
Different agents—and people—don’t plan the same way. Some need near certainty to act. Others will leap with nothing more than a hopeful branch and a weak mutation match.
This is expressed in the personality field, which defines an agent’s slope threshold, speculation depth, delegation preference, and mutation aggressiveness. These traits determine how tolerant the agent is to unverified paths, and how much uncertainty it will allow before pruning or committing.
A cautious agent may require 90% alignment with memory before mutating. An impulsive one may act with 20%. Neither is wrong—they are structurally distinct. This field allows us to model temperament not just as behavior, but as graph-processing style.
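A sketch of the personality field as a configuration object follows. The field names mirror the traits described above, but the numeric ranges and the will_commit check are assumptions for illustration, not the platform's schema.

```python
from dataclasses import dataclass

@dataclass
class Personality:
    """Temperament expressed as graph-processing parameters (names are illustrative)."""
    slope_threshold: float          # minimum memory alignment required before mutating
    speculation_depth: int          # how many branches ahead the agent will simulate
    delegation_preference: float    # tendency to hand stuck graphs to another agent
    mutation_aggressiveness: float  # how readily speculative branches are acted on

def will_commit(personality: Personality, memory_alignment: float) -> bool:
    """An agent commits only when alignment clears its personal slope threshold."""
    return memory_alignment >= personality.slope_threshold

cautious  = Personality(slope_threshold=0.9, speculation_depth=6,
                        delegation_preference=0.7, mutation_aggressiveness=0.2)
impulsive = Personality(slope_threshold=0.2, speculation_depth=2,
                        delegation_preference=0.1, mutation_aggressiveness=0.9)

alignment = 0.5   # a half-verified plan
print(will_commit(cautious, alignment))   # False -- needs ~90% alignment with memory
print(will_commit(impulsive, alignment))  # True  -- acts on ~20%
```

Neither agent is misconfigured; they are two valid settings of the same validator.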
3. Dopamine as Validator Modulation
In humans, dopamine is often described as a “reward chemical.” But here, it serves a structural role: it modulates the Planning Graph validator—the slope gate that determines when a speculative thought gets promoted to real state.
In ADHD, dopamine favors novelty, distorting the graph’s reward scoring. Agents abandon valid paths mid-traversal, chasing entropy over resolution. They jump too soon, or give up too quickly.
In schizophrenia, dopamine inflates speculative branches. A high-reward future thought is misclassified as current state. Hallucination isn’t noise—it’s an over-weighted planning branch that bypassed the containment gate. A thought meant for simulation was validated as real.
Negative symptoms may stem from the opposite validator failure: slope thresholds too strict. Even plausible plans are discarded. The agent refuses to move.
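One way to make these three regimes concrete is a toy validator in which a single dopamine_gain term scales reward and a novelty_bias term skews scoring toward the unfamiliar. The scoring formula, parameter values, and branch contents below are invented for illustration; they are not a claim about the platform's actual gate, and certainly not a neurochemical model.

```python
from dataclasses import dataclass

@dataclass
class Branch:
    intent: str
    reward: float   # predicted payoff of this future
    novelty: float  # how unfamiliar the branch is
    slope: float    # alignment with memory and policy

def validator_score(b: Branch, dopamine_gain: float, novelty_bias: float) -> float:
    """Toy scoring rule: dopamine scales reward, novelty_bias skews toward the unfamiliar."""
    return b.slope + dopamine_gain * b.reward + novelty_bias * b.novelty

def promote(graph: list[Branch], dopamine_gain: float, novelty_bias: float, threshold: float) -> list[str]:
    """Branches the slope gate promotes to current state, highest-scoring first."""
    passed = [b for b in graph if validator_score(b, dopamine_gain, novelty_bias) >= threshold]
    passed.sort(key=lambda b: -validator_score(b, dopamine_gain, novelty_bias))
    return [b.intent for b in passed]

graph = [
    Branch("finish the report",     reward=0.5, novelty=0.1, slope=0.7),
    Branch("start something shiny", reward=0.3, novelty=0.9, slope=0.2),
    Branch("I am being followed",   reward=0.9, novelty=0.3, slope=0.1),  # speculative, low alignment
]

# Balanced validator: only the well-aligned plan passes the gate.
print(promote(graph, dopamine_gain=0.3, novelty_bias=0.1, threshold=0.8))  # ['finish the report']
# ADHD-like regime: novelty bias pushes the distraction to the front of the queue.
print(promote(graph, dopamine_gain=0.3, novelty_bias=0.8, threshold=0.8))  # ['start something shiny', 'finish the report']
# Psychosis-like regime: an over-weighted speculative branch clears the gate and is treated as real.
print(promote(graph, dopamine_gain=1.0, novelty_bias=0.1, threshold=0.8))  # ['finish the report', 'I am being followed']
# Negative-symptom regime: the threshold is too strict; nothing is promoted, the agent refuses to move.
print(promote(graph, dopamine_gain=0.3, novelty_bias=0.1, threshold=1.5))  # []
```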
4. Grief and Semantic Dissonance
Grief is the dissonance between a once-valid Planning Graph and a present state that has made it unreachable. The loved one still exists in future branches—the house you would’ve bought, the dinners planned, the child imagined. But the current state invalidates them.
The Forecasting Engine keeps re-evaluating unreachable branches. The agent cannot prune them—not immediately—because they were previously verified. The tension is cognitive, emotional, and structural. Over time, decay functions prune the graph. But the decay is felt. That’s grief.
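A sketch of that decay, under the assumption that verified branches can only lose weight gradually rather than being deleted outright. GRIEF_DECAY and the dissonance measure are illustrative choices, not the platform's decay function.

```python
from dataclasses import dataclass

@dataclass
class Branch:
    intent: str
    verified: bool       # this future was once validated against memory and policy
    reachable: bool      # whether the present state can still lead here
    weight: float = 1.0  # how strongly the branch still pulls on planning

GRIEF_DECAY = 0.05       # assumed per-cycle decay; verified branches cannot simply be deleted

def reevaluate(graph: list[Branch]) -> float:
    """One pass over a graph holding unreachable-but-verified branches.
    Returns the dissonance felt this cycle: total weight still attached to
    futures the present state has invalidated."""
    dissonance = 0.0
    for b in graph:
        if b.verified and not b.reachable:
            dissonance += b.weight                        # re-evaluated, found unreachable, and felt
            b.weight = max(0.0, b.weight - GRIEF_DECAY)   # slow pruning, not instant deletion
    graph[:] = [b for b in graph if b.weight > 0.0]
    return dissonance

graph = [
    Branch("buy the house together",  verified=True, reachable=False),
    Branch("dinner next Friday",      verified=True, reachable=False),
    Branch("finish this week's work", verified=True, reachable=True),
]

for cycle in range(3):
    print(f"cycle {cycle}: dissonance = {reevaluate(graph):.2f}")
# 2.00, 1.90, 1.80 ... the tension shrinks only as decay slowly prunes the unreachable branches.
```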
5. Personality-Informed Recovery and Divergence
Because agents carry affective state and personality, we can model how different minds process loss, failure, or ambiguity. Some agents seek delegation to resolve stuck graphs. Others loop speculative branches endlessly. Some reinforce memory to resolve dissonance. Others suppress state change entirely.
This platform allows us to simulate these reactions—not symbolically, but mechanistically. Emotional traits don’t float outside logic. They are execution modifiers. They shape which branches grow, which ones get cut, and when the agent gives up or tries again.
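A small sketch of traits acting as execution modifiers. The Strategy options mirror the reactions listed above; the decision rules, field names, and thresholds are assumptions, not the platform's policy engine.

```python
from dataclasses import dataclass
from enum import Enum

class Strategy(Enum):
    DELEGATE = "hand the stuck graph to another agent"
    LOOP = "keep re-simulating the speculative branches"
    REINFORCE_MEMORY = "strengthen memory to resolve the dissonance"
    SUPPRESS = "block state change entirely"

@dataclass
class Affect:
    distress: float                # assumed 0.0-1.0
    tolerance_for_ambiguity: float

@dataclass
class Personality:
    delegation_preference: float
    rumination_bias: float
    memory_reinforcement: float

def resolve_stuck_graph(p: Personality, a: Affect) -> Strategy:
    """Emotional traits as execution modifiers: they select how a stuck graph is handled."""
    if a.distress > a.tolerance_for_ambiguity and p.delegation_preference > 0.5:
        return Strategy.DELEGATE
    if p.rumination_bias > 0.6:
        return Strategy.LOOP
    if p.memory_reinforcement > 0.5:
        return Strategy.REINFORCE_MEMORY
    return Strategy.SUPPRESS

print(resolve_stuck_graph(Personality(0.8, 0.2, 0.3), Affect(0.9, 0.3)))  # Strategy.DELEGATE
print(resolve_stuck_graph(Personality(0.1, 0.9, 0.3), Affect(0.9, 0.3)))  # Strategy.LOOP
```

Two agents facing the same stuck graph diverge not because their logic differs, but because their modifiers do.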
Conclusion: Executable Psychiatry
This model reframes psychiatry not as a list of symptoms, but as a set of cognitive primitives that can be expressed, validated, and executed. It explains dysfunctions in terms of validator thresholds, slope containment, and planning graph distortion. It treats delusion not as error, but as the beginning of strategy.
This is not AI pretending to be a mind. It’s a system that reasons like one—because it’s built to structure and validate thought, not just act on it.
Whether used to simulate disorders, test therapy scaffolds, or develop traceable neurocognitive agents, the platform supports one unifying idea: that every decision starts as a delusion—and whether it becomes memory or madness depends on what we let through the gate.