Hallucination Prevention via Structural Starvation
by Nick Clark | Published March 27, 2026
Skill chains assembled from language-model-derived components are checked structurally at composition time for starvation conditions - configurations in which a single skill consumes all available evidence and leaves its peers unable to validate, contradict, or amend. Starvation is detected before the chain is ever dispatched, so a chain that would silently authorise hallucinated content cannot be compiled, let alone executed.
Mechanism
A skill chain in the cognition architecture is a directed graph of declared skills, each of which consumes some subset of the evidence on the agent's canonical fields and produces some subset of the candidate transitions. Every skill declares, in its policy manifest, the evidence classes it consumes, the evidence classes it leaves available to downstream skills, and the transition classes it may emit. The composition stage takes a candidate chain and computes, statically, the evidence flow across the graph: for each skill, what evidence remains for any peer or successor that runs after it.
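As a minimal sketch of the residual-evidence computation - in Python, over a linear chain for simplicity, with the manifest shape and all names being assumptions rather than anything the disclosure specifies - the "evidence left available" declaration is here derived by subtracting each skill's consumption from the pool:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SkillManifest:
    """Illustrative manifest: the evidence classes a skill consumes and
    the transition classes it may emit (names are hypothetical)."""
    name: str
    consumes: frozenset  # evidence classes this skill removes from the pool
    emits: frozenset     # transition classes this skill may produce

def residual_evidence(initial, chain):
    """For each skill, compute the evidence classes still available to
    any peer or successor that runs after it."""
    pool = set(initial)
    residuals = {}
    for skill in chain:
        pool -= skill.consumes
        residuals[skill.name] = frozenset(pool)
    return residuals

proposer = SkillManifest("llm_proposer",
                         consumes=frozenset({"factual", "relational"}),
                         emits=frozenset({"assert_fact"}))
verifier = SkillManifest("fact_verifier",
                         consumes=frozenset({"factual"}),
                         emits=frozenset())
residuals = residual_evidence({"factual", "relational", "procedural"},
                              [proposer, verifier])
# After the proposer runs, only procedural evidence remains for its peers -
# the factual verifier has nothing factual left to check against.
```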
Structural starvation is the condition in which a skill's consumption pattern leaves no evidence for any peer that is supposed to validate, contradict, or amend its proposals. A chain in which a single language-model-driven skill consumes all factual evidence and emits a transition that no downstream verifier can check against an independent source is a starved chain. The risk is not theoretical. Such a chain is exactly the configuration that produces hallucinated outputs that survive into committed state, because the verifier has no material on which to base a refusal: the proposing skill has eaten the table.
The check is performed at compile time, not at runtime. The composer reads each skill's manifest, walks the graph, and computes the residual evidence at every node. If at any node the residual evidence is empty for a peer that the policy reference designates as a verifier for the proposing skill's transition class, the chain is rejected at composition. A rejected chain does not become a runtime artefact at all - it is not dispatched, not partially executed, and not subject to a runtime fallback. Compile-time rejection is the structural property that distinguishes starvation prevention from runtime monitoring; runtime monitoring can only observe what has already begun to happen, while compile-time rejection eliminates the configuration before any cycle is spent on it.
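The compile-time walk described above might be sketched as follows, assuming a hypothetical manifest encoding and a verifier mapping from transition classes to the evidence classes a verifier needs; a starved chain raises before it can become a runtime artefact:

```python
# Hypothetical manifests: (name, consumed evidence classes, emitted transitions).
CHAIN = [
    ("llm_proposer", {"factual", "relational"}, {"assert_fact"}),
    ("fact_verifier", {"factual"}, set()),
]

# Verifier mapping: evidence classes a downstream verifier needs per transition.
VERIFIER_NEEDS = {"assert_fact": {"factual"}}

def compose(initial, chain, verifier_needs):
    """Walk the chain, track residual evidence, and reject on starvation.
    Only an accepted chain is returned and can be dispatched."""
    pool = set(initial)
    for name, consumes, emits in chain:
        pool -= consumes
        for transition in emits:
            needed = verifier_needs.get(transition, set())
            if needed and not (needed & pool):
                raise ValueError(
                    f"starved chain: {name!r} emits {transition!r} but no "
                    f"residual evidence of class(es) {sorted(needed)} remains")
    return chain

try:
    compose({"factual", "relational", "procedural"}, CHAIN, VERIFIER_NEEDS)
except ValueError as err:
    print(err)  # rejected at composition: never dispatched, never executed
```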
Operating Parameters
The mechanism is parameterised along five axes. The first is the evidence-class taxonomy. The policy reference enumerates the classes of evidence the agent recognises - factual claims, relational claims, procedural claims, normative claims, and so forth - and each skill's manifest declares its consumption and emission against this taxonomy. The taxonomy is closed at agent instantiation; skills cannot invent new evidence classes at runtime to evade the check.
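One way to realise a taxonomy that is closed at instantiation - a sketch only, with member names taken from the examples in the text and everything else an assumption - is a fixed enumeration, so that a manifest naming an unknown class fails at load time rather than at dispatch:

```python
from enum import Enum, unique

@unique
class EvidenceClass(Enum):
    """Closed at agent instantiation: Enum membership is fixed at class
    definition, so a skill cannot mint a new class at runtime to evade
    the starvation check."""
    FACTUAL = "factual"
    RELATIONAL = "relational"
    PROCEDURAL = "procedural"
    NORMATIVE = "normative"

def parse_class(label: str) -> EvidenceClass:
    """Raises ValueError for any label outside the taxonomy."""
    return EvidenceClass(label)
```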
The second axis is the verifier mapping. For each transition class the policy reference declares which evidence classes a downstream verifier requires. The composer uses this mapping to determine, for any node in the chain, whether a peer that is supposed to verify a proposing skill's emission has the evidence it needs to do so. A starved chain is one in which the mapping cannot be satisfied at composition.
The third axis is the redundancy requirement. The policy reference declares, per transition class, the minimum number of independent verifiers a chain must contain. Independence is structural - two verifiers that consume the same evidence from the same source are not independent for the purpose of the requirement. The composer enforces the redundancy at compile time, rejecting any chain that fails the count regardless of how plausible the chain otherwise appears.
The fourth axis is the LLM-skill discipline. Skills whose proposals originate from a language model are subject to a stricter form of the starvation check: such skills must always be paired with at least one verifier whose evidence trace does not pass through any language-model-derived skill in the same chain. This prevents the degenerate case in which a language model both proposes and verifies and the chain trivially closes without any external grounding.
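The evidence-trace condition reduces to a reachability question on the chain graph. A sketch, assuming a hypothetical `upstream` map from each skill to the skills that feed it:

```python
def ancestors(node, upstream):
    """All skills whose output can reach `node` through the evidence graph."""
    seen, stack = set(), [node]
    while stack:
        for parent in upstream.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def has_grounded_verifier(verifiers, upstream, llm_skills):
    """True iff at least one verifier's evidence trace (including the
    verifier itself) avoids every LLM-derived skill in the chain."""
    return any(llm_skills.isdisjoint(ancestors(v, upstream) | {v})
               for v in verifiers)

upstream = {
    "llm_verifier": {"llm_proposer"},     # trace passes through the proposer
    "db_verifier": {"retrieval_source"},  # trace grounded externally
}
llm_skills = {"llm_proposer", "llm_verifier"}
# A chain whose only verifier is LLM-derived trivially closes on itself
# and fails the discipline; adding db_verifier satisfies it.
```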
The fifth axis is the residual-evidence floor. For each verifier node the policy reference may declare a minimum residual evidence size below which the verifier is regarded as starved even if some evidence remains. A trivially small residual is not, in practice, sufficient to support a verification: a verifier that receives a single field where the policy reference expects a class of fields is in the same structural posture as one that receives nothing. The floor is enumerated per evidence class so that, for example, a factual verifier may require multiple independent factual fields while a procedural verifier may be satisfied with one. The composer applies the floor at the same point at which it applies the verifier mapping, and a chain that fails the floor is rejected for the same reason that a chain failing the mapping is rejected.
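The per-class floor might be applied as below; the representation of residual evidence as fields grouped by class, and the floor values, are illustrative assumptions:

```python
def meets_floor(residual, floors):
    """residual: evidence class -> set of fields still available to the verifier.
    floors: evidence class -> minimum field count (per-class enumeration).
    A non-empty but undersized residual counts as starvation."""
    return all(len(residual.get(cls, ())) >= n for cls, n in floors.items())

# Illustrative floors: a factual verifier needs two independent fields,
# a procedural verifier is satisfied with one.
floors = {"factual": 2, "procedural": 1}

meets_floor({"factual": {"f1"}, "procedural": {"p1"}}, floors)        # False
meets_floor({"factual": {"f1", "f2"}, "procedural": {"p1"}}, floors)  # True
```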
Alternative Embodiments
In a multi-LLM embodiment several language models contribute proposals to the same chain, and the composer enforces that no single model's output is the sole evidence source for any verifier. In an embodiment with declarative external sources - retrieval-augmented databases, sensor feeds, attested ledgers - the composer treats each external source as an evidence node and the redundancy requirement is satisfied by independence among external sources rather than among internal skills. In a regulated embodiment the composer emits a structured certificate alongside the accepted chain, attesting that the starvation check was performed and listing the residual evidence at every node; the certificate is admitted into the agent's lineage so that an auditor can reproduce the check.
The check may also be embodied as a separate static analyser that runs ahead of dispatch in environments where the composer is not under deployer control - for example, when the chain is supplied by a third-party plugin. In such embodiments the analyser refuses to register any chain that fails the check, with the same structural effect: a starved chain never becomes executable.
A further embodiment treats the starvation check as a development-time linter as well as a runtime composer gate. In the development embodiment the check runs over every chain template the developer authors and emits a structured diagnostic naming the offending node, the missing evidence class, and the verifier mapping clause that the chain failed to satisfy. The diagnostic is sufficient for the developer to repair the chain without having to invoke the runtime, and because the same check runs in both contexts a chain that passes development cannot regress at composition time. This embodiment compresses the feedback loop for safe authorship of skill chains and is particularly relevant in deployments that integrate skills from multiple suppliers.
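A structured diagnostic of the kind the development embodiment emits could take a shape like the following; the key names and clause identifier are hypothetical, since the disclosure fixes only what the diagnostic must name:

```python
def lint_diagnostic(node, missing_class, clause):
    """Names the offending node, the missing evidence class, and the
    verifier-mapping clause the chain template failed to satisfy."""
    return {
        "node": node,
        "missing_evidence_class": missing_class,
        "verifier_mapping_clause": clause,
        "message": (f"chain template starves the verifier of "
                    f"'{missing_class}' at node '{node}' "
                    f"(clause '{clause}')"),
    }
```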
Composition
The starvation check composes with the three-engine pipeline (mutation, validation, arbitration), the state-schema, and the trust-slope mechanism. Because the check is performed before any candidate state is constructed, it sits upstream of mutation: the mutation engine is never invited to merge a proposal whose chain would have failed the check. The validation engine consequently spends its budget on chains that have already been shown to be structurally non-degenerate, which is the correct division of labour - the validation engine is for content correctness, the starvation check is for structural sufficiency.
The check also composes with the state-schema and the trust slope. A chain that would have been admissible in the steady phase may be inadmissible in fade or in a low-slope condition because the policy reference tightens the redundancy requirement under those circumstances. The composer reads the current phase and the current slope at the moment of composition and applies the matching parameter set. The structural property is preserved across all combinations: a chain that fails the check is not dispatched.
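The phase- and slope-dependent parameter selection might look like this; the table, the band boundary, and the counts are illustrative assumptions, since the disclosure says only that the policy reference tightens the redundancy requirement in fade or under a low slope:

```python
# Illustrative parameter table: minimum independent-verifier count per
# (state-schema phase, trust-slope band). Fade and low slope tighten it.
REDUNDANCY = {
    ("steady", "normal"): 1,
    ("steady", "low"): 2,
    ("fade", "normal"): 2,
    ("fade", "low"): 3,
}

def redundancy_minimum(phase, slope, low_threshold=0.2):
    """Read the current phase and slope at the moment of composition and
    return the matching minimum (threshold value is an assumption)."""
    band = "low" if slope < low_threshold else "normal"
    return REDUNDANCY[(phase, band)]
```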
Distinction from Prior Art
Prior approaches to hallucination control fall into two camps. Output filters and fact-checkers operate at runtime on completed model outputs and either suppress or annotate suspected hallucinations after they have been generated. Retrieval augmentation supplies an external evidence stream to a model in the hope that the model will ground its outputs against that stream. Neither approach examines the structural sufficiency of the surrounding chain. A retrieval-augmented model whose chain contains no independent verifier - because the verifier itself depends on the same retrieval - is, in structural terms, starved, and the hallucination it produces is not detectable by any runtime check that does not already know what the truth is.
The starvation check differs in kind. It does not classify outputs, does not consult a fact base, and does not score plausibility. It examines the graph of evidence flow at composition and refuses configurations in which the proposing skill has consumed the verifier's table. The structural property - that a degenerate chain cannot be compiled - is unavailable to runtime classifiers and to retrieval augmentation alone, regardless of how either is tuned.
A third family of prior art - chain-of-thought self-consistency techniques - asks a single language model to produce multiple reasoning traces and then chooses the modal answer. Self-consistency does not address starvation: it multiplies the proposing voice without adding any independent verifier, so a starved chain remains starved when it is replicated. The structural starvation check rejects such configurations regardless of how many traces the proposer is willing to generate, because the check is on the evidence graph, not on the multiplicity of the proposer.
Disclosure Scope
This disclosure covers the closed evidence-class taxonomy, the per-skill manifest declaring consumption and emission, the verifier mapping that ties transition classes to required evidence classes, the redundancy requirement and its independence predicate, the LLM-skill discipline that forbids self-verification by language-model-derived skills, the compile-time rejection of starved chains, the certificate format that attests to a successful check, and the composition of the check with the three-engine pipeline, the state-schema, and the trust-slope mechanism. The disclosure extends to embodiments with multiple language models, with declarative external sources, and with separate static analysers, provided that a chain that fails the check never becomes executable.
The disclosure further covers the residual-evidence floor and its per-class enumeration, the development-time linting embodiment that emits structured diagnostics naming offending nodes and missing evidence classes, the certificate emission that admits the result of a successful check into the agent's lineage, and the composer's reading of the active state-schema phase and the current trust slope at the moment of composition. It covers the rule that LLM-derived skills must always be paired with at least one verifier whose evidence trace does not pass through any language-model-derived skill in the same chain, the rule that two verifiers consuming the same evidence from the same source are not independent for the purpose of the redundancy requirement, and the rule that the evidence-class taxonomy is closed at agent instantiation so that skills cannot invent new classes at runtime to evade the check. The disclosure embraces multi-supplier deployments in which skill manifests are authored by different parties, regulated deployments in which the certificate is the artefact submitted to the auditor, and constrained-environment deployments in which the analyser runs as a separate process ahead of dispatch, in each case preserving the structural property that a starved chain never becomes executable.