The Compliance Case for Cognitive Architecture Under the EU AI Act
by Nick Clark | Published March 27, 2026
The European Union has assembled the most demanding AI governance regime in the world: Regulation (EU) 2024/1689 (the AI Act), the General Data Protection Regulation (GDPR), the Cyber Resilience Act (CRA), the NIS2 Directive, the European Cybersecurity Certification Scheme on Common Criteria (EUCC), the voluntary AI Pact, the General-Purpose AI Code of Practice, the European Union Agency for Cybersecurity (ENISA) AI guidance, and the operational oversight of the European AI Office. Each instrument presupposes that high-risk AI systems possess internal structure that can be interpreted, monitored, intervened upon, and recorded. Current large-language-model architectures lack that structure. They map inputs to probability distributions over tokens; they expose no decision process to oversee, no risk state to manage, and no event sequence to log. The Adaptive Query human-relatable-intelligence primitive supplies the cognitive architecture that makes these requirements architecturally satisfiable rather than aspirationally documented.
Regulatory Framework
The EU AI Act establishes a risk-tiered regime in which Annex III high-risk systems face the most demanding obligations. Article 9 requires a continuous risk management system across the entire lifecycle. Article 10 imposes data and data-governance obligations on training, validation, and testing sets. Article 11 requires technical documentation maintained against the Annex IV template. Article 12 mandates automatic logging of events sufficient for traceability. Article 13 requires that systems be designed and developed to enable deployers to interpret outputs and use them appropriately, including instructions for use that disclose capabilities, limitations, and known or foreseeable risks. Article 14 requires that high-risk systems be designed to be effectively overseen by natural persons during the period in which they are in use, including through human-machine interface tools that enable understanding, monitoring, and intervention. Article 15 requires accuracy, robustness, and cybersecurity appropriate to the intended purpose.
For general-purpose AI models, Articles 51 through 55 add transparency obligations, copyright policy requirements, and, for models with systemic risk, model evaluation, adversarial testing, incident reporting, and cybersecurity protection. The General-Purpose AI Code of Practice published by the AI Office operationalizes these obligations and is the principal vehicle by which providers will demonstrate adequacy before harmonized standards land.
GDPR Articles 5, 22, 25, and 35 layer onto the AI Act for any system processing personal data. Article 22 restricts solely automated decisions producing legal or similarly significant effects and conditions them on safeguards including the right to human intervention and to contest the decision. Article 25 requires data protection by design and by default. Article 35 requires data protection impact assessments where processing is likely to result in a high risk to natural persons; the European Data Protection Board's guidelines treat most Annex III deployments as triggering this obligation.
The Cyber Resilience Act extends product cybersecurity obligations to products with digital elements, including AI-enabled products, requiring vulnerability handling, secure-by-design development, and conformity assessment. NIS2 imposes risk-management and incident-reporting obligations on essential and important entities, many of which deploy high-risk AI in scope of the Act. The EUCC scheme provides the certification baseline that AI Act Article 42 conformity routes can reference. ENISA's Multilayer Framework for Good Cybersecurity Practices for AI and its threat landscape reports define the technical posture expected of providers and deployers.
The AI Pact, the Code of Practice, and the AI Office's enforcement program collectively signal that documentation alone will not satisfy supervisors. Providers will be asked to demonstrate that the system is designed to be overseen, not merely that an oversight policy exists; that risk is managed continuously, not merely that a risk register has been authored; that logs enable traceability, not merely that logs exist.
Architectural Requirement
The Act and its companion regimes converge on a single architectural premise: the AI system must possess inspectable internal state. Article 13 transparency presupposes that there is something to be transparent about. Article 14 human oversight presupposes that there is a decision process whose progression can be monitored and interrupted. Article 12 logging presupposes that there are events with structure to record. Article 9 continuous risk management presupposes that there is operational state whose risk profile can be evaluated as it changes.
Concretely, the architecture must expose at runtime the agent's confidence in its current task, an integrity signal indicating consistency with prior behavior and declared constraints, a capability assessment indicating what the agent is and is not authorized and competent to do, and an affective or salience state indicating which considerations are currently weighting the decision. These primitives must be readable by oversight tooling, not merely inferable from output text.
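What that inspectable state might look like is easiest to show concretely. The sketch below is a minimal illustration in Python, assuming a hypothetical CognitiveState type and readable_by_oversight method; none of these names come from a published AQ interface.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CognitiveState:
        """Runtime state an oversight tool can read at any step (illustrative)."""
        confidence: float        # estimated reliability for the current task, 0.0 to 1.0
        integrity: float         # consistency with prior behavior and declared constraints
        capabilities: frozenset  # what the agent is authorized and competent to do
        salience: dict           # considerations currently weighting the decision

        def readable_by_oversight(self) -> dict:
            # Exposed as structured data, not inferred from output text
            return {
                "confidence": self.confidence,
                "integrity": self.integrity,
                "capabilities": sorted(self.capabilities),
                "salience": dict(self.salience),
            }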
The architecture must also expose governance gates as structural elements of the execution cycle. A human-in-the-loop requirement under Article 14 cannot be implemented as an external review queue that observes outputs after they have been produced; it must be a constraint that the agent cannot pass without authorization. The same applies to data-protection constraints under GDPR Article 25, cybersecurity constraints under the CRA and NIS2, and intended-purpose boundaries under Article 13.
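The difference between an advisory check and a structural gate is worth making concrete. In the sketch below, an illustration with invented names rather than a definitive implementation, the execution step cannot run unless the gate is satisfied; there is no path around it.

    class GateNotSatisfied(Exception):
        """Raised when the cycle reaches a gate without required authorization."""

    class HumanAuthorizationGate:
        """A structural gate: execution blocks here; it does not review output later."""
        def __init__(self, decision_classes: set):
            self.decision_classes = decision_classes

        def check(self, decision_class: str, authorization_token=None) -> None:
            if decision_class in self.decision_classes and authorization_token is None:
                # Not an external review queue: the action never runs
                raise GateNotSatisfied(f"human authorization required for {decision_class!r}")

    def execute_step(action, gate, token=None):
        gate.check(action.decision_class, token)  # blocks before the action runs
        return action.run()                       # reached only if the gate is satisfied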
Risk management under Article 9 must be operationalized as a property of the agent's runtime, not as a periodic document review. When integrity deviates, when confidence drops, or when capability assessment indicates the agent is operating outside its competence, scope must contract automatically. The risk management system is the agent's behavior under those signals, not the binder describing it.
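A sketch of that behavior, with thresholds and action names that are purely illustrative: the authorized scope is computed from the live signals on every step, so the risk posture changes the moment the signals do.

    def authorized_scope(state, task_class: str, full_scope: set) -> set:
        """Contract scope as the agent's own signals change (thresholds illustrative)."""
        scope = set(full_scope)
        if state.integrity < 0.9:
            scope -= {"write", "delete", "transact"}  # integrity deviation: drop irreversible actions
        if state.confidence < 0.6:
            scope = {"read", "escalate"}              # low confidence: escalate rather than complete
        if task_class not in state.capabilities:
            scope = {"halt"}                          # outside assessed competence: halt for review
        return scope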
Logging under Article 12 must record the cognitive state that produced each decision, not merely the input and output. Traceability requires that a reviewer can reconstruct why the agent did what it did, in terms the Act's transparency obligations recognize: capabilities exercised, limitations encountered, risks identified, and oversight applied.
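Concretely, each log entry would carry the cognitive state alongside the decision. A minimal sketch, reusing the illustrative CognitiveState above and an invented JSONL log file:

    import datetime
    import json

    def log_decision(state, decision, gates_fired, resolution):
        """Append a structured record of why the decision was made, not just what it was."""
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "decision": decision,
            "cognitive_state": state.readable_by_oversight(),  # capabilities, limits, risks
            "gates_fired": gates_fired,                        # oversight applied
            "resolution": resolution,                          # how each gate was resolved
        }
        with open("decision_log.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")
        return record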
Why Procedural Compliance Fails
The dominant compliance posture for LLM-based high-risk systems is procedural: a model card, a risk assessment, a data governance memo, a logging configuration, a monitoring dashboard, and a human-review queue. Each artifact addresses an Article by reference. None of them provides the structural property the Article presupposes.
Model cards describe behavior from the outside. They do not satisfy Article 13 transparency for a deployer who must interpret a specific output, because they describe aggregate behavior over benchmarks rather than the cognitive state that produced this output. The deployer's instructions for use cannot tell the deployer what the agent currently believes, what it is currently uncertain about, or what constraint is currently binding, because the LLM has no such state to expose.
External review queues do not satisfy Article 14 human oversight. The reviewer observes the output after the decision has been produced. The Act requires the ability to monitor operation and to intervene, which presupposes a decision process that unfolds over time and can be interrupted. An LLM forward pass is not a process the reviewer can monitor; by the time the reviewer sees anything, the decision is complete. GDPR Article 22's right to human intervention is similarly degraded: intervention after the fact is appeal, not oversight.
Periodic risk assessments do not satisfy Article 9 continuous risk management. The Act asks for risk management throughout the lifecycle, including at runtime. A quarterly review with a tabletop exercise cannot detect the moment at which a deployed system begins operating outside its competence or violating an intended-purpose boundary. The CRA and NIS2 vulnerability and incident-handling obligations compound this: an architecture that cannot detect its own deviation cannot report it within the deadlines those instruments impose.
Output logs do not satisfy Article 12 traceability. A log of prompts and completions records what was said but not why. A reviewer reconstructing the decision must infer the reasoning from the text, which is precisely the inference the Act's transparency obligations are designed to make unnecessary. The AI Office's conformity assessments and the Code of Practice's evaluation expectations increasingly require structured records that can be queried and aggregated, not narrative transcripts.
Procedural compliance produces documentation that satisfies a checklist read at a distance and fails examination conducted up close. As the AI Office, national supervisory authorities, and notified bodies move from intake to enforcement, the gap between documented compliance and architectural compliance will determine which deployments survive scrutiny.
What the AQ Primitive Provides
The Adaptive Query human-relatable-intelligence primitive supplies cognitive architecture as a runtime substrate for AI agents. The agent maintains explicit cognitive state: a confidence value reflecting estimated reliability for the current task, an integrity signal reflecting consistency with prior behavior and declared constraints, a capability assessment reflecting authorized and competent scope, and an affective or salience state reflecting which considerations are weighting the current decision. These values are inspectable at every step of the execution cycle.
Governance gates are structural elements of the cycle. A human-authorization gate, a data-protection gate, a cybersecurity gate, and an intended-purpose gate are each represented as constraints the agent cannot pass without satisfaction. When a gate fires, the cycle pauses, the cognitive state is exposed to the oversight surface, and the agent waits for resolution. The gate is not advisory. The agent structurally cannot continue without it.
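A sketch of that cycle, continuing the hypothetical API from the earlier examples (the agent, gate, and oversight methods are all invented): when a gate fires, the loop yields to the oversight surface and resumes only on resolution.

    def run_cycle(agent, oversight):
        """Illustrative execution cycle: gates pause the loop rather than annotate it."""
        while not agent.done():
            step = agent.next_step()
            for gate in agent.gates:
                if gate.fires(step):
                    oversight.expose(agent.state.readable_by_oversight())  # state surfaced
                    resolution = oversight.await_resolution(gate, step)    # cycle waits here
                    if not resolution.authorized:
                        agent.halt(reason=resolution.reason)
                        return
            agent.apply(step)  # reached only after every gate is satisfied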
Continuous risk management is implemented through the cognitive state. When integrity deviation rises, the agent's authorized scope contracts automatically. When confidence drops below a threshold for a task class, the agent escalates rather than completes. When capability assessment indicates operation outside competence, the agent halts and exposes the assessment for review. The risk management posture changes in response to the agent's own signals, not in response to a quarterly audit.
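One way to express those responses is as a per-task-class policy. The thresholds and class names below are invented for illustration; the point is that the policy is evaluated against live state rather than a quarterly audit.

    # Illustrative per-task-class risk policy; thresholds and names are invented
    RISK_POLICY = {
        "summarize_document":  {"min_confidence": 0.5, "on_breach": "escalate"},
        "draft_credit_notice": {"min_confidence": 0.8, "on_breach": "escalate"},
        "submit_filing":       {"min_confidence": 0.9, "on_breach": "halt"},
    }

    def enforce(task_class: str, state) -> str:
        # Unknown task classes default to halt: outside assessed competence
        policy = RISK_POLICY.get(task_class, {"min_confidence": 1.0, "on_breach": "halt"})
        if state.confidence < policy["min_confidence"]:
            return policy["on_breach"]  # escalate or halt; never complete silently
        if state.integrity < 0.9 or task_class not in state.capabilities:
            return "halt"               # deviation or out-of-competence: stop and expose state
        return "proceed"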
Logging is the agent's own state history. Every decision is recorded with the cognitive state that produced it, the gates that fired, the constraints that were binding, and the resolution that followed. The record is structured, queryable, and aggregable. Article 12 traceability is a query over the record rather than an inference over a transcript.
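With a structured record, a traceability question becomes a query. A sketch over the illustrative JSONL log from earlier:

    import json

    def decisions_where_gate_fired(log_path: str, gate_name: str):
        """Yield decisions in which a named gate fired, with the state that produced them."""
        with open(log_path) as f:
            for line in f:
                record = json.loads(line)
                if gate_name in record["gates_fired"]:
                    yield record

    # e.g. every decision that required human authorization, with confidence at the time
    for r in decisions_where_gate_fired("decision_log.jsonl", "human_authorization"):
        print(r["timestamp"], r["decision"], r["cognitive_state"]["confidence"])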
Transparency at the deployer interface is supplied by the same state. A deployer reading the agent's instructions for use can see not only the agent's general capabilities and limitations but the agent's current confidence, current integrity signal, current capability scope, and current binding constraints. Article 13 is satisfied at runtime, not only in the manual.
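A sketch of that deployer-facing surface, hypothetical throughout: the same state the gates and logs consume, exposed read-only at runtime.

    def runtime_status(agent) -> dict:
        """Read-only runtime view for the deployer (field names illustrative)."""
        state = agent.state
        return {
            "confidence": state.confidence,  # for the current task, not a benchmark aggregate
            "integrity": state.integrity,
            "authorized_scope": sorted(state.capabilities),
            "binding_constraints": [g.name for g in agent.gates if g.currently_binding()],
        }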
Compliance Mapping
Article 9 continuous risk management maps onto the integrity, confidence, and capability primitives and the automatic scope contraction they trigger. The risk management system is the agent's runtime behavior under those signals; the documentation describes that behavior rather than substituting for it.
Article 10 data governance maps onto the data-protection gate and the structured record of which data sources were consulted under which authorization. GDPR Article 25 data-protection-by-design is satisfied by the same gate; Article 35 DPIAs are produced from the structured record.
Article 11 technical documentation, against the Annex IV template, is generated from the structural specification of the cognitive architecture rather than authored separately. Article 12 logging is the cognitive state history. Article 13 transparency is the inspectable runtime state. Article 14 human oversight is the governance gates and the oversight surface they expose. Article 15 accuracy, robustness, and cybersecurity is supported by the integrity primitive and the cybersecurity gate, which align with EUCC certification routes and CRA vulnerability-handling obligations.
For general-purpose AI obligations under Articles 51 through 55, the primitive supplies the structured evaluation record, the incident detection and reporting pipeline, and the cybersecurity posture that the Code of Practice operationalizes. ENISA AI guidance maps onto the integrity and capability primitives and the cybersecurity gate. NIS2 risk-management and incident-reporting obligations are supported by the same incident pipeline. GDPR Article 22 is satisfied by the human-authorization gate, which makes solely-automated decisions structurally impossible for in-scope decision classes.
Adoption Pathway
Adoption begins with instrumentation: providers wrap their existing inference path with the cognitive architecture, exposing confidence, integrity, capability, and affective state to oversight tooling and introducing the first governance gates for the highest-stakes decision classes. The first conformity assessment cycle uses the structural record as primary evidence and the procedural artifacts as secondary.
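A sketch of that instrumentation step, assuming a generic callable inference path and the hypothetical primitives from the earlier examples: the wrapper adds state and gates around the existing model call without replacing it.

    class InstrumentedAgent:
        """Wrap an existing inference path with cognitive state and gates (illustrative)."""
        def __init__(self, infer, gates, estimator):
            self.infer = infer          # the existing model call, unchanged
            self.gates = gates          # first gates: highest-stakes decision classes
            self.estimator = estimator  # produces a CognitiveState for the current task

        def run(self, task, oversight):
            state = self.estimator.assess(task)
            fired = [g for g in self.gates if g.fires(task)]
            for gate in fired:
                oversight.expose(state.readable_by_oversight())
                if not oversight.await_resolution(gate, task).authorized:
                    log_decision(state, None, [g.name for g in fired], "blocked")
                    return None         # structurally blocked, and recorded as such
            output = self.infer(task.prompt)
            log_decision(state, output, [g.name for g in fired], "authorized")
            return output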
Subsequent cycles extend the gates to cover the full Annex III decision surface, integrate the cognitive state with deployer interfaces under Article 13, and federate logging across deployments for AI Office and national supervisory authority access. AI Pact participation and Code of Practice adherence are demonstrated through the structural record rather than narrative attestation.
Mature deployment integrates the architecture with EUCC certification, CRA conformity assessment, and NIS2 incident-reporting pipelines, so that a single structural substrate supports the full stack of EU obligations. At that point, the cognitive architecture has delivered what the regulatory framework has always presupposed: AI systems with internal structure that can be interpreted, overseen, managed, and recorded as a property of how they operate, not as documentation about how they are claimed to operate.