Full-Stack Cognition Architecture for Financial Services

by Nick Clark | Published March 27, 2026

Regulated finance is the densest concentration of AI-relevant compliance obligations in the global economy. Securities-records retention, conduct-of-business rules, prudential model-risk requirements, privacy and cyber-resilience mandates, and an emerging body of explicit AI supervisory guidance converge on a single technical demand: every model-mediated client interaction, trading decision, surveillance flag, and risk calculation must be traceable to a credentialed policy, an inspectable model lineage, and an auditable evidence base — and that traceability must hold at the boundaries between the firm's AI systems, not just within them. The Adaptive Query stack — execution-platform primitives, cryptographic governance, and integrity-coherence monitoring — is the architecture that makes that demand structural rather than procedural.


Regulatory Framework

The U.S. federal layer begins with SEC Rule 17a-4, which imposes books-and-records preservation requirements on broker-dealers, including specific provisions for retaining electronic communications and order records in non-rewriteable, non-erasable (WORM) form or, since the 2022 amendments, in an electronic recordkeeping system with a compliant audit-trail alternative. FINRA Rules 3110 (supervision), 3120 (supervisory controls), 4511 (general books-and-records), and 2210 (communications with the public), together with the suitability and Reg BI obligations under FINRA 2111 and SEC Rule 15l-1, govern the substance of advisory and brokerage interactions. The Investment Advisers Act books-and-records rule (Rule 204-2) adds an adviser-side regime. The Gramm-Leach-Bliley Act and its implementing Safeguards and Privacy rules govern non-public personal information; the Bank Secrecy Act and OFAC sanctions regimes govern transaction monitoring and screening.

The prudential layer adds the Federal Reserve's SR 11-7 model-risk-management guidance, the OCC's Heightened Standards under 12 CFR Part 30 Appendix D for large national banks, the FFIEC examination handbooks across IT, BSA/AML, and management, and the Basel III/IV capital and operational-risk frameworks. State-level cyber requirements — most prominently New York DFS 23 NYCRR Part 500, including its 2023 amendments addressing AI and continuous monitoring — impose specific governance, encryption, and incident-reporting obligations. The CFPB's Section 1033 personal-financial-data-rights regime and emerging fair-lending expectations under ECOA layer additional obligations on consumer-finance AI.

The European layer is led by MiFID II (best execution, conduct of business, transaction reporting, recordkeeping under Art. 16 and RTS 6 for algorithmic trading), GDPR (lawful basis, automated-decision-making constraints under Art. 22, data-subject rights), and the Digital Operational Resilience Act (DORA), effective January 2025, which imposes ICT risk management, third-party risk, incident reporting, and operational-resilience-testing obligations across the EU financial sector. The European Banking Authority's discussion paper on machine learning for IRB models and the ECB's expectations on banks' AI governance establish supervisor-level expectations that will harden into binding standards. The EU AI Act classifies certain financial-services use cases — credit scoring and creditworthiness assessment in particular — as high-risk, triggering conformity assessment, post-market monitoring, and human-oversight obligations.

The Architectural Requirement

Read together, these regimes impose four cross-cutting architectural requirements. First, every client-facing or market-facing model output must be governed at generation against the relevant client profile, suitability and best-execution constraints, conduct-of-business rules, and applicable jurisdictional policy — not flagged after the fact by surveillance. Second, every trained model must carry an inspectable lineage that satisfies SR 11-7 model-risk validation, EU AI Act training-data governance, and EBA expectations on machine-learning model justification. Third, the cognitive state of human decision-makers — traders, advisors, underwriters, compliance officers — must be a first-class governance signal, because the regimes increasingly treat human oversight as a substantive control rather than a procedural one and because operational-resilience regimes (DORA, OCC Heightened Standards) treat workforce-driven failures as in-scope incidents. Fourth, the regulatory landscape itself must be a continuously assessed object: new rules, enforcement actions, and supervisor letters must be ingested, evaluated against the firm's activities, and converted into governance configuration without manual reconciliation.

None of these requirements is satisfiable by a tool-by-tool compliance posture. Each is an architectural property of the firm's cognitive infrastructure or it is absent.

Why Procedural Compliance Fails

The dominant compliance posture in financial-services AI is silo-procedural. A wealth-management firm operates one model for portfolio recommendation, a second for risk assessment, a third for compliance screening, a fourth for client communication, and a fifth for surveillance. Each system has its own governance documentation, its own data, its own model-validation file, and its own audit trail. When a regulator examines the firm's AI governance — under SR 11-7, NY DFS 500, DORA, or an SEC sweep — the regulator finds multiple independent frameworks rather than a coherent architecture, and the interactions between systems become governance gaps at the boundary.

The boundary failures are operationally consequential. A recommendation that is individually compliant from the advisory model and individually within risk limits from the risk model may still violate Reg BI or MiFID II suitability when both outputs are evaluated together against the client's full profile. A surveillance alert that is individually below threshold across each monitoring system may exceed threshold across their union. A model that is individually validated under SR 11-7 may interact with an upstream model whose validation envelope it exceeds. Siloed governance cannot detect cross-system governance failures, because no system has the credential to evaluate the composite.
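The cross-system threshold failure can be made concrete with a small sketch. The system names, scores, thresholds, and the noisy-OR combination rule below are illustrative assumptions, not a description of any actual surveillance stack; the point is only that three silos can each stay under their own alert line while their union does not.

```python
from dataclasses import dataclass

# Hypothetical sketch: three siloed systems score the same activity
# independently; none breaches its own threshold, but the combined
# evidence does. All names, scores, and thresholds are illustrative.

@dataclass
class SurveillanceSignal:
    system: str
    score: float       # normalized 0..1 risk score
    threshold: float   # per-system alert threshold

signals = [
    SurveillanceSignal("trade-surveillance", 0.55, 0.70),
    SurveillanceSignal("comms-monitoring",   0.60, 0.70),
    SurveillanceSignal("aml-screening",      0.50, 0.70),
]

def silo_alerts(signals):
    """Per-system view: alert only when a single system breaches."""
    return [s.system for s in signals if s.score >= s.threshold]

def composite_alert(signals, composite_threshold=0.85):
    """Cross-system view: combine independent scores (noisy-OR style)."""
    p_clean = 1.0
    for s in signals:
        p_clean *= (1.0 - s.score)
    return (1.0 - p_clean) >= composite_threshold

print(silo_alerts(signals))      # [] -- no individual silo fires
print(composite_alert(signals))  # True -- the union exceeds threshold
```

Only a component with visibility across all three systems, and the credential to evaluate their composite, can fire the second check.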

Procedural compliance also fails the recordkeeping regimes at their core requirement. SEC 17a-4 and MiFID II Art. 16 demand not only that records exist but that they be reconstructable to the substantive decision they document. A client recommendation generated by a multi-model pipeline, surveilled by a separate system, ratified by a human advisor under conditions monitored by yet another tool, produces an audit trail whose pieces exist in five systems with no cryptographic binding between them. The reconstruction is an exercise in narrative assembly, not in structural retrieval. DORA's incident-reporting timelines and operational-resilience-testing requirements compound the problem: the firm cannot test the resilience of an integrated cognitive architecture it does not actually have.

The model-risk regimes reach the same impasse. SR 11-7 expects effective challenge, ongoing monitoring, and outcomes analysis at a depth that requires inspectable training lineage. The EU AI Act's training-data governance requirements and the EBA's machine-learning expectations repeat the demand at higher specificity. A model whose training provenance is captured only in the data-science team's notebooks does not meet the standard, however well documented its post-deployment performance.

What the AQ Primitive Provides

The execution-platform layer governs every model output at the point of generation. Advisory recommendations, client communications, portfolio rebalancing suggestions, surveillance evaluations, and trading-assist outputs are all evaluated against the client's complete profile, the applicable conduct rules, the firm's policies, and the relevant jurisdictional policy bundle before generation. Governance is contextual: the same underlying model produces structurally different governed outputs for different clients, jurisdictions, and product wrappers, with each output recorded in lineage against the policy under which it was evaluated. Reg BI suitability, MiFID II appropriateness, and Reg S/EU cross-border distribution constraints become generation-time properties rather than post-hoc surveillance hits.
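A generation-time gate of this kind can be sketched as a pre-generation check that returns both a decision and its reasons, so refusals are themselves auditable. The field names, policy structure, and suitability logic below are assumptions for illustration, not the Adaptive Query API.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a generation-time governance gate; field
# names, policy shape, and check logic are assumptions.

@dataclass
class ClientProfile:
    client_id: str
    jurisdiction: str
    risk_tolerance: str                 # e.g. "conservative" | "aggressive"
    permitted_products: set = field(default_factory=set)

@dataclass
class PolicyBundle:
    jurisdiction: str
    blocked_products: set = field(default_factory=set)

def governance_gate(client, policy, proposed_product, product_risk):
    """Evaluate a proposed recommendation BEFORE generation.

    Returns (allowed, reasons) so that every refusal is recorded with
    the specific rule it tripped, not just a pass/fail bit."""
    reasons = []
    if policy.jurisdiction != client.jurisdiction:
        reasons.append("policy/jurisdiction mismatch")
    if proposed_product in policy.blocked_products:
        reasons.append("product blocked in jurisdiction")
    if proposed_product not in client.permitted_products:
        reasons.append("outside client's permitted product set")
    if client.risk_tolerance == "conservative" and product_risk == "high":
        reasons.append("suitability: risk exceeds client tolerance")
    return (not reasons), reasons

client = ClientProfile("C-1", "US", "conservative", {"etf", "bond"})
policy = PolicyBundle("US", {"crypto-derivative"})
print(governance_gate(client, policy, "etf", "low"))   # (True, [])
```

The same model behind the gate produces different governed outcomes per client and jurisdiction, because the gate, not the model, carries the policy.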

The cryptographic-governance layer binds policy to execution and execution to record. Credentialed policy bundles flow from compliance and legal authorship through deployment to the generation gate; engagement of a model is admissible only if the credential chain validates and the proposed action falls within the intersection of every layer's authorization. The output lineage is cryptographically tied to the credential under which it was evaluated, satisfying the SEC 17a-4 / MiFID II reconstructability requirement as a structural property and producing the immutable audit artifact that NY DFS 500 and DORA increasingly assume. The same cryptographic primitive supports the books-and-records WORM requirement without depending on the underlying storage layer's properties alone.
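The binding of output to credential can be illustrated as a hash-chained lineage log: each record commits to the policy credential in force and to the previous record, so tampering anywhere breaks reconstruction. This is a minimal sketch; a real deployment would add digital signatures and WORM storage, and the record fields are assumptions.

```python
import hashlib
import json

# Minimal sketch of cryptographically bound output lineage: a
# tamper-evident hash chain. Field names are illustrative.

def record_hash(record: dict) -> str:
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()

def append_record(chain, credential_id, output_summary):
    prev = chain[-1]["hash"] if chain else "GENESIS"
    body = {
        "credential_id": credential_id,   # policy bundle in force
        "output": output_summary,
        "prev": prev,
    }
    chain.append({**body, "hash": record_hash(body)})
    return chain

def verify_chain(chain) -> bool:
    """Recompute every link; any edit to any record fails verification."""
    prev = "GENESIS"
    for rec in chain:
        body = {k: rec[k] for k in ("credential_id", "output", "prev")}
        if rec["prev"] != prev or rec["hash"] != record_hash(body):
            return False
        prev = rec["hash"]
    return True

chain = []
append_record(chain, "POL-2026-03", "advisory rec #1")
append_record(chain, "POL-2026-03", "advisory rec #2")
print(verify_chain(chain))            # True
chain[0]["output"] = "tampered"
print(verify_chain(chain))            # False -- reconstruction fails
```

Reconstructability becomes a property the verifier checks, not a narrative the firm assembles.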

Training governance manages the model-risk surface that SR 11-7, the EBA, and the EU AI Act now scrutinize at depth. Regime-aware gradient routing prevents models from over-learning recent market regimes — a structural safeguard against the regime-shift failure mode that procedural validation cannot detect. Provenance tracing connects model behaviors to specific training influences, producing the inspectable lineage that effective challenge under SR 11-7 requires and that EU AI Act conformity assessment expects. Validation teams assess whether observed model behavior is grounded in structurally appropriate training rather than regime-specific memorization, and the evidence is an artifact rather than an assertion.
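One way to make the lineage inspectable is to tag every training batch with a market-regime label and record the regime mix behind each checkpoint, so validators can query for regime over-concentration rather than infer it. The regime labels, threshold, and log shape below are illustrative assumptions.

```python
from collections import Counter

# Hedged sketch: regime-aware training provenance. Each batch carries a
# market-regime tag; the log records the regime mix per checkpoint so
# effective challenge can detect regime-specific over-learning.

def log_checkpoint(provenance, checkpoint_id, batch_regimes):
    provenance[checkpoint_id] = Counter(batch_regimes)
    return provenance

def regime_concentration(provenance, checkpoint_id, max_share=0.6):
    """Flag checkpoints whose training mix over-weights one regime."""
    counts = provenance[checkpoint_id]
    total = sum(counts.values())
    regime, n = counts.most_common(1)[0]
    share = n / total
    return {"regime": regime, "share": share, "flag": share > max_share}

prov = {}
log_checkpoint(prov, "ckpt-7", ["low-vol"] * 8 + ["high-vol"] * 2)
print(regime_concentration(prov, "ckpt-7"))
# {'regime': 'low-vol', 'share': 0.8, 'flag': True}
```

The validation artifact is the log itself: the question "is this behavior regime memorization?" has a structural answer before the model reaches production.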

The integrity-coherence layer monitors human decision-makers as a first-class governance signal. Trading desks receive coherence assessments that detect tilt, revenge trading, and overconfidence cycles; advisory teams receive trajectory assessments that detect the burnout that quietly degrades client-relationship quality and conduct standards; compliance and underwriting functions receive analogous monitoring. When a human's coherence signal indicates disruption, the execution-platform layer increases governance stringency on their AI-assisted tools — a structural integration of human oversight and machine governance that DORA's operational-resilience expectations and OCC Heightened Standards' conduct-of-business expectations both move toward.
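The coupling between the human signal and machine governance can be sketched as a simple escalation map from a coherence score to a governance posture. The tiers, thresholds, and posture fields are assumptions chosen for illustration.

```python
# Illustrative sketch: map a desk-level coherence score (1.0 = nominal)
# to the governance stringency applied to that person's AI-assisted
# tools. Tiers and thresholds are assumptions.

def stringency_for(coherence_score: float) -> dict:
    if coherence_score >= 0.8:
        # nominal: standard generation-time governance only
        return {"tier": "standard", "second_review": False, "limit_scale": 1.0}
    if coherence_score >= 0.5:
        # degraded: tighten limits, require second review on outputs
        return {"tier": "elevated", "second_review": True, "limit_scale": 0.5}
    # severe disruption: AI-assisted actions need supervisor sign-off
    return {"tier": "restricted", "second_review": True, "limit_scale": 0.0}

print(stringency_for(0.9)["tier"])   # standard
print(stringency_for(0.3)["tier"])   # restricted
```

The design point is that the escalation is automatic and logged, so human oversight is demonstrably in force exactly when the human signal says it is weakest.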

Semantic discovery provides continuous regulatory-landscape monitoring. Persistent compliance objects track the rule, enforcement, and supervisory-letter surface for each product, business line, and jurisdiction; new publications are evaluated against existing activities and converted into proposed governance-configuration changes. The compliance function moves from reactive reconciliation to continuous assessment, and the evidence trail satisfies the supervisory expectation that the firm understands its regulatory perimeter at any given moment.
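A persistent compliance object of this kind can be sketched as an object that matches each new publication against its perimeter and queues a governance-configuration proposal on a hit. The matching below is naive set intersection and every field name is an illustrative assumption.

```python
from dataclasses import dataclass, field

# Sketch of a persistent compliance object: it tracks one business
# line's regulatory perimeter and converts relevant publications into
# proposed governance-configuration changes. Matching is deliberately
# naive (jurisdiction + topic overlap) and purely illustrative.

@dataclass
class ComplianceObject:
    business_line: str
    jurisdictions: set
    topics: set                      # e.g. {"recordkeeping", "suitability"}
    proposed_changes: list = field(default_factory=list)

    def ingest(self, publication: dict) -> bool:
        """Evaluate a new rule/enforcement item; queue a change
        proposal when it intersects this object's perimeter."""
        relevant = (
            publication["jurisdiction"] in self.jurisdictions
            and bool(self.topics & set(publication["topics"]))
        )
        if relevant:
            self.proposed_changes.append(
                {"source": publication["id"],
                 "action": "review-policy-bundle"}
            )
        return relevant

obj = ComplianceObject("advisory", {"US", "EU"},
                       {"recordkeeping", "suitability"})
hit = obj.ingest({"id": "SEC-2026-01", "jurisdiction": "US",
                  "topics": ["recordkeeping"]})
print(hit, len(obj.proposed_changes))   # True 1
```

The queue of proposed changes, with each entry traceable to its source publication, is the evidence trail that the firm's regulatory perimeter is continuously assessed.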

Compliance Mapping

The mapping is direct. SEC 17a-4 and MiFID II Art. 16 / RTS 6 recordkeeping obligations are satisfied by cryptographically bound output lineage. FINRA 3110/3120 supervision and 2210 communications, Reg BI, and FINRA 2111 suitability are satisfied by execution-platform governance at generation. SR 11-7 and OCC Heightened Standards model-risk obligations are satisfied by training-governance lineage and effective-challenge artifacts. NY DFS 500, GLBA Safeguards, and DORA cyber-and-resilience obligations are satisfied by the cryptographic-governance layer's credentialing and lineage primitives together with integrity-coherence's workforce-resilience signal. GDPR Art. 22 automated-decision-making constraints and EU AI Act high-risk obligations on credit scoring are satisfied by the credentialed-policy and human-oversight integration. EBA machine-learning expectations and Basel III/IV operational-risk capital allocation are supported by the training-governance and integrity-coherence artifacts. BSA/AML and OFAC obligations consume the same execution-platform and lineage primitives at the surveillance gate.

Adoption Pathway

A firm adopts the stack as a set of governance services that wrap rather than replace existing AI tooling. The execution-platform layer goes in first at the highest-risk client-facing surface — typically advisory and surveillance — where the recordkeeping and conduct-rule pressure is most acute and the demonstrable compliance value most immediate. Cryptographic governance follows as the credentialing and lineage substrate that makes the execution-platform layer audit-grade and that satisfies DORA and NY DFS 500 evidentiary expectations. Training governance integrates into the model-risk lifecycle at the next validation cycle, replacing narrative model documentation with structural lineage. Integrity-coherence deploys to trading, advisory, underwriting, and compliance desks as a workforce-resilience and conduct-monitoring service. Semantic discovery feeds compliance and product-development functions with continuous regulatory-landscape assessment.

Each phase produces standalone supervisory value, and each subsequent phase compounds the previous ones. The endpoint is a firm whose AI governance is a single architectural artifact — credentialed policy, governed generation, inspectable lineage, monitored human oversight, continuous regulatory awareness — rather than an assemblage of vendor contracts and validation memos. That artifact is the form supervisors are increasingly explicit in expecting, and it is the form a procedural compliance posture cannot produce.
