Integrity and Coherence for Government Policy Agents

by Nick Clark | Published March 27, 2026

Government deployment of AI policy agents — for benefits adjudication, regulatory guidance, citizen services, statutory analysis, and inter-agency coordination — is governed by a stack of binding instruments: OMB Memorandum M-24-10 on AI use by federal agencies, the NIST AI Risk Management Framework, the EU AI Act's elevated obligations on public authorities, the Council of Europe Framework Convention on AI, FedRAMP for cloud authorization, and constitutional due-process and equal-protection doctrine that no procedural overlay can substitute for. These instruments converge on an architectural demand: the agent must maintain coherent normative positions across departments, equitable behavior across constituencies, and verifiable alignment with statutory and regulatory authority — as structural properties of the deployment, not as policies attached to it. The integrity-and-coherence primitive disclosed under USPTO provisional 64/049,409 supplies these properties.


1. Regulatory and Compliance Framework

In the United States, OMB Memorandum M-24-10 (March 2024), implementing Executive Order 14110 and carried forward in substance across administrations through OMB M-25-21 and downstream agency guidance, requires federal agencies to inventory AI use cases, designate Chief AI Officers, and apply specific minimum risk-management practices to "rights-impacting" and "safety-impacting" AI. The rights-impacting category includes AI used in benefits eligibility, healthcare access, housing, education, immigration, and law enforcement — that is, the bulk of citizen-facing policy automation. The minimum practices include AI impact assessments, real-world performance testing, ongoing monitoring, public consultation where feasible, notice to affected individuals, plain-language explanations, opt-out where appropriate, and human consideration and remedy for adverse decisions. The NIST AI Risk Management Framework, incorporated by reference in M-24-10 and in numerous state laws, organizes these obligations under its govern, map, measure, and manage functions and demands evidence under each.

The EU AI Act treats public-authority deployment of high-risk AI as carrying elevated obligations: Annex III categories include access to public services and benefits, evaluation of eligibility for emergency services, evaluation of creditworthiness when performed by public authorities, law enforcement risk assessment, migration and border control, and administration of justice and democratic processes. Article 27 imposes a fundamental rights impact assessment on public-authority deployers. Article 26 monitoring duties are non-negotiable. The Council of Europe Framework Convention on AI, opened for signature in September 2024, commits state parties to legality, equality and non-discrimination, transparency and oversight, accountability, privacy, and reliability for AI use by public authorities. In the United Kingdom, the Algorithmic Transparency Recording Standard is mandatory for central-government use of algorithmic tools touching the public.

Constitutional and statutory baselines underlie everything. The Administrative Procedure Act requires reasoned decisionmaking; an agency action grounded in an inscrutable model output that is internally inconsistent with another agency's position is vulnerable to arbitrary-and-capricious challenge. Equal Protection and Title VI prohibit disparate treatment and, in many programs, disparate impact across constituencies. Due process under the Fifth and Fourteenth Amendments requires notice and an opportunity to be heard before deprivation of a protected interest — which, when the deprivation is mediated by an AI agent, requires that the basis of the agent's recommendation be reconstructable. State laws layer on: California's Generative AI Accountability Act, Texas's TRAIGA, New York's algorithmic accountability requirements for state agencies, Colorado's AI Act covering consequential decisions. FedRAMP and StateRAMP govern the cloud platforms underneath. The compliance perimeter is dense, and it is converging on coherence as a structural property.

2. Architectural Requirement

Reading across the stack, the architectural demand on a government policy agent has three dimensions. First, normative coherence: the agent must maintain consistent positions on questions of statutory and regulatory interpretation across departments, time, and constituencies. M-24-10's requirement that AI use be consistent with the agency's authorities and applicable law, the APA's reasoned-decisionmaking standard, and the EU AI Act Article 15 lifecycle-consistency obligation all collapse to a structural requirement that the agent's normative positions be observable, comparable, and reconciled when they diverge. Second, relational equity: the agent must treat constituencies equitably as a governed runtime property, not as a training objective. Equal Protection, Title VI, M-24-10's rights-impacting safeguards, Article 27 fundamental-rights impact, and Council of Europe equality commitments all require equitable behavioral output, which in turn requires that the agent's responsiveness, thoroughness, and decision distribution be instrumented and bounded across cohorts. Third, authoritative alignment: the agent must remain aligned with the current statutory and regulatory framework — including changes — and detect when its outputs would contradict authority. The APA, due-process doctrine, Article 14 human oversight, and the M-24-10 ongoing-monitoring obligation collectively require that contradictions with authority be caught architecturally rather than after a citizen has acted on a recommendation that is inconsistent with current law.
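Read as engineering requirements, the three dimensions suggest three runtime interfaces that any conforming deployment must satisfy. The sketch below is illustrative only; the type and method names are assumptions, not terms drawn from the disclosure or from any of the cited instruments.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class CandidateOutput:
    """A draft agent response, before publication to a citizen (illustrative)."""
    agent_id: str
    department: str
    text: str
    cited_authorities: list[str]  # statute/regulation identifiers relied on
    cohort: str                   # constituency bucket, for equity statistics


class NormativeCoherence(Protocol):
    def consistent_with_peers(self, output: CandidateOutput) -> bool:
        """Same jurisdictional framework, no incompatible peer positions."""


class RelationalEquity(Protocol):
    def within_cohort_bounds(self, output: CandidateOutput) -> bool:
        """Thoroughness, latency, and decision distribution inside bounds."""


class AuthoritativeAlignment(Protocol):
    def aligned_with_authority(self, output: CandidateOutput) -> bool:
        """No contradiction with the current statutory/regulatory corpus."""
```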

3. Why Procedural Compliance Fails

The default federal posture is procedural: an inventory entry, an impact assessment document, a designated Chief AI Officer, a vendor questionnaire, periodic monitoring reports, and a public-facing notice. None of these reaches the architectural demand. The inventory records that an agent exists; it does not constrain what the agent says next. The impact assessment is a snapshot at deployment; the agent's normative positions drift with model updates, prompt changes, and corpus shifts. The Chief AI Officer is a designated human; she cannot read every agent output across every department in real time. The vendor questionnaire is a representation about the model; it is silent on cross-agent coherence, which is a property of the deployment, not the model. Monitoring reports are sampled and lagged; the citizen has already received the inconsistent guidance.

Cross-departmental contradiction is the canonical failure. A Department of Housing agent advises that a particular dwelling configuration qualifies for a subsidy; a Department of Health agent, three weeks later, in a separate adjudication touching the same regulatory intersection, takes a position incompatible with the housing answer. Both agents are internally consistent; the government as a whole is not. Procedural compliance has no surface on which to detect this — each agency runs its own pipeline, its own monitoring, its own audits. The contradiction surfaces, if at all, when a citizen brings a complaint, an inspector general issues a report, or a court vacates an action under the APA. By then, the cost is downstream: lost trust, retrospective remediation, settled litigation, and the political consequence of a government that visibly does not know what it thinks.

Equity at scale is the second canonical failure. Bias testing at deployment, as required by M-24-10 and the NIST AI RMF, measures cohort outcomes on a fixed evaluation set. It does not capture the running distribution of agent thoroughness, response latency, escalation rate, and approval rate across constituencies in production. A monitoring report that aggregates approval rates by cohort may detect gross disparities, but it does not capture interaction-quality variance — the subtle differences in how an agent engages with different communities — and it cannot show, structurally, that equitable treatment is bounded rather than emergent. Constitutional and Title VI claims survive on exactly this kind of evidence, and procedural compliance cannot produce it.
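One way to make "bounded rather than emergent" concrete is a running per-cohort monitor that flags excursions as they happen rather than in a lagged report. A minimal sketch, assuming approval rate as the governed statistic and a flat tolerance; every name and threshold here is hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class CohortStats:
    """Running per-cohort interaction statistics (illustrative fields)."""
    interactions: int = 0
    approvals: int = 0
    total_latency_s: float = 0.0
    escalations: int = 0

    @property
    def approval_rate(self) -> float:
        return self.approvals / self.interactions if self.interactions else 0.0


class EquityMonitor:
    """Flags a cohort whose approval rate drifts more than `tolerance`
    from the all-cohort mean. The flat tolerance is a stand-in for a
    statistically principled bound."""

    def __init__(self, tolerance: float = 0.05):
        self.tolerance = tolerance
        self.stats: dict[str, CohortStats] = defaultdict(CohortStats)

    def record(self, cohort: str, approved: bool, latency_s: float, escalated: bool) -> None:
        s = self.stats[cohort]
        s.interactions += 1
        s.approvals += int(approved)
        s.total_latency_s += latency_s
        s.escalations += int(escalated)

    def excursions(self) -> list[str]:
        rates = {c: s.approval_rate for c, s in self.stats.items() if s.interactions}
        if not rates:
            return []
        mean = sum(rates.values()) / len(rates)
        return [c for c, r in rates.items() if abs(r - mean) > self.tolerance]
```

A production deployment would govern latency, escalation rate, and thoroughness the same way, and would bound interaction quality as well as outcomes.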

The third failure is regulatory drift. Authority changes — a new rule, a court decision, a guidance update. The agent continues to emit recommendations conditioned on the prior framework. Procedural retraining is slow; pre-publication review is human and lossy. The architecture provides no surface on which contradictions with the now-current authority are detected at the moment of generation. Citizens act on superseded guidance, agencies issue inconsistent decisions, and the APA reasoned-decisionmaking standard is breached without anyone observing the breach until it surfaces in litigation.

4. What the Integrity-and-Coherence Primitive Provides

The integrity-and-coherence primitive disclosed under provisional 64/049,409 specifies a three-domain integrity model — normative, relational, and operational — together with a deviation function that operates structurally on agent outputs. The normative domain carries the agent's positions on questions of interpretation as first-class objects: a position is a tuple of jurisdictional context, statutory or regulatory anchor, interpretive stance, and the lineage of observations that supports it. Two agents in the same jurisdictional framework can compare positions on overlapping questions; the deviation function detects inconsistency and routes it to inter-agency reconciliation before either output is published to a citizen. The relational domain carries the agent's behavioral distribution across cohorts as governed statistics: thoroughness, latency, approval rate, escalation rate, with bounds. The deviation function flags excursions in real time. The operational domain carries the agent's adherence to authority — the current statutory and regulatory framework — and the deviation function detects when a candidate output would contradict authority before the output is emitted.
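A minimal sketch of the position tuple and the normative leg of the deviation function, assuming exact-match anchors and treating stance comparison as string inequality; a real deviation function would compare interpretive stances semantically, and all names here are illustrative.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Position:
    """A normative position as a first-class object (fields mirror the prose)."""
    jurisdiction: str         # jurisdictional context
    anchor: str               # statutory or regulatory anchor
    stance: str               # interpretive stance on the question
    lineage: tuple[str, ...]  # IDs of the observations supporting the stance


def normative_deviation(a: Position, b: Position) -> bool:
    """Two positions deviate when they sit on the same anchor in the same
    jurisdiction but take incompatible stances."""
    return (a.jurisdiction == b.jurisdiction
            and a.anchor == b.anchor
            and a.stance != b.stance)
```

Under this shape, the housing/health contradiction from section 3 is detectable the moment the second agent drafts its output: the two positions share a jurisdiction and anchor, the stances differ, and the deviation routes to reconciliation before either answer reaches a citizen.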

The primitive is composable hierarchically: per-agent integrity nests within per-agency integrity, which nests within per-jurisdiction integrity, so the deviation function operates at every level. It is technology-neutral over the underlying model and storage. Every output produces an integrity lineage record — the positions invoked, the bounds in force, the deviation checks performed, the reconciliations triggered. This lineage is the structural artifact that M-24-10 monitoring, EU AI Act Article 26 deployer monitoring, Council of Europe accountability, and APA reasoned-decisionmaking all need and that the conventional architecture cannot produce.
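The nesting and the lineage record might look like the following, where the domain chain gives every record its agent-to-jurisdiction scope; the field names are assumptions drawn from the prose, not the provisional's claim language.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class IntegrityDomain:
    """A nestable integrity scope: agent within agency within jurisdiction."""
    name: str
    parent: Optional["IntegrityDomain"] = None

    def chain(self) -> list[str]:
        node, names = self, []
        while node is not None:
            names.append(node.name)
            node = node.parent
        return names


@dataclass
class LineageRecord:
    """Per-output integrity lineage, one record per published output."""
    output_id: str
    positions_invoked: list[str]
    bounds_in_force: dict[str, float]
    deviation_checks: list[str]
    reconciliations_triggered: list[str]
    domain_chain: list[str] = field(default_factory=list)


# Hypothetical nesting: agent -> agency -> jurisdiction.
jurisdiction = IntegrityDomain("US-Federal")
agency = IntegrityDomain("HUD", parent=jurisdiction)
agent = IntegrityDomain("benefits-agent-3", parent=agency)
assert agent.chain() == ["benefits-agent-3", "HUD", "US-Federal"]
```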

5. Compliance Mapping

M-24-10's minimum practices for rights-impacting AI map directly: the impact assessment becomes a structural document about the integrity bounds in force; ongoing monitoring is mechanized through the deviation function; human consideration and remedy is grounded in lineage records that a reviewer can read and rely on to reverse a decision on informed grounds; notice to affected individuals can include the integrity bounds that governed their interaction. The NIST AI RMF's govern, map, measure, and manage functions each receive a structural surface — govern through the bounds, map through the cross-domain integrity model, measure through the deviation statistics, manage through the reconciliation flow. EU AI Act Articles 9, 14, 15, 26, and 27 are each addressed: risk management is structural, oversight is informed, lifecycle consistency is observable, deployer monitoring is mechanized, and the fundamental rights impact assessment has a substrate to point at.

Constitutional and statutory baselines gain a defensible substrate. APA reasoned-decisionmaking is supported by a lineage that shows the agent's normative position, the authorities invoked, and the consistency check performed. Equal Protection and Title VI claims face an evidentiary record showing that relational bounds were equally in force across cohorts. Due-process notice obligations are met because the basis of the recommendation is reconstructable from the lineage. State laws — the Colorado AI Act, California GenAI accountability, New York's requirements for state agencies — each receive the same structural evidence base. The Council of Europe Framework Convention's commitments to legality, equality, transparency, accountability, privacy, and reliability are each instantiated as observable properties of the deployment. Inspector general audits, GAO reviews, congressional inquiries, and judicial review all consume the same lineage substrate.

6. Adoption Pathway

Government adoption proceeds through systems integrators and authorized cloud platforms. The integrity-and-coherence primitive is composed underneath an existing agency policy or citizen-services agent — built on Palantir Foundry, Salesforce Public Sector, Microsoft Azure Government, Google Public Sector, AWS GovCloud, or a bespoke stack — so that the existing user-facing agent runs over governed integrity domains rather than over raw model state. The integration vector is well-defined: the agent emits candidate outputs to an integrity gate that checks normative consistency against peer agents in the same jurisdictional framework, relational bounds against the agency's equity configuration, and operational alignment against the current authority corpus, then either passes the output, routes it for reconciliation, or returns it for revision. The lineage is written to the agency's system of record.
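The gate's pass/reconcile/revise flow, in the order the prose gives it; the mapping of each failed check to an outcome (normative conflict to reconciliation, relational or operational failure to revision) is an assumption, as are the three collaborator interfaces, which are assumed to expose the checks sketched in section 2.

```python
from enum import Enum


class GateResult(Enum):
    PASS = "pass"            # publish to the citizen, write lineage
    RECONCILE = "reconcile"  # route to inter-agency reconciliation
    REVISE = "revise"        # return to the agent for revision


def integrity_gate(output, peers, equity, authority) -> GateResult:
    """Check a candidate output against the three integrity domains before
    it is published. `peers`, `equity`, and `authority` implement the
    NormativeCoherence, RelationalEquity, and AuthoritativeAlignment
    interfaces sketched earlier."""
    if not peers.consistent_with_peers(output):
        return GateResult.RECONCILE
    if not equity.within_cohort_bounds(output):
        return GateResult.REVISE
    if not authority.aligned_with_authority(output):
        return GateResult.REVISE
    return GateResult.PASS
```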

Vendor partners — the systems integrators and platform providers — embed the primitive as a substrate license. The cloud authorization perimeter is FedRAMP High or Moderate as the agency requires, with StateRAMP equivalents at the state level; the substrate inherits the platform's authorization and contributes structural controls that the platform alone cannot supply. Attestation is the closing piece: the substrate produces a conformance attestation naming the integrity bounds in force, the deviation events, the reconciliations performed, and the per-cohort relational statistics for a reporting period. This attestation is consumable by the Chief AI Officer's M-24-10 reporting, by the agency's inspector general, by GAO, by the EU notified body where the agency operates EU-facing services, by the Council of Europe reporting framework, and by the citizen exercising due-process rights. Cross-agency reconciliation flows through inter-agency councils that already exist; the primitive supplies the structural surface those councils have not had. For government deployers, the practical posture is that policy agents move from being a procedurally justified liability to being a structurally governed instrument of public administration, and the cost of compliance becomes the cost of integration rather than the cost of an indefinite litigation and oversight tail.
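Structurally, the attestation is a serialization of the period's lineage aggregates. A minimal sketch, with an entirely hypothetical schema:

```python
import json
from dataclasses import asdict, dataclass
from datetime import date


@dataclass
class ConformanceAttestation:
    """Reporting-period attestation (schema and field names hypothetical)."""
    period_start: date
    period_end: date
    bounds_in_force: dict[str, float]
    deviation_events: int
    reconciliations_performed: int
    per_cohort_stats: dict[str, dict[str, float]]  # cohort -> statistic -> value

    def to_report(self) -> str:
        doc = asdict(self)
        doc["period_start"] = self.period_start.isoformat()
        doc["period_end"] = self.period_end.isoformat()
        return json.dumps(doc, indent=2)
```

The same serialized record serves every consumer named above; what differs is the reviewer reading it, not the evidence produced.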

Invented by Nick Clark. Founding Investors: Anonymous, Devin Wilkie.