Capability-Permission Distinction

by Nick Clark | Published March 27, 2026

Capability is what the agent can do; permission is what the agent is allowed to do. The two are independent. Capability lives in the substrate — compute, sensors, actuators, network reach, energy budget, demonstrated skill — and answers a physical question. Permission lives in the policy — authorisation, governance, contractual constraint, regulatory class — and answers a normative one. Conflating them is one of the most common structural failures in autonomous-agent design. This architecture treats both as first-class, independently evaluated dimensions: an action proceeds only when capability and permission concur, and every evaluation is audited.


Mechanism

Each candidate action passes through two structurally separate gates before reaching execution. The capability gate consults the agent's capability registry — a canonical store of substrate-resident facts about what the agent can physically perform — and produces a per-dimension feasibility verdict: compute sufficient, memory sufficient, sensor coverage adequate, actuator reach acceptable, latency tolerance met, demonstrated-skill tier high enough. The permission gate consults the policy reference — the canonical store of governance-credentialed authorisations — and produces an authorisation verdict: this principal, in this context, with these credentials, is permitted to direct this class of action against this target population at this time. The two gates do not consult each other; they consult independent stores and emit independent verdicts.
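The two-gate structure above can be sketched minimally as follows, assuming dict-backed stores. All names here (`capability_registry`, `policy_reference`, the field names) are illustrative placeholders, not the disclosure's API:

```python
# Two structurally separate gates, each reading only its own canonical store.
capability_registry = {"compute": 8, "memory_gb": 16, "skill_tier": 3}

policy_reference = {
    "adjust_dose": {"principals": {"clinician"}, "contexts": {"inpatient"}},
}

def capability_gate(action, registry):
    # Consults only the capability registry; never reads policy.
    return all(registry.get(dim, 0) >= floor
               for dim, floor in action["requires"].items())

def permission_gate(action, policy):
    # Consults only the policy reference; never reads the registry.
    rule = policy.get(action["class"])
    return (rule is not None
            and action["principal"] in rule["principals"]
            and action["context"] in rule["contexts"])
```

Because the gates share no state, they can be evaluated in either order, or concurrently, without changing the verdicts.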

The execution arbiter accepts an action only when both verdicts are affirmative. A capability-positive but permission-negative action — the agent can do it but is not allowed to — is rejected with a permission-failure observation. A permission-positive but capability-negative action — the agent is allowed to do it but cannot — is rejected with a capability-failure observation. Each rejection is itself a credentialed observation written to lineage, so an auditor can distinguish the two failure classes after the fact rather than guessing whether a non-execution reflected an authorisation problem or a physical one.
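A sketch of that arbiter logic, with the two rejection classes written to a lineage list; the observation labels are assumptions for illustration:

```python
# Execution arbiter: accepts only on conjunction and records a distinct,
# auditable observation for each failure class.
def arbitrate(cap_ok, perm_ok, lineage):
    if cap_ok and perm_ok:
        lineage.append("executed")
        return "execute"
    if not perm_ok:
        lineage.append("permission-failure")  # allowed? no — authorisation problem
    if not cap_ok:
        lineage.append("capability-failure")  # can? no — physical problem
    return "reject"
```

An auditor replaying the lineage can then separate authorisation problems from physical ones without guesswork.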

Both registries are mutable through credentialed channels. The capability registry is updated by substrate observations — a sensor fault narrows the capability surface; a successful skill re-qualification widens it; a battery state change moves the energy envelope. The policy reference is updated by governance revisions. The two update paths are structurally distinct: substrate facts cannot rewrite policy, and policy revisions cannot fabricate substrate capacity. This invariant is what makes the distinction load-bearing rather than cosmetic.
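The structurally distinct update paths can be sketched as two mutators, each accepting only one store and one credential class. The credential strings here stand in for whatever credentialing scheme a deployment actually uses:

```python
# Substrate facts cannot rewrite policy; policy cannot fabricate substrate
# capacity. Each mutator is bound to one store and one credential class.
def apply_substrate_observation(registry, observation, credential):
    if credential != "substrate-observer":
        raise PermissionError("only substrate observations may update the registry")
    registry[observation["dimension"]] = observation["value"]

def apply_policy_revision(policy, revision, credential):
    if credential != "governance":
        raise PermissionError("only governance revisions may update the policy")
    policy[revision["action_class"]] = revision["rule"]
```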

Operating Parameters

Capability dimensions are parameterised per substrate. A given action class declares the resource dimensions it requires — compute floor, memory floor, sensor channels, actuator classes, latency ceiling, demonstrated-skill tier, energy headroom — and the capability gate matches each requirement against the corresponding registry field. Each dimension is evaluated independently, with a per-dimension verdict, so that the resulting capability decision identifies which specific dimensions blocked the action rather than collapsing to a single boolean.
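The per-dimension matching described above might look like this, assuming numeric floors for each dimension; the point is that the verdict names the blocking dimensions rather than collapsing to one boolean:

```python
# Per-dimension capability verdict: one boolean per declared requirement.
def capability_verdict(requirements, registry):
    return {dim: registry.get(dim, 0) >= floor
            for dim, floor in requirements.items()}

def blocking_dimensions(verdict):
    # The dimensions that specifically blocked the action.
    return [dim for dim, ok in verdict.items() if not ok]
```

For example, an action requiring `{"compute": 4, "memory_gb": 32}` against a registry holding `{"compute": 8, "memory_gb": 16}` is blocked by `memory_gb` alone.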

Permission dimensions are parameterised per policy. An action class declares the principal classes that may direct it, the contexts in which authorisation applies, the credential strengths required, the time bounds during which the authorisation is valid, and the populations the action may target. The permission gate evaluates each dimension independently and returns a per-dimension authorisation verdict so that, again, the failure mode is legible: the principal class was wrong, the context was outside scope, the credential was too weak, the time window had expired, or the target was outside the authorised population.
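The permission side admits the same per-dimension shape. Field names and the numeric credential-strength scale below are assumptions, not the disclosure's schema:

```python
# Per-dimension permission verdict: each of the five declared dimensions
# is evaluated independently so the failure mode stays legible.
def permission_verdict(request, rule, now):
    return {
        "principal": request["principal_class"] in rule["principal_classes"],
        "context": request["context"] in rule["contexts"],
        "credential": request["credential_strength"] >= rule["min_credential"],
        "time": rule["valid_from"] <= now <= rule["valid_until"],
        "target": request["target"] in rule["populations"],
    }
```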

Both gates emit graded verdicts in addition to boolean ones. A capability verdict may be affirmative-with-margin, affirmative-marginal, or negative; a permission verdict may be affirmative-broad, affirmative-narrow, or negative. The execution arbiter consults the graded verdicts to decide whether to proceed unconditionally, proceed under enhanced monitoring, or refuse. Marginal capability paired with broad permission may proceed under monitoring; narrow permission paired with strong capability may proceed but with restricted action surface. The graded space is parameterised in the policy reference and is reproducible from the ledger.
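One possible decision surface over those graded verdicts, as a sketch; the disclosure fixes only the three-valued shape of each verdict, so the specific outcome names and precedence here are assumptions:

```python
# Graded arbiter: negative on either side refuses; marginal capability
# triggers enhanced monitoring; narrow permission restricts the surface.
def arbitrate_graded(cap, perm):
    if cap == "negative" or perm == "negative":
        return "refuse"
    if cap == "affirmative-marginal":
        return "proceed-monitored"
    if perm == "affirmative-narrow":
        return "proceed-restricted"
    return "proceed"
```

In a real deployment this mapping would live in the policy reference, so the graded space is reproducible from the ledger as the text requires.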

Alternative Embodiments

A first embodiment evaluates the gates strictly in series — capability first, then permission — short-circuiting on the first negative verdict to save evaluation cost. A second evaluates them in parallel and combines the verdicts at a downstream arbiter; this is preferred when the two evaluations have comparable cost and parallel evaluation reduces decision latency. A third interleaves them through a staged evaluator that asks coarse capability and coarse permission questions first, refining only those dimensions that survive the coarse pass; this is preferred when both registries are large and most candidate actions can be cheaply rejected on coarse grounds.
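The first two embodiments can be put side by side. The gate callables are assumed to be pure predicates; everything else is illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def serial_evaluate(action, cap_gate, perm_gate):
    # First embodiment: short-circuits, skipping the permission check
    # entirely when capability already fails.
    if not cap_gate(action):
        return "capability-failure"
    if not perm_gate(action):
        return "permission-failure"
    return "execute"

def parallel_evaluate(action, cap_gate, perm_gate):
    # Second embodiment: both gates run concurrently; a downstream
    # combination step produces the verdict.
    with ThreadPoolExecutor(max_workers=2) as pool:
        cap = pool.submit(cap_gate, action)
        perm = pool.submit(perm_gate, action)
        cap_ok, perm_ok = cap.result(), perm.result()
    if cap_ok and perm_ok:
        return "execute"
    return "permission-failure" if cap_ok else "capability-failure"
```

The trade-off is exactly as the text states: serial evaluation saves cost when rejections are common, parallel evaluation saves latency when both checks are comparably expensive.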

A fourth embodiment introduces capability projection. Rather than matching against the current registry only, the gate projects the substrate state forward over the action's expected duration — battery decay, thermal accumulation, anticipated sensor occlusion — and evaluates feasibility against the projected envelope rather than the instantaneous one. A fifth embodiment introduces permission projection symmetrically: the gate evaluates whether the authorisation will remain valid through the action's expected duration, accounting for credential expiry, scheduled policy revisions, or context transitions. The combined projecting variant is appropriate for long-horizon actions where instantaneous evaluation would admit an action that becomes infeasible or unauthorised mid-execution.
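A minimal sketch of both projection variants. The linear energy-drain model and all field names are simplifying assumptions; a real substrate model would be richer:

```python
def projected_capability_ok(registry, action):
    # Fourth embodiment: evaluate feasibility against the projected
    # envelope at the end of the action, not the instantaneous snapshot.
    end_energy = (registry["energy_wh"]
                  - registry["drain_wh_per_s"] * action["duration_s"])
    return end_energy >= registry["energy_floor_wh"]

def projected_permission_ok(rule, start, action):
    # Fifth embodiment: the authorisation must remain valid through the
    # action's whole expected duration, not merely at its start.
    return (rule["valid_from"] <= start
            and start + action["duration_s"] <= rule["valid_until"])
```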

A sixth embodiment supports negotiated remediation. When an action is rejected on capability grounds, the agent may consult a substrate-broker to acquire additional resources — schedule compute, reserve sensor channels, request a charged battery — and re-evaluate. When rejected on permission grounds, the agent may consult a credential-broker to request scoped authorisation. Both remediation paths are themselves credentialed and audited, and neither bypasses the gate; remediation produces new registry or policy state that the next evaluation reads.
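The remediation loop can be sketched as follows; the broker here is a stub standing in for a credentialed substrate- or credential-broker, and nothing in the loop bypasses the gate — remediation only produces new store state that the next evaluation reads:

```python
def remediate_and_retry(action, registry, gate, broker, max_rounds=2):
    # Sixth embodiment: rejected actions may consult a broker, then
    # re-evaluate against the updated store through the same gate.
    for _ in range(max_rounds):
        if gate(action, registry):
            return True
        grant = broker(action, registry)   # e.g. schedule compute, reserve a channel
        if grant is None:                  # broker cannot help
            return False
        registry.update(grant)             # new registry state for the next read
    return gate(action, registry)
```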

Composition

The capability-permission distinction composes with the wider cognition architecture in three structurally important ways. First, with skill gating: the demonstrated-skill tier is one dimension of capability, so a regression-driven skill downgrade narrows the capability surface immediately and the gate begins refusing actions that the lower tier no longer supports. Second, with operator-intent admissibility: an admitted operator intent passes through both gates before execution, and admissibility does not substitute for either capability or permission. A strongly credentialed operator cannot direct an action the substrate cannot perform, and a substrate that can perform an action does not gain authorisation from operator confidence alone. Third, with the LLM proposal pipeline: a proposal that survives validation and arbitration still passes through the capability and permission gates before mutation commits, so generative outputs cannot bypass either dimension.

The shared substrate across these compositions is the canonical-store discipline. Capability lives in the capability registry; permission lives in the policy reference; intent admissibility lives in the admissibility ledger. Each gate reads its own canonical store and writes its own credentialed observations. The architecture's coherence is a property of the stores, not of pairwise integrations between modules.

Prior-Art Distinction

Conventional autonomous-agent architectures conflate capability and permission. Authorisation systems check whether a principal may direct an action without consulting the substrate; capability planners check whether the substrate can perform an action without consulting authorisation; in many systems a single check stands in for both, treating one as a proxy for the other. This conflation produces predictable failure modes: a robot authorised to perform a task it cannot physically execute, an LLM agent permitted to call a tool whose output it cannot validate, a vehicle directed into a manoeuvre its sensor coverage cannot support. The disclosed mechanism prevents this entire failure class by structural separation — two registries, two gates, two verdict streams, two audit trails — rather than by exhortation that designers remember the distinction.

Where prior systems do separate capability and permission, the separation is typically informal — comments in code, naming conventions, documentation — rather than enforced by independent canonical stores with credentialed update paths. The disclosed mechanism makes the separation enforceable because the gates cannot read each other's stores and the update paths cannot cross. Substrate facts cannot manufacture authorisation, and policy revisions cannot manufacture substrate capacity.

Worked Example

Consider a clinical decision-support agent asked to issue a medication adjustment. The capability gate evaluates substrate dimensions: the agent's demonstrated-skill tier on this medication class is sufficient, its model-grounding sources are available, the latency envelope to the prescribing system is within bounds. Capability verdict: affirmative-with-margin. The permission gate evaluates policy dimensions: the requesting principal is a credentialed clinician, the patient context is within the authorised scope, the credential strength meets the threshold for this medication class, the time window is current. Permission verdict: affirmative-narrow, because the credential is valid only for the specific medication class and not for adjacent ones. The execution arbiter accepts on conjunction, but constrains the action surface to the authorised medication class. The adjustment proceeds; an attempt to extend it to a related medication is refused at the next gate evaluation.
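The example above can be traced through a small graded arbiter: capability affirmative-with-margin, permission affirmative-narrow and valid for one medication class only. The class names and restriction mechanics are illustrative, not clinical guidance:

```python
def arbitrate(cap, perm, requested_class, authorised_classes):
    # Conjunction with a restricted action surface under narrow permission.
    if cap == "negative" or perm == "negative":
        return "refuse"
    if perm == "affirmative-narrow" and requested_class not in authorised_classes:
        return "refuse"                  # outside the authorised medication class
    if cap == "affirmative-marginal":
        return "proceed-monitored"       # e.g. after the grounding-source fault
    return "proceed"
```

The first adjustment proceeds; the attempt on an adjacent medication class is refused; and after the substrate fault narrows capability to affirmative-marginal, the same in-class action is admitted only under monitoring.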

Contrast this with a case in which the substrate develops a fault — the grounding source becomes intermittently unavailable. The capability registry is updated by a credentialed substrate observation, narrowing the capability surface. The next adjustment attempt receives an affirmative-marginal capability verdict; permission remains affirmative-narrow; the arbiter admits the action under enhanced monitoring rather than unconditionally. The clinician sees the graded outcome, can elect to defer, and the lineage records both the substrate fault and the graded admission. No part of this flow depended on the clinician's authority to mask the substrate condition, and no part depended on the substrate's capability to substitute for credentialed authority.

Failure Modes Addressed

Three classes of failure motivate the structural separation. The first is silent over-reach: the agent is authorised, attempts the action, and fails physically — a robotic manipulator authorised to grasp an object whose mass exceeds its actuator envelope, an LLM agent permitted to retrieve from a corpus larger than its context window, a vehicle directed into a manoeuvre its braking distance cannot support. The capability gate prevents this class entirely. The second is silent under-reach: the agent is physically capable but operates conservatively because it lacks structural confidence that authorisation is in place — a system that refuses safe actions for fear they may be unauthorised, producing brittle behaviour at the policy edge. The permission gate, by emitting affirmative verdicts whose graded outcomes are legible, prevents this class.

The third class is credential laundering. Where capability and permission are conflated, a strong credential can effectively manufacture capability — an authorised principal directs an action, the system attempts it, the substrate fails, and the failure is misattributed to authorisation. Conversely, observed capability can effectively manufacture permission — the agent demonstrates it can perform an action, and downstream systems treat the demonstration as warrant for future invocations. The disclosed mechanism prevents both directions of laundering by enforcing canonical-store separation: the substrate cannot rewrite policy, and policy cannot fabricate substrate facts. Each gate's verdict stays in its own lane, and the conjunction at the arbiter is the only place the two converge.

Disclosure Scope

The disclosure covers any embodiment in which capability and permission are evaluated through structurally independent gates against independent canonical stores, with per-dimension verdicts, graded outcomes, credentialed observations recorded for both affirmative and negative outcomes, and an execution arbiter that admits actions only on conjunction. Specific evaluation orderings, projection variants, remediation paths, and graded-verdict shapes are illustrative embodiments. The mechanism applies across embodied robotics, conversational agents, autonomous vehicles, therapeutic systems, and enterprise automation, parameterised per domain through credentialed policy revisions, and extends to any combination of the variants described above so long as the structural separation between substrate facts and policy revisions is preserved.

Invented by Nick Clark. Founding Investors: Anonymous, Devin Wilkie.