Certification Token Generation

by Nick Clark | Published March 27, 2026

Skill activation in this disclosure is gated by cryptographic certification tokens. A skill — meaning any model-driven capability whose output enters governed agent state — does not run, and its outputs are not admitted, unless a valid token is presented. Tokens are scoped to the skill they certify, expirable on a declared lifetime, and revocable at any point in their validity window. The tier at which a token issues governs the scope of capability it confers, so that an LLM with a low-tier token cannot exercise a high-tier skill, and so that revocation cascades cleanly to all skills downstream of a withdrawn certification.


Mechanism

The mechanism interposes a cryptographic certification token between a skill request and skill execution. When a model proposes to invoke a skill — whether to mutate agent state, to issue an external action, or to admit a generated artifact into the canonical fields — the proposal carries a token. The token is verified against the policy reference before any execution occurs. If verification fails, the proposal is rejected at the boundary; the skill never runs; no partial state change is committed.

A certification token is a signed record binding four components: the certified subject (the model or capability instance), the certified scope (the set of skills the token authorizes), the validity window (issuance time and expiry), and the revocation handle (the identifier under which the token can be invalidated before its natural expiry). The token is signed by an issuance authority whose public key is a declared element of the policy reference, so that verification is deterministic and offline-checkable within the validity window.
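The four-component record above can be sketched as follows. This is an illustrative sketch only: the names are hypothetical, and HMAC-SHA256 stands in for the signature so the example is self-contained; a production issuer would use an asymmetric scheme (e.g. Ed25519) so that verifiers hold only the public key declared in the policy reference.

```python
import hashlib
import hmac
import json
from dataclasses import dataclass, asdict

# Illustrative symmetric key; a real issuance authority would sign with
# a private key and publish the public key in the policy reference.
ISSUER_KEY = b"issuance-authority-demo-key"

@dataclass(frozen=True)
class CertToken:
    subject: str             # certified model or capability instance
    scope: tuple             # set of skill identifiers the token authorizes
    issued_at: int           # validity window start (epoch seconds)
    expires_at: int          # validity window end (epoch seconds)
    revocation_handle: str   # identifier the revocation list keys against

def sign(token: CertToken) -> str:
    # Canonical serialization so verification is deterministic.
    payload = json.dumps(asdict(token), sort_keys=True).encode()
    return hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()

def verify(token: CertToken, signature: str) -> bool:
    # Offline-checkable: needs only the token, signature, and issuer key.
    return hmac.compare_digest(sign(token), signature)

tok = CertToken("model-a", ("summarize",), 1000, 2000, "rh-001")
sig = sign(tok)
assert verify(tok, sig)
assert not verify(tok, "0" * 64)  # tampered or forged signature fails
```

The canonical (sorted-key) serialization is what makes verification deterministic: the same token always produces the same signed payload.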

Issuance is the act of demonstrating capability under controlled conditions and receiving a signed token in return. The demonstration is a structured evaluation — a battery of probes, a graded interaction, a formal assessment — defined per skill in the policy reference. A model that passes the demonstration at a given tier receives a token whose scope is the set of skills authorized at that tier. A model that fails receives no token, and any prior token at that scope is not extended.

Expiry is the lapse of the validity window. After expiry the token no longer verifies; the gated skill no longer runs against that token; a fresh demonstration is required to issue a successor. Expiry is non-negotiable at runtime — there is no privileged path that extends a token past its declared expiry without a re-issuance event recorded in the audit trail.

Revocation is the explicit invalidation of a token before its natural expiry. Revocation is published to a revocation list signed by the issuance authority. Skill-gating verifies tokens against the current revocation list as part of admission, so that a revoked token cannot be replayed even within its validity window. The revocation handle in the token is what the revocation list keys against; revocation is therefore narrowly targetable to a specific token, a specific subject, or a specific scope without disturbing unrelated tokens.
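The admission check combining the validity window and the revocation list can be sketched as below, assuming the signature has already been verified. Field names are illustrative; the point is the ordering of checks and that revocation overrides an otherwise-valid window.

```python
def admit(token: dict, revoked: set, now: int) -> bool:
    """Admit a token only if 'now' is inside its validity window and its
    revocation handle is absent from the current signed revocation list."""
    if not (token["issued_at"] <= now < token["expires_at"]):
        return False  # expired or not yet valid: no privileged extension path
    if token["revocation_handle"] in revoked:
        return False  # revoked tokens fail even inside the window
    return True

tok = {"issued_at": 100, "expires_at": 200, "revocation_handle": "rh-001"}
assert admit(tok, revoked=set(), now=150)
assert not admit(tok, revoked=set(), now=250)        # expiry is non-negotiable
assert not admit(tok, revoked={"rh-001"}, now=150)   # revocation beats the window
```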

Tier governs scope. The issuance authority defines a tier ladder for the skills it certifies, with each tier corresponding to a scope of capability. Lower tiers authorize narrow, well-bounded skills — text reformatting, summarization, retrieval composition. Middle tiers authorize broader skills involving structured generation and bounded action. Upper tiers authorize skills with larger blast radius — autonomous tool use, persistent state mutation, external action issuance. A token issued at tier K authorizes only the skills declared at tier K and below; it cannot be presented to authorize a skill at tier K+1.
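The tier-to-scope rule reduces to a simple comparison at admission time. The skill names and tier assignments below are assumptions chosen to mirror the ladder described above, not a declared policy.

```python
# Hypothetical tier ladder mirroring the text: narrow skills at tier 1,
# bounded generation at tier 2, large-blast-radius skills at tier 3.
TIER_OF_SKILL = {
    "reformat_text": 1,
    "summarize": 1,
    "structured_generation": 2,
    "autonomous_tool_use": 3,
    "external_action": 3,
}

def authorizes(token_tier: int, skill: str) -> bool:
    # A token issued at tier K authorizes skills at tier K and below.
    return TIER_OF_SKILL[skill] <= token_tier

assert authorizes(2, "summarize")
assert authorizes(2, "structured_generation")
assert not authorizes(2, "autonomous_tool_use")  # tier K cannot reach K+1
```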

Operating Parameters

Token validity windows are tier-dependent. Lower-tier tokens carry longer windows because their authorized capability is bounded in blast radius and the cost of refresh is high relative to risk. Upper-tier tokens carry shorter windows because the authorized capability is larger and the marginal value of frequent re-demonstration is high. The specific windows are policy-bound and audited; they are not chosen by the holder.
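As a sketch of the tier-dependent windows, a policy might bind validity periods like the following. The specific durations are invented for illustration; per the text, real windows are policy-bound and audited, not chosen by the holder.

```python
# Hypothetical policy binding: longer windows for lower tiers, shorter
# for upper tiers, per the blast-radius rationale above.
VALIDITY_DAYS_BY_TIER = {1: 365, 2: 90, 3: 14}

def window_for(tier: int, issued_at: int) -> tuple:
    """Return (issued_at, expires_at) in epoch seconds for a tier."""
    return issued_at, issued_at + VALIDITY_DAYS_BY_TIER[tier] * 86_400

assert window_for(1, 0) == (0, 31_536_000)   # tier 1: one year
assert window_for(3, 0) == (0, 1_209_600)    # tier 3: two weeks
```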

Issuance demonstrations are structured per skill: each skill in the policy reference declares the probe set, the grading rubric, the pass threshold, and the tier at which a passing demonstration issues. The demonstration is reproducible — a recorded demonstration can be replayed for audit — and the grading is deterministic so that issuance decisions are auditable rather than opaque.
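A deterministic grading step might look like the sketch below: per-probe outcomes are scored against a declared threshold, so replaying a recorded demonstration always reproduces the same issuance decision. The rubric shape is an assumption; real rubrics would be richer than pass/fail per probe.

```python
def grade(demonstration: list, pass_threshold: float) -> bool:
    """Deterministic grading: same recorded probe outcomes, same decision.
    'demonstration' is a list of per-probe pass/fail booleans."""
    score = sum(demonstration) / len(demonstration)
    return score >= pass_threshold

# A demonstration passing 3 of 4 probes meets a 0.75 threshold exactly.
assert grade([True, True, True, False], pass_threshold=0.75)
assert not grade([True, False, False, False], pass_threshold=0.75)
```

Because the grading function is pure, an auditor replaying the recorded demonstration cannot reach a different issuance decision than the one on record.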

Revocation lists are append-only signed records published by the issuance authority. Skill-gating consumers fetch the list on a declared cadence, verify the signature, and check incoming token presentations against the current list. A token whose revocation handle appears on the list fails verification regardless of the validity window. The cadence of list refresh is bounded so that revocation propagates within a declared maximum delay.

Scope is declared as a structured set of skill identifiers. A token's scope field enumerates the skills it authorizes; a presented token is matched against the requested skill at admission. Wildcard or broad scope is constrained by policy — high-tier scopes can be enumerated but cannot be expressed as catch-all matches that would silently authorize newly-defined skills not contemplated at issuance.
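Scope matching and the wildcard constraint can be sketched as two checks: exact enumeration at admission, and rejection of catch-all patterns at issuance. Both function names are hypothetical.

```python
def scope_matches(scope: frozenset, requested_skill: str) -> bool:
    # Admission: the requested skill must be enumerated exactly.
    return requested_skill in scope

def validate_scope_at_issuance(scope: set) -> None:
    # Policy constraint: catch-all scopes would silently authorize
    # skills defined after issuance, so they are rejected up front.
    if any(s == "*" or s.endswith("*") for s in scope):
        raise ValueError("catch-all scopes are not permitted by policy")

scope = frozenset({"summarize", "retrieval_composition"})
assert scope_matches(scope, "summarize")
assert not scope_matches(scope, "external_action")
validate_scope_at_issuance({"summarize"})  # enumerated scope is fine
```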

Audit records are produced at each lifecycle event: demonstration outcome, token issuance, token presentation and verification result, expiry, and revocation. The audit record carries enough provenance to reconstruct, after the fact, why a particular skill ran or failed to run on a particular request. This is what makes the mechanism auditable in regulated deployments.

Alternative Embodiments

In a single-LLM agent embodiment, a single certification token covers the LLM's authorized scope. The agent presents the token at every skill invocation; the gating layer verifies and admits or rejects. Revocation of the token immediately disables all gated skills, providing an emergency stop for the LLM without requiring code changes or process restarts.

In a multi-LLM ensemble embodiment, each model carries its own token, and skills are gated per-model. A high-tier model can invoke high-tier skills while a lower-tier model in the same ensemble is restricted to lower-tier skills. The gating layer enforces the per-model scope so that ensemble routing cannot be used to launder a request from a low-tier model into a high-tier skill.
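Per-model gating in an ensemble can be sketched as below: admission keys on the token of the model actually making the request, so routing through a higher-tier peer cannot launder a low-tier model into a high-tier skill. Model names and tiers are illustrative.

```python
# Hypothetical ensemble: each model carries its own token with its own tier.
TOKEN_TIER = {"model-high": 3, "model-low": 1}

def ensemble_admit(requesting_model: str, skill_tier: int) -> bool:
    # Gate on the requester's own token, never on a peer's.
    return TOKEN_TIER[requesting_model] >= skill_tier

assert ensemble_admit("model-high", 3)
assert ensemble_admit("model-low", 1)
assert not ensemble_admit("model-low", 3)  # routing cannot launder tier
```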

In a regulated-domain embodiment — medical decision support, legal document generation, financial advisory — the issuance authority is a domain regulator or its delegate. Demonstrations correspond to credentialing evaluations defined by the regulator. Tokens carry domain-specific scope, and revocation is the regulator's mechanism for withdrawing capability from a model whose post-deployment performance has fallen below standard.

In a marketplace embodiment, third-party model providers obtain certification tokens by passing publisher-defined demonstrations, and skill-gating in the consuming application admits providers based on token presentation. This allows an application to consume models from multiple providers under a uniform admission policy without bespoke integration per provider.

In a federation embodiment, multiple issuance authorities cross-recognize each other's tokens under federation policies. A token issued by authority A is verifiable by a consumer in domain B if the federation policy admits authority A's signing key for the requested scope. Revocation cascades through federation: a token revoked by its issuing authority is rejected by all federation members.

Composition with Other Mechanisms

Certification token gating composes with the broader skill-gating architecture in which model proposals pass through structured admission before affecting agent state. The token is the credential carried with the proposal; the rest of the gating pipeline — mutation evaluation, validation, arbitration — runs only on proposals whose token verifies. Token verification is therefore the first gate in a multi-gate admission, and its failure short-circuits all downstream gates.

The mechanism composes with policy-governed action selection. The tier on a presented token flows downstream into action selection so that high-stakes actions require not merely successful skill execution but demonstrated tier authorization. This separates capability (can the model produce the output) from authorization (is the model permitted to have its output applied), which lets the same model serve multiple deployment contexts with different authorization regimes.

The mechanism composes with credentialed-observation channels. A token-gated skill emits its outputs as credentialed observations carrying the token's tier and scope as provenance metadata. Downstream consumers admit these observations through the same admission interface they admit any other credentialed observation, and policy can constrain admission on tier independently of content.

The mechanism composes with audit and governance reporting. Lifecycle events on tokens — demonstrations, issuances, presentations, expirations, revocations — flow into the same audit substrate as other governance events, producing a unified record across cognitive, identity, and capability governance. This composition is what makes the architecture certifiable at the system level rather than only at the component level.

Prior-Art Distinction

Prior approaches to constraining LLM capability fall into two categories: prompt-level guardrails and output-level filters. Prompt-level guardrails attempt to constrain capability by instructing the model not to perform certain actions. They fail under adversarial input, fail under model drift, and produce no audit trail of capability authorization. Output-level filters attempt to detect and reject prohibited outputs after generation. They fail under novel framings, produce false positives that degrade utility, and similarly produce no auditable authorization decision.

Prior approaches to model authorization — API keys, OAuth scopes, simple capability flags — bind authorization to the calling application or user but do not bind it to demonstrated model capability. A model that has not been evaluated for a given skill can nevertheless be invoked through an API key that authorizes any caller with the key. There is no structural link between what the model has demonstrated it can do and what the calling environment permits it to do.

Prior approaches to model evaluation — benchmarks, leaderboards, certification programs run as one-time events — produce reputation signals but do not produce runtime-verifiable artifacts that gate execution. A benchmark score is a marketing claim, not a token presented at every skill invocation and checkable against a current revocation list.

The mechanism described here is structurally distinct: certification is a cryptographically signed token bound to a demonstrated capability, scoped to a specific skill set, expirable on a declared lifetime, revocable at any point, and verified at every skill invocation as a hard gate. The four properties together — issuance from demonstration, scoped binding, expiry, and revocation — distinguish the mechanism from any of the prior approaches above, none of which combine all four.

Disclosure Scope

The disclosure covers gating skill activation by cryptographic certification tokens that are issued upon demonstrated capability, scoped to specified skill sets, expirable on declared lifetimes, and revocable through a signed revocation list, with tier governing scope so that low-tier tokens cannot authorize high-tier skills. The disclosure includes the issuance demonstration, the validity window, the revocation list, the tier-to-scope mapping, the audit substrate, and the verification gate at the boundary of skill execution.

Embodiments described include single-LLM agents, multi-LLM ensembles, regulated domains, marketplaces, and federation. The mechanism is part of the broader Cognition Patent and is intended to be claimed in coordination with related skill-gating mechanisms — mutation, validation, arbitration, lineage — disclosed in this and companion filings.

Invented by Nick Clark
Founding Investors: Anonymous, Devin Wilkie