Five primitives govern autonomous execution: cryptographic policy enforcement, capability-bounded action, inference-time admissibility, curriculum-gated unlocking, and confidence-governed suspension. Non-execution is a first-class outcome.
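As a minimal sketch of how those five primitives and the non-execution outcome might be named in code (all identifiers are illustrative, not taken from the article):

```python
from enum import Enum, auto

class GovernancePrimitive(Enum):
    """The five primitives listed above; names are illustrative only."""
    CRYPTOGRAPHIC_POLICY_ENFORCEMENT = auto()
    CAPABILITY_BOUNDED_ACTION = auto()
    INFERENCE_TIME_ADMISSIBILITY = auto()
    CURRICULUM_GATED_UNLOCKING = auto()
    CONFIDENCE_GOVERNED_SUSPENSION = auto()
```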
Ethical behavior in autonomous systems cannot be enforced reliably through intent, alignment, or supervision alone. This article presents ethical enforcement as infrastructure — execution and mutation are cryptographically gated by externally governed policy agents. Ethics becomes a precondition of computation rather than a retrospective judgment.
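A minimal sketch of what "cryptographically gated" could mean in practice, assuming a policy agent that signs each approved action and an executor that refuses anything unsigned. The key name, action fields, and use of HMAC are assumptions for brevity; a real deployment would presumably use asymmetric signatures under externally governed keys.

```python
import hmac, hashlib, json

# Hypothetical: this key is held by the external policy agent, not the executor.
# HMAC keeps the sketch short; an asymmetric scheme would prevent the executor
# from minting its own approvals.
POLICY_KEY = b"held-by-the-external-policy-agent"

def policy_agent_approve(action: dict) -> bytes:
    """Issued only by the policy agent: a signature over the exact action."""
    payload = json.dumps(action, sort_keys=True).encode()
    return hmac.new(POLICY_KEY, payload, hashlib.sha256).digest()

def execute(action: dict, approval: bytes) -> str:
    """Ethics as a precondition: verify first, compute second."""
    payload = json.dumps(action, sort_keys=True).encode()
    expected = hmac.new(POLICY_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(approval, expected):
        return "non-execution"           # unapproved actions never run
    return f"executed {action['op']}"    # only gated actions proceed

action = {"op": "mutate_record", "target": "record/42"}
print(execute(action, policy_agent_approve(action)))  # executed mutate_record
print(execute(action, b"\x00" * 32))                  # non-execution
```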
Each candidate inference output is treated as a proposal to a governed object before any commitment occurs. The substrate governs whether a data packet, a user identity, or an AI inference may proceed — without knowing which it is. That is the point.
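One way to picture a payload-agnostic governed object, as a hedged sketch: the object sees only an opaque proposal and externally supplied policy checks, and commitment happens only if every check admits it. The `Proposal`, `GovernedObject`, and `submit` names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Proposal:
    payload: bytes   # could be a data packet, an identity claim, or model output
    metadata: dict   # provenance the policy checks are allowed to inspect

@dataclass
class GovernedObject:
    checks: List[Callable[[Proposal], bool]]   # supplied by governance, not the proposer
    committed: list = field(default_factory=list)

    def submit(self, proposal: Proposal) -> bool:
        """Commit only if every check admits the proposal; otherwise it never becomes state."""
        if all(check(proposal) for check in self.checks):
            self.committed.append(proposal)
            return True
        return False

# The same object governs unrelated payload kinds identically.
obj = GovernedObject(checks=[lambda p: p.metadata.get("signed") is True])
obj.submit(Proposal(b"\x01\x02", {"signed": True}))             # admitted
obj.submit(Proposal(b"inference output", {"signed": False}))    # refused, no commit
```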
Most systems assume execution is possible and only discover its limits at runtime. This article introduces a capability-native execution model in which agents determine whether an executable form of an objective can exist before execution begins. Non-execution and deferral become first-class outcomes rather than failures.
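A sketch of an admissibility check run before any execution starts, under the assumption that an objective declares the capabilities it needs. Non-execution and deferral are returned as ordinary outcomes rather than raised as errors; the `Objective` fields and thresholds are illustrative.

```python
from enum import Enum, auto
from dataclasses import dataclass

class Outcome(Enum):
    EXECUTE = auto()
    DEFER = auto()          # conditions may change; re-evaluate later
    NON_EXECUTION = auto()  # no executable form of the objective exists

@dataclass
class Objective:
    required_capabilities: set
    deadline_passed: bool = False

def admissibility(objective: Objective, granted: set) -> Outcome:
    """Decide, before execution begins, whether an executable form can exist."""
    if objective.deadline_passed:
        return Outcome.NON_EXECUTION
    if objective.required_capabilities - granted:
        # Missing capabilities might still be unlocked later, so defer rather than fail.
        return Outcome.DEFER
    return Outcome.EXECUTE

print(admissibility(Objective({"read_db"}), {"read_db"}))                     # Outcome.EXECUTE
print(admissibility(Objective({"write_db"}), {"read_db"}))                    # Outcome.DEFER
print(admissibility(Objective({"write_db"}, deadline_passed=True), set()))    # Outcome.NON_EXECUTION
```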
Execution is a revocable permission, continuously re-evaluated from the agent's state, the task's demands, and the world's constraints. When confidence drops, action is structurally suspended and the agent shifts into non-executing cognition — forecasting, planning, or inquiry — until conditions justify resumption.
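The suspend-and-resume behavior could look roughly like the control loop below, assuming a scalar confidence signal and hysteresis thresholds (both values are invented for the sketch). The point is only that the acting branch is structurally unreachable while confidence sits below the floor.

```python
from dataclasses import dataclass

RESUME_AT, SUSPEND_AT = 0.75, 0.60   # hypothetical thresholds

@dataclass
class Agent:
    confidence: float
    executing: bool = True

    def step(self, observation_confidence: float) -> str:
        self.confidence = observation_confidence
        if self.executing and self.confidence < SUSPEND_AT:
            self.executing = False           # permission revoked, not an error
        elif not self.executing and self.confidence >= RESUME_AT:
            self.executing = True            # conditions justify resumption
        if self.executing:
            return "act"
        return "non-executing cognition"     # forecast, plan, or inquire instead

agent = Agent(confidence=0.9)
for c in (0.9, 0.55, 0.65, 0.8):
    print(c, agent.step(c))
# 0.9 act | 0.55 non-executing cognition | 0.65 non-executing cognition | 0.8 act
```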
Capabilities are earned, not configured: unlocking progresses through validated performance states, with the LLM as proposer and the semantic agent as authority. Skill certification is tamper-resistant and applies across AI, robotics, and clinical systems.
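A hedged sketch of the proposer/authority split: the LLM can only ask for a capability, while the semantic agent consults validated performance records and is the sole writer of the capability set. The class names, skill names, and the 0.9 threshold are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticAuthority:
    validated_scores: dict                     # skill -> score from certified evaluations
    unlocked: set = field(default_factory=set)
    threshold: float = 0.9                     # illustrative curriculum gate

    def review(self, proposed_skill: str) -> bool:
        """Unlock only when validated performance clears the gate."""
        if self.validated_scores.get(proposed_skill, 0.0) >= self.threshold:
            self.unlocked.add(proposed_skill)
            return True
        return False

def llm_propose() -> list:
    # The proposer can request anything; it configures nothing.
    return ["suturing_sim_level_2", "autonomous_dosing"]

authority = SemanticAuthority(validated_scores={"suturing_sim_level_2": 0.94})
for skill in llm_propose():
    print(skill, "unlocked" if authority.review(skill) else "still locked")
# suturing_sim_level_2 unlocked | autonomous_dosing still locked
```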