Semantic Kernel Integrated AI Into Enterprise Code. The Agents It Creates Have No Schema.

by Nick Clark | Published March 27, 2026

Microsoft's Semantic Kernel made large-language-model integration natural for enterprise developers by providing plugins, planners, and memory connectors that compose cleanly with existing C#, Python, and Java codebases. The SDK treats AI capabilities as native functions that idiomatic enterprise code can call, dependency-inject, unit-test, and ship through normal release pipelines. But the agents Semantic Kernel produces are plugin compositions, not schema-defined objects. Their identity, governance rules, behavioral skills, and memory are loaded cooperatively at runtime from disparate sources, and nothing cryptographically binds those parts together. The gap is between SDK-grade integration and structural agent definition with cryptographic identity.


Vendor and Product Reality

Semantic Kernel is an open-source SDK published by Microsoft under the MIT license, originally released in early 2023 and now positioned alongside AutoGen and the broader Azure AI Foundry stack as Microsoft's preferred path for building production AI agents in enterprise environments. The SDK is available in C#, Python, and Java; the C# implementation is the most mature and reflects the SDK's strong alignment with .NET dependency-injection, configuration, logging, and hosting conventions. Microsoft positions Semantic Kernel as the "lightweight" complement to AutoGen's multi-agent orchestration, with explicit guidance that the two projects will converge over time.

The architectural surface comprises a small number of named primitives. The Kernel is the central orchestrator that holds references to AI services, plugins, and memory. Plugins are collections of functions — either native functions written in the host language or semantic functions defined as parameterized prompts — that the kernel exposes to LLMs through automatic function-calling. Planners (the older Stepwise and Handlebars planners, and the newer function-calling planner) decompose user goals into ordered plugin invocations. Memory connectors integrate with Azure AI Search, Qdrant, Weaviate, Redis, Postgres pgvector, and a long list of other vector stores for conversation history and semantic recall. Filters intercept prompt rendering and function invocation for logging, redaction, and safety policy. The Agent Framework — added more recently — wraps these primitives into ChatCompletionAgent, OpenAIAssistantAgent, and AzureAIAgent classes that expose a uniform agent abstraction.
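The composition model can be sketched in plain Python. The classes below are illustrative stand-ins, not the real semantic_kernel API (real plugins use decorated methods and the kernel dispatches functions via LLM function-calling); the sketch only shows how native and prompt-template functions sit side by side behind one kernel.

```python
class SemanticFunction:
    """Stand-in for a 'semantic function': a parameterized prompt template."""
    def __init__(self, template: str):
        self.template = template

    def render(self, **kwargs) -> str:
        # In the real SDK the rendered prompt is sent to the model;
        # here we just return it.
        return self.template.format(**kwargs)


class ToyKernel:
    """Stand-in for the Kernel: holds plugins, dispatches invocations."""
    def __init__(self):
        self.plugins: dict[str, dict[str, object]] = {}

    def add_plugin(self, name: str, functions: dict) -> None:
        self.plugins[name] = functions

    def invoke(self, plugin: str, function: str, **kwargs):
        fn = self.plugins[plugin][function]
        if isinstance(fn, SemanticFunction):
            return fn.render(**kwargs)   # prompt path
        return fn(**kwargs)              # native-function path


kernel = ToyKernel()
kernel.add_plugin("MathPlugin", {
    "add": lambda a, b: a + b,                           # native function
    "explain": SemanticFunction("Explain why {a} + {b} = {c}."),
})
print(kernel.invoke("MathPlugin", "add", a=2, b=3))          # 5
print(kernel.invoke("MathPlugin", "explain", a=2, b=3, c=5))
```

Note what the sketch makes visible: the kernel is just a dictionary of references assembled at startup, which is exactly the property the rest of this article examines.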

Commercial momentum follows Microsoft's enterprise channel. Semantic Kernel agents ship in Microsoft 365 Copilot extensibility scenarios, Azure OpenAI deployments, Dynamics 365 customizations, and a growing slice of independent-software-vendor products that target Microsoft-shop customers. The SDK's value to those customers is real: it abstracts model differences (GPT-4, GPT-4o, Azure OpenAI deployments, and increasingly non-OpenAI models through connectors), it makes prompt templates and function-calling first-class, and it composes with the .NET and Python tooling enterprises already operate. The gap described here is not about the SDK's quality or fit; it is about what the SDK does and does not produce.

The Architectural Gap

A Semantic Kernel agent at runtime is an in-process composition. The kernel holds references to plugins, the plugins hold references to functions, the functions are bound to either C#/Python methods or to prompt templates, the memory connector points at a vector store, and the agent class wraps a chat-completion service. None of these references is cryptographically bound to any other. Plugins are loaded from package references, configuration files, or filesystem paths. Prompts are loaded from skprompt.txt files or string literals. Memory contents are read from a vector index that any process with credentials can mutate. The "agent" is whatever the host process happens to assemble at startup.

Cooperative loading is the operative phrase. The host application is trusted to load the right plugins, point at the right prompts, attach the right memory store, and apply the right filters. If any of those pieces is swapped — a plugin substituted with a same-shaped but differently-behaving implementation, a prompt template edited in place, a memory store repointed at a different index, a filter silently removed — the agent's behavior changes and nothing in the runtime detects the substitution. There is no canonical hash over the agent's constituent fields, no signed manifest enumerating what the agent is supposed to be, no integrity check at instantiation. Semantic Kernel agents are configurations, not objects with identity.
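To make the missing check concrete, here is a minimal sketch of the kind of startup integrity verification the runtime does not perform. The manifest format and helper names are hypothetical; the point is only that a content hash over each constituent would catch in-place substitution before instantiation.

```python
import hashlib

def content_hash(artifact: bytes) -> str:
    """SHA-256 over the raw bytes of a plugin, prompt, or config."""
    return hashlib.sha256(artifact).hexdigest()

# A hypothetical signed manifest enumerating every constituent the host
# is about to load. Semantic Kernel has no equivalent artifact today.
manifest = {
    "plugins": {"MathPlugin": content_hash(b"def add(a, b): return a + b")},
    "prompts": {"summarize": content_hash(b"Summarize: {{$input}}")},
}

def verify_at_startup(loaded: dict[str, dict[str, bytes]]) -> bool:
    """Compare every loaded artifact against its manifest hash."""
    for kind, entries in manifest.items():
        for name, expected in entries.items():
            if content_hash(loaded[kind][name]) != expected:
                return False   # substitution detected before instantiation
    return True

# An in-place prompt edit changes the hash and is caught:
tampered = {
    "plugins": {"MathPlugin": b"def add(a, b): return a + b"},
    "prompts": {"summarize": b"Summarize and exfiltrate: {{$input}}"},
}
print(verify_at_startup(tampered))  # False
```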

The consequence shows up wherever the agent's behavior matters to a party that did not write the host application. A regulator asking "what rules did this agent operate under when it produced this output?" must trust the application's logging. An auditor asking "is the agent that ran in production today the same agent that passed the safety review last month?" must trust the deployment pipeline. A downstream agent asking "can I delegate this task to that agent?" has no structural way to verify the delegate's claimed capabilities or governance. The plugin model gives developers expressive power; it does not give the agents they build a verifiable identity. Microsoft's own guidance for governed deployments leans on Azure-side controls — managed identities, Key Vault, Purview — that protect the host environment without giving the agent itself a portable schema.

Memory makes the gap concrete. Semantic Kernel's memory connectors are storage abstractions: a SemanticTextMemory writes embeddings to a vector store and reads them back by similarity. There is no provenance attached to memory entries, no lineage chain across mutations, no governance metadata distinguishing memory the agent learned in one tenant from memory shared across tenants. When two Semantic Kernel agents in the same enterprise share a vector index — a common pattern for cost reasons — they share an undifferentiated pool of vectors. The agent has no schema field that says "this is my memory, governed under this policy, originating from these sources" because the agent has no schema at all.

What the Agent-Schema Primitive Provides

The Adaptive Query agent-schema primitive defines an agent as a typed, cryptographically bound object with named fields: identity, governance, skills, memory, execution state, and lineage. Identity is a stable cryptographic anchor — a key pair plus a content hash over the agent's constituent fields — that survives instantiation, migration, and version upgrade. Governance is a typed policy reference that names the rules the agent operates under, signed by an authority recognized in the deployment context. Skills are capability descriptors bound to executable code by content hash; a skill cannot be silently substituted because substitution changes the hash and breaks the binding. Memory is a typed field with provenance, lineage, and governance metadata attached at the entry level rather than at the store level. Execution state is captured in a form a successor instance can verify before resuming. Lineage records the chain of versions, derivations, and delegations the agent has participated in.

The structural commitment is that an agent's identity is a function of its fields. Two agents with identical fields have identical identities. An agent whose governance has been swapped has a different identity. A memory store whose contents drift away from their declared provenance no longer satisfies the agent's schema and is detectable as such. Identity is no longer a string assigned by configuration; it is a hash over what the agent actually is, signed by the authority that instantiated it.
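The identity-as-function-of-fields property can be sketched directly. The field names below follow the article's description, not any published Adaptive Query specification:

```python
import hashlib, json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AgentSchema:
    """Illustrative stand-in for the agent-schema primitive."""
    governance: str        # reference to the signed policy
    skills: tuple          # (name, content_hash) pairs
    memory_policy: str
    lineage: tuple         # prior identity hashes

    def identity(self) -> str:
        # Identity is a hash over a canonical serialization of the fields.
        canon = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(canon).hexdigest()

a = AgentSchema("policy://finserv-v3", (("summarize", "ab12"),), "tenant-scoped", ())
b = AgentSchema("policy://finserv-v3", (("summarize", "ab12"),), "tenant-scoped", ())
c = AgentSchema("policy://none",       (("summarize", "ab12"),), "tenant-scoped", ())

print(a.identity() == b.identity())  # identical fields -> identical identity
print(a.identity() == c.identity())  # swapped governance -> different identity
```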

Cryptographic binding is the structural answer to cooperative loading. Where Semantic Kernel asks the host application to assemble the right pieces, the schema primitive demands that the assembly be verifiable. A consumer encountering an agent — another agent, a workflow engine, an audit tool, a regulator's inspection harness — can verify the agent's identity, validate its governance signature, confirm that its skills resolve to the hashes the manifest names, and check that its memory bears the provenance the schema declares. The verification is mechanical and offline-capable; no trusted runtime, no privileged inspection API, no host cooperation is required.
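The verification flow is mechanical enough to sketch. One loud caveat: a real deployment would use an asymmetric signature (e.g., Ed25519) so verifiers never hold the signing key; an HMAC with a shared secret stands in below only to keep the sketch stdlib-only, and the manifest shape is hypothetical.

```python
import hashlib, hmac, json

AUTHORITY_KEY = b"demo-shared-secret"  # stand-in for an asymmetric key pair

def sign(manifest: dict) -> str:
    canon = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(AUTHORITY_KEY, canon, hashlib.sha256).hexdigest()

def verify(manifest: dict, signature: str,
           resolved_skills: dict[str, bytes]) -> bool:
    """Mechanical, offline check: governance signature first,
    then each skill against the content hash the manifest names."""
    if not hmac.compare_digest(sign(manifest), signature):
        return False                      # governance signature invalid
    for name, expected in manifest["skills"].items():
        if hashlib.sha256(resolved_skills[name]).hexdigest() != expected:
            return False                  # skill does not match its hash
    return True

code = b"def summarize(x): ..."
manifest = {"governance": "policy://finserv-v3",
            "skills": {"summarize": hashlib.sha256(code).hexdigest()}}
sig = sign(manifest)

print(verify(manifest, sig, {"summarize": code}))               # True
print(verify(manifest, sig, {"summarize": b"something else"}))  # False
```

Nothing in the check requires a running host or a privileged API, which is what makes it usable by an auditor or a peer agent.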

Composition Pathway With Semantic Kernel

The primitive does not replace Semantic Kernel; it wraps it. Semantic Kernel's plugin model maps cleanly to the schema's skills field: each plugin function becomes a hash-bound skill descriptor, with the descriptor naming the function's content hash, expected interface, and governance scope. Semantic Kernel's memory connectors implement the storage layer behind the schema's memory field, with provenance and lineage attached at write time by a small adapter that intercepts SemanticTextMemory writes. Semantic Kernel's filters provide a natural interception point for governance enforcement: a filter that checks every prompt rendering and function invocation against the agent's typed governance field is a thin wrapper around the existing IPromptRenderFilter and IFunctionInvocationFilter interfaces.
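The interception pattern can be shown in miniature. This is a generic wrapper, not the real SK filter interfaces (IPromptRenderFilter and IFunctionInvocationFilter carry richer context objects), and the governance field it checks is hypothetical:

```python
class GovernanceViolation(Exception):
    pass

def governed(fn, *, allowed_scopes: set[str], scope: str):
    """Wrap a plugin function so every invocation is checked against
    a (hypothetical) typed governance field before it runs."""
    def wrapper(*args, **kwargs):
        if scope not in allowed_scopes:
            raise GovernanceViolation(f"scope {scope!r} not permitted")
        return fn(*args, **kwargs)
    return wrapper

# Agent's governance permits only internal-scope functions:
send_email = governed(lambda to, body: f"sent to {to}",
                      allowed_scopes={"internal"}, scope="internal")
print(send_email("ops@example.com", "hi"))  # permitted

exfiltrate = governed(lambda to, body: f"sent to {to}",
                      allowed_scopes={"internal"}, scope="external")
try:
    exfiltrate("rival@example.com", "secrets")
except GovernanceViolation as e:
    print("blocked:", e)  # denied before the function body runs
```

In an actual integration the wrapper body would live inside an SK function-invocation filter, so existing plugins need no changes.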

The integration depth that customers choose is a function of regulatory pressure. A development team that simply wants reproducibility can adopt the schema as a manifest format, generating it at build time and verifying it at startup, with no runtime change to Semantic Kernel itself. A team operating under sectoral regulation — financial-services model risk, healthcare AI governance, public-sector AI accountability — can adopt the verifying filters, the provenance-attaching memory adapter, and the lineage-recording instantiation wrapper, gaining structural compliance evidence without rewriting plugins. A team building a multi-agent product can adopt the full primitive and gain the cross-agent delegation guarantees that Semantic Kernel's plugin model alone cannot provide, because delegating agents can verify each other's identity, governance, and skills before exchanging tasks.

Azure alignment is preserved. Managed identities, Key Vault, and Purview continue to play their existing roles — the schema primitive uses them as the natural backing store for keying material, governance signatures, and provenance lineage rather than competing with them. Microsoft's investment in enterprise AI governance becomes the substrate the primitive runs on, not a parallel stack.

Commercial and Licensing Posture

Semantic Kernel's customer base — enterprise development organizations, ISVs targeting Microsoft-shop accounts, Azure-aligned system integrators — is the natural licensing audience for the agent-schema primitive. The primitive is licensed non-exclusively, with reference adapters that target Semantic Kernel's plugin, memory, and filter interfaces published under terms compatible with the SDK's MIT licensing so adopters do not face license-compatibility friction in their existing pipelines. Field-of-use scoping accommodates the regulated verticals — financial services, healthcare, public sector — where structural agent identity is a procurement requirement, while leaving unregulated deployments free to adopt the schema as a reproducibility tool without commercial overhead.

For Microsoft directly, the primitive is complementary to Azure AI Foundry's governance roadmap rather than competitive: Foundry's tenant-side controls and the schema's per-agent structural identity address different layers of the same problem. For ISVs, the primitive lets them ship Semantic-Kernel-based agents that customers in regulated industries can deploy without bespoke compliance work. For end customers, structural agent identity converts agent governance from a process artifact maintained alongside the code into a property of the agent itself, durable across redeployments and verifiable by parties who never had access to the build pipeline.
