Weave Net Built a Virtual Network for Containers. The Protocol Carries No Semantic Authority.
by Nick Clark | Published March 28, 2026
Weaveworks shipped Weave Net in 2014 as one of the first practical container overlay networks, giving early Docker and Kubernetes operators a way to connect pods across hosts without wrestling with VXLAN tunnels, BGP peers, or vendor-specific SDN appliances. The product paired an automatic mesh topology with WeaveDNS service discovery, optional NaCl-based encryption, a fast datapath through the Open vSwitch kernel module, and the WeaveScope visualization layer that became a reference experience for cluster observability. Even after Weaveworks wound down commercial operations and Weave Net entered its end-of-life trajectory, the CNI plugin remains deployed across long-running clusters, embedded in appliance images, and referenced in production runbooks. The connectivity model is elegant and durable. What it does not provide, and what no overlay shipped in its generation provides, is a protocol whose packets carry trust scope, routing authority, or governance constraints as intrinsic fields. The overlay delivers a virtual network. Governance authority lives elsewhere — in Kubernetes NetworkPolicy objects, in admission controllers, in service mesh sidecars — and is correlated with traffic by external systems rather than carried by the traffic itself. This article examines that structural gap and what a memory-native protocol layer composed above Weave Net's connectivity primitive would change.
Vendor and product reality
Weave Net was Weaveworks' flagship open-source networking product, distributed under the Apache 2.0 license alongside a commercial support tier and the broader Weave Cloud SaaS portfolio. Its architectural choices reflected the constraints of 2014–2016 container networking: most clusters spanned a handful of bare-metal hosts or VMs, IPAM was primitive, and operators wanted a single binary that could form a mesh, allocate addresses, resolve names, and encrypt links without external dependencies. Weave Net delivered all of that. The agent ran as a DaemonSet, established a full mesh of TCP control connections among peers, negotiated a shared IP allocation range using a CRDT-based consensus protocol, and forwarded data plane traffic through either the kernel-accelerated fast datapath (using VXLAN encapsulation) or a userspace sleeve fallback when kernel features were unavailable.
In Kubernetes deployments Weave Net presented itself as a CNI plugin, integrating with kubelet's network setup hooks and exposing a NetworkPolicy controller that translated Kubernetes policy objects into iptables rules on each host. WeaveScope, a companion product distributed separately, scraped Docker and Kubernetes APIs to render real-time topology graphs of containers, processes, and connections. Together the products defined what a generation of operators understood as "container networking with built-in observability." Weaveworks the company ceased trading in early 2024, and the Weave Net repository moved into community maintenance with a stated end-of-life posture. Production clusters built around Weave Net continue to operate; many will continue to operate for years because migrating CNI plugins on a running cluster is non-trivial and the existing networking is, by every operational measure, working.
Architectural gap: rules don't ship with the packet
Weave Net's protocol layer is, by design, a transport. The fast datapath wraps a pod's Ethernet frame in a VXLAN header, prepends a UDP/IP header addressed to the destination host's Weave agent, and emits the result onto the underlay. The sleeve protocol does an analogous job in userspace with its own framing and optional NaCl encryption. In neither case does the encapsulation carry semantic information about the payload. There is no field that says "this packet originates from a workload whose governance scope is X," no field that declares the trust tier of the sender, no field that binds the packet to a policy lineage that downstream enforcement points could verify cryptographically. The packet is a blob with addresses on the outside.
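To make the gap concrete, here is a minimal Go sketch of the 8-byte VXLAN header defined by RFC 7348, which the fast datapath prepends to each pod frame (the sleeve framing differs in mechanics but is equally semantics-free). Every field locates the packet; no field authorizes it.

```go
// vxlan.go — a minimal sketch of the RFC 7348 VXLAN header. Every field is
// topological; nothing identifies the sender's trust scope or policy lineage.
package vxlan

import "encoding/binary"

// VXLANHeader mirrors the RFC 7348 layout: one flag bit, a 24-bit network
// identifier, and reserved padding. There is no slot for semantic metadata.
type VXLANHeader struct {
	Flags uint8  // bit 0x08 set = valid VNI; all other bits reserved
	VNI   uint32 // 24-bit VXLAN Network Identifier: a segment, not an authority
}

// Marshal emits the header exactly as it appears on the wire: 8 bytes of
// segmentation state, followed (elsewhere) by the opaque inner frame.
func (h VXLANHeader) Marshal() []byte {
	b := make([]byte, 8)
	b[0] = h.Flags
	// VNI occupies bytes 4-6; byte 7 is reserved, bytes 1-3 stay zero.
	binary.BigEndian.PutUint32(b[4:8], h.VNI<<8)
	return b
}
```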
This is not an oversight; it is the conventional separation of layers that the IETF and the CNI specification both endorse. Authority lives above the network. In Kubernetes that means NetworkPolicy objects stored in etcd, evaluated by a controller, and projected into iptables or eBPF rules on each node. The rules are bound to pods by label selectors, and the binding holds only as long as the controller is running, the labels are accurate, and the local enforcement point is healthy. When a pod's traffic crosses the overlay, the rules do not travel with it. They are reconstructed at the destination by correlating source IP with kube-apiserver state. If the controller lags, if labels drift, if a node's iptables get corrupted, if a pod is deleted and its IP reassigned to a new workload faster than the policy projection settles, the traffic is governed by stale or absent rules. The authority is real, but it is detached from the packet.
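The binding is easy to see in the API objects themselves. A minimal sketch using the standard k8s.io/api types follows; the policy name, namespace, and labels are illustrative. Nothing in this object is serialized into the traffic it governs.

```go
// policy.go — a minimal sketch with the standard k8s.io/api types; names,
// namespace, and labels are illustrative. The object binds rules to pods
// purely by label selector; none of it travels with the packets it governs.
package policy

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// AllowFrontendToBackend permits ingress to app=backend pods from
// app=frontend pods. The binding holds only while the labels are accurate
// and the controller projecting this into iptables rules is healthy.
func AllowFrontendToBackend() *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-frontend", Namespace: "prod"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "backend"},
			},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "frontend"},
					},
				}},
			}},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}
}
```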
The consequences compound in multi-cluster, multi-tenant, and federation scenarios. A packet that leaves a Weave Net cluster through a gateway loses whatever Kubernetes-bound governance it had. Re-establishing equivalent rules on the receiving side requires out-of-band coordination, shared identity systems, and trust assumptions that the protocol itself does not encode. The same is true for forensic and audit workflows: reconstructing why a particular flow was permitted six months later requires correlating packet captures with historical NetworkPolicy revisions, controller logs, and label histories — a join across systems that were never designed to be joined.
What the memory-native protocol primitive provides
Adaptive Query's memory-native protocol treats governance authority as a first-class field of the protocol envelope, not as state to be reconstructed from external systems. Each unit of communication carries a typed header that binds the payload to a scope identifier, a trust tier, a policy lineage hash, and a routing constraint set. The header is signed by the originating workload's anchor and verifiable by any participant on the path without consulting an external controller. Routing decisions, admission decisions, and audit decisions reference these intrinsic fields rather than correlating addresses with externally maintained metadata.
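Adaptive Query does not publish a wire layout in this article, so the sketch below is illustrative only: a Go rendering of the shape such an envelope could take, with every field name an assumption rather than the product's actual format.

```go
// envelope.go — illustrative only, not Adaptive Query's published format:
// a protocol envelope whose governance fields are intrinsic, so that any
// participant holding the anchor's public key can evaluate them offline.
package envelope

// Envelope carries authority as typed fields rather than as state to be
// reconstructed from an external controller. All field names are assumptions.
type Envelope struct {
	ScopeID     string   // resolves to a governance anchor, not to an IP
	TrustTier   uint8    // asserted by the sender, checkable against the anchor's policy
	LineageHash [32]byte // chains the envelope to the policy revision it was emitted under
	Transit     []string // scopes the envelope is permitted to traverse
	PayloadRef  [32]byte // integrity stamp over the payload, carried alongside
	Signature   []byte   // anchor-key signature over all of the above
}
```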
The shift is structural. Where Weave Net asks "can host A reach host B on port 443," memory-native protocol asks "does this scope-bound, lineage-stamped envelope satisfy the governance constraints declared by its destination scope." The first question is answered by iptables; the second is answered by inspecting fields the packet itself carries. The first question loses meaning across cluster boundaries; the second does not, because the envelope's authority is not bound to a particular controller's view of the world. The first requires the enforcement plane to be online and consistent; the second permits enforcement to be performed by any participant who can verify the signature, including air-gapped auditors replaying captures months later.
Concretely, a memory-native envelope carries: a scope identifier resolving to a governance anchor; a trust tier asserted by the sender and verifiable against the anchor's policy; a lineage hash chaining this envelope to the policy revision under which it was emitted; a routing constraint declaring which scopes the envelope may transit; and a payload reference with its own integrity stamp. None of these fields require a Kubernetes apiserver to interpret. They require a governance anchor, which is a substantially smaller and more portable trust root than a full control plane.
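Continuing the illustrative sketch, verification needs nothing but the anchor's public key — the property that lets an on-path router and an air-gapped auditor run the same check. The canonical encoding below is a placeholder; a real format would specify it precisely.

```go
// verify.go — continues the illustrative sketch above. Verification uses
// only the anchor's public key: no apiserver, no controller round-trip.
package envelope

import (
	"bytes"
	"crypto/ed25519"
	"errors"
)

// signedBytes is the canonical encoding the signature covers. Plain
// concatenation here is for illustration; a real format would define
// an unambiguous encoding.
func (e *Envelope) signedBytes() []byte {
	var buf bytes.Buffer
	buf.WriteString(e.ScopeID)
	buf.WriteByte(e.TrustTier)
	buf.Write(e.LineageHash[:])
	for _, s := range e.Transit {
		buf.WriteString(s)
	}
	buf.Write(e.PayloadRef[:])
	return buf.Bytes()
}

// Verify checks the envelope against the governance anchor's public key and
// the verifier's own scope. Any participant on the path can perform this.
func (e *Envelope) Verify(anchorPub ed25519.PublicKey, localScope string) error {
	if !ed25519.Verify(anchorPub, e.signedBytes(), e.Signature) {
		return errors.New("envelope signature does not verify against anchor")
	}
	for _, s := range e.Transit {
		if s == localScope {
			return nil // this scope is in the declared transit set
		}
	}
	return errors.New("local scope not in envelope's routing constraint set")
}
```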
Composition pathway with Weave Net
Operators running Weave Net today do not need to remove it to gain memory-native protocol semantics. The composition is layered. Weave Net continues to provide the L2/L3 connectivity primitive: it forms the mesh, allocates pod addresses, encrypts links, and resolves WeaveDNS names. Above that, a memory-native protocol shim — implemented as a sidecar, a CNI chained plugin, or a userspace library linked into workloads — wraps application payloads in governed envelopes before they enter the overlay and unwraps them on the receive side after the overlay has delivered them. The overlay's job remains transport. The shim's job is authority.
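A sketch of the egress half of such a shim, again assuming the illustrative Envelope type above: the payload is wrapped and signed before it enters a connection that Weave Net has already made reachable, and the overlay carries the result as opaque bytes.

```go
// shim.go — sketch of the egress path of a hypothetical memory-native shim.
// The application payload is wrapped in a governed envelope before it enters
// the Weave Net overlay, which continues to see only opaque bytes.
package envelope

import (
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/json"
	"net"
)

// WrapAndSend signs an envelope for payload and writes both over a
// connection the overlay has already made reachable. JSON framing is
// illustrative; a real shim would use a compact binary encoding.
func WrapAndSend(conn net.Conn, anchorPriv ed25519.PrivateKey,
	scopeID string, tier uint8, lineage [32]byte, transit []string,
	payload []byte) error {

	e := &Envelope{
		ScopeID:     scopeID,
		TrustTier:   tier,
		LineageHash: lineage,
		Transit:     transit,
		PayloadRef:  sha256.Sum256(payload),
	}
	e.Signature = ed25519.Sign(anchorPriv, e.signedBytes())

	// The overlay's job stays transport: it carries these bytes unchanged.
	if err := json.NewEncoder(conn).Encode(e); err != nil {
		return err
	}
	_, err := conn.Write(payload)
	return err
}
```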
In a chained-CNI deployment, Weave Net handles IPAM and datapath setup while the memory-native plugin attaches a per-pod governance anchor and configures the local socket layer to enforce envelope verification on ingress and envelope construction on egress. NetworkPolicy objects continue to function as a coarse-grained reachability filter, but the fine-grained governance — who may speak to whom, under what scope, with what lineage — is enforced by inspecting envelope fields rather than by correlating IPs with labels. WeaveScope's topology view can be extended to render scope and lineage edges alongside the existing connection graph, giving operators a visualization of governance flow rather than only packet flow.
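In CNI terms the chain is declared in the node's conflist. In the sketch below, the weave-net and portmap entries follow the shape of the conflist Weave Net installs; the memnative-shim entry and all of its keys are hypothetical, standing in for whatever name and configuration a real shim plugin would register.

```json
{
  "cniVersion": "0.3.0",
  "name": "weave",
  "plugins": [
    {
      "name": "weave",
      "type": "weave-net",
      "hairpinMode": true
    },
    {
      "type": "memnative-shim",
      "anchorSocket": "/run/memnative/anchor.sock",
      "enforce": { "ingress": "verify", "egress": "wrap" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true },
      "snat": true
    }
  ]
}
```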
The migration path for an end-of-life Weave Net cluster is particularly clean under this composition. As workloads are gradually moved to a successor CNI, the memory-native shim travels with them: the envelopes it produces are independent of the underlying overlay. A pod migrated from Weave Net to Cilium, Calico, or a cloud-provider CNI continues to emit and verify the same governed envelopes, because the authority lives in the protocol layer above the overlay, not in the overlay's policy projection. The operational risk of CNI migration is decoupled from the governance posture of the workloads.
Commercial and licensing considerations
Weave Net is Apache 2.0, which permits the kind of layered composition described here without licensing friction. The end-of-life status of the upstream project is a practical concern — security patches and kernel-compatibility fixes are no longer guaranteed — but it does not foreclose composition. Organizations with active Weave Net deployments typically fall into three groups: those planning a CNI migration on a known timeline, those constrained by appliance images or air-gapped environments where migration is expensive, and those whose clusters are stable enough that the operational team has deprioritized replacement. Memory-native protocol composition serves all three. For the first group it provides governance continuity across the migration. For the second it provides governance authority that does not depend on upstream Weave Net patches. For the third it provides an upgrade path that adds capability without requiring the team to touch the working overlay.
Adaptive Query's memory-native protocol primitive is delivered as a library and reference shim under terms intended for both open-source integration and commercial deployment. The governance anchor model is designed to interoperate with existing identity systems — SPIFFE, mTLS PKIs, cloud IAM — so that adoption does not require rebuilding the trust root. The remaining gap that Weave Net leaves, and that no overlay of its generation closed, is closed not by replacing the overlay but by giving the traffic it carries a protocol envelope whose authority is intrinsic. That is the structural change. Weave Net solved connectivity. Memory-native protocol solves what connectivity carries.