Nebula Built Overlay Mesh Networks. The Certificate Authority Is Still Central.

by Nick Clark | Published March 28, 2026

Nebula is the overlay mesh networking system that Slack built to solve its own service-to-service connectivity problem and then released as open source — a lightweight binary, a Lighthouse discovery node, optional relay nodes for NAT traversal, and a certificate-based identity model in which each participating host receives a signed certificate from a centrally-operated Nebula CA that binds the host's name, its assigned overlay IP, its group memberships, and a validity period. The data path is genuinely peer-to-peer: once two hosts have discovered each other through a Lighthouse and completed a Noise-protocol handshake mutually authenticated against the CA's public key, traffic flows directly between them over an encrypted UDP tunnel without traversing any central server. The control path, however, is not peer-to-peer at all. The certificate authority that defines who exists in the mesh, what groups they belong to, and what IP they are entitled to claim is a single signing key whose compromise voids every identity and every access decision in the network. And the access decisions themselves — the firewall rules that determine which groups may speak to which groups on which ports — are static configuration that rides on certificate metadata rather than traveling with the payload. The architectural gap is between mesh transport, which Nebula has solved well, and memory-native protocol semantics, in which the rules governing a packet's handling travel inside the packet itself rather than being looked up against a static, centrally-issued identity.

Vendor and Product Reality

Nebula was developed inside Slack between 2017 and 2019 to replace a sprawl of point-to-point VPN tunnels and bastion-host configurations with a single overlay mesh that could connect tens of thousands of hosts across multiple cloud providers, on-premises data centers, and developer laptops. Slack open-sourced the project in late 2019 under the MIT license, and stewardship has since passed to Defined Networking — a commercial entity founded by Nebula's original authors — which offers a managed control plane, certificate lifecycle automation, and enterprise support on top of the open-source data plane. The codebase is written in Go, ships as a single statically linked binary, runs on Linux, macOS, Windows, FreeBSD, iOS, and Android, and is small enough to deploy on hardware as constrained as a Raspberry Pi or as ephemeral as a per-pod sidecar in a Kubernetes cluster.

The architecture has four moving parts. Each participating host runs the Nebula binary, which establishes a virtual network interface (a TUN device on Unix-like systems) and assigns it an overlay IP drawn from the certificate the host has been issued. Lighthouse nodes are well-known hosts in the mesh whose addresses are reachable from any other node and whose role is to answer "where is host X?" queries by returning the host's last-known external endpoint; they do not relay traffic by default. Relay nodes optionally sit on routable network segments and can forward traffic for hosts that are unable to establish a direct UDP path because both ends are behind symmetric NAT. The certificate authority, finally, is the signing key, kept online or offline at the operator's discretion, that issues and revokes the certificates the rest of the system depends on. Cryptographically, Nebula uses a Noise-protocol handshake — Curve25519 for key agreement, AES-256-GCM or ChaCha20-Poly1305 for the data channel (selectable in configuration), with mutual authentication binding each side's certificate to the handshake transcript — and is often described in marketing materials as offering "mTLS-grade" peer authentication, though the on-the-wire protocol is Noise rather than TLS.
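
To make the identity model concrete, here is a Go sketch of the fields a Nebula-style certificate binds together. The types are hypothetical stand-ins for illustration, not Nebula's actual cert package; what matters is that every field below is fixed at signing time.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// HostCertificate is an illustrative stand-in for what a Nebula
// certificate binds together: one public key, one name, one overlay
// IP, a set of group labels, and a validity window, all covered by a
// single CA signature. Nebula's real cert package differs in detail.
type HostCertificate struct {
	Name      string
	OverlayIP net.IPNet
	Groups    []string
	NotBefore time.Time
	NotAfter  time.Time
	PublicKey []byte // the host's Curve25519 public key
	Signature []byte // CA signature over every field above
}

func main() {
	cert := HostCertificate{
		Name:      "web-01",
		OverlayIP: net.IPNet{IP: net.IPv4(192, 168, 100, 5), Mask: net.CIDRMask(24, 32)},
		Groups:    []string{"frontend", "web"},
		NotBefore: time.Now(),
		NotAfter:  time.Now().Add(30 * 24 * time.Hour),
	}
	fmt.Printf("%s claims %s as a member of %v\n", cert.Name, cert.OverlayIP.String(), cert.Groups)
}
```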

The Architectural Gap

Nebula's certificate model is the cleanest small-scope implementation of certificate-based identity in the modern mesh-VPN landscape, and that cleanness is precisely where the gap lives. Each certificate is a signed assertion that a particular Curve25519 public key corresponds to a hostname, an overlay IP, a set of groups, and an expiration. The CA's signing key is the root of every trust decision: when host A receives a handshake from a peer claiming to be host B, the only question A asks is whether B's certificate validates against the CA public key A has pinned in its configuration. If yes, B is whoever B's certificate says B is. If the CA's signing key is compromised, an attacker can mint certificates that present any name, claim any group, and bind any IP, and every host in the mesh will accept them as legitimate. The data plane is decentralized; the trust authority is a single key.
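
The whole trust decision reduces to a signature check plus a validity-window check. The sketch below assumes an Ed25519 CA key (the key type Nebula's CA uses) pinned in local configuration; signedDetails stands in for the canonical byte encoding of the certificate's fields, and the types are simplified for illustration.

```go
package meshtrust

import (
	"crypto/ed25519"
	"errors"
	"time"
)

// verifyPeer reduces the only identity question a Nebula host asks at
// handshake time: is the peer's certificate inside its validity
// window, and does it validate against the CA public key pinned in
// local configuration?
func verifyPeer(pinnedCA ed25519.PublicKey, signedDetails, signature []byte,
	notBefore, notAfter time.Time) error {
	now := time.Now()
	if now.Before(notBefore) || now.After(notAfter) {
		return errors.New("certificate outside validity window")
	}
	if !ed25519.Verify(pinnedCA, signedDetails, signature) {
		return errors.New("certificate not signed by pinned CA")
	}
	// Passing this check is the entire identity decision: the peer is
	// whoever its certificate says it is. A compromised CA key can
	// therefore mint identities every host in the mesh will accept.
	return nil
}
```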

Compounding this, Nebula's access control travels not with the payload but with the identity. Each host's firewall configuration is a static list of rules of the form "allow inbound on port 443 from group=frontend" or "allow ICMP from group=monitoring." These rules are evaluated at handshake time and at packet ingress against the metadata baked into the peer's certificate. They cannot adapt to the content of the traffic, the operational context of the sender, the trust scope of the data being conveyed, or any property that emerges from the network's accumulated history rather than from the certificate's static fields. A packet carrying highly sensitive data and a packet carrying telemetry are indistinguishable to the firewall as long as both originate from a peer whose certificate carries the right group label. Revocation, which would in principle let an operator respond to a changed trust posture, is awkward in the deployed reality: Nebula supports blocklisting certificate fingerprints in each host's configuration, but distributing an updated blocklist to every host in a fast-moving mesh is the same hard problem PKI has had for two decades, and the practical operational answer is short certificate lifetimes and frequent reissuance, which only deepens the dependency on the central CA.
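
The content-blindness is visible in a reduction of the rule check itself. The sketch below is hypothetical Go rather than Nebula's firewall code, but it preserves the essential shape: the only inputs are the port, the protocol, and the group labels read from the peer's certificate.

```go
package fwsketch

// Rule mirrors the shape of a Nebula inbound firewall entry:
// "allow <proto>/<port> from group=<group>".
type Rule struct {
	Port  int
	Proto string
	Group string
}

// allowed evaluates ingress the way a static, identity-borne firewall
// must: against certificate metadata alone. Nothing about the payload,
// its sensitivity, its lineage, or the sender's recent behavior is in
// scope here, so a sensitive write and a telemetry read arriving under
// the same certificate are indistinguishable.
func allowed(rules []Rule, peerGroups []string, port int, proto string) bool {
	for _, r := range rules {
		if r.Port != port || r.Proto != proto {
			continue
		}
		for _, g := range peerGroups {
			if g == r.Group {
				return true
			}
		}
	}
	return false
}
```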

The deeper structural observation is that Nebula has decentralized exactly the layer that was easiest to decentralize — the data path — while leaving in place the layer where centralization is most consequential: the definition of who exists, what they may claim, and what they are permitted to do. A memory-native protocol inverts this distribution. The transport layer is allowed to remain whatever the deployment finds convenient — Nebula tunnels, WireGuard tunnels, raw UDP, even a routed IP underlay — while the semantics that govern a unit of communication are carried with the unit itself rather than fetched from a static authority.

What the Memory-Native Protocol Primitive Provides

The memory-native protocol primitive treats each unit of communication as a self-describing object whose handling rules are intrinsic to it rather than asserted by an external authority. Identity, in this model, is not a certificate-borne label but a property derived from the continuity of behavior an endpoint has accumulated and the evidence of that continuity it can present at the moment of communication. Trust is not a binary check against a CA-signed credential but a graded evaluation of the relationship between the payload's stated trust scope, the sender's behavioral history, and the receiver's governance requirements. Routing is not a static lookup of overlay IP against firewall rule but a dynamic decision made by each forwarding hop on the basis of rules that travel inside the packet — rules that specify what handling the payload is entitled to, what handling it is forbidden from receiving, and what evidence must accompany any non-trivial handling decision.

Concretely, a memory-native packet carries three structural elements that a Nebula packet does not. The first is a behavioral identity proof — a compact attestation that the sending endpoint is structurally continuous with whatever historical entity the receiver has previously interacted with under the same identity. The second is a payload-bound rule set — the governance that applies to this specific unit of data, encoded in a form the receiver can evaluate without consulting any external authority. The third is a lineage trace — the chain of decisions that produced this payload from earlier payloads, allowing the receiver to audit not just the current handling request but the history of handling requests that led to it. Each of these is the kind of information that, in a Nebula deployment, exists only as static metadata in a certificate or static configuration in a firewall file, if it exists at all.
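
Those three elements suggest a wire shape along the following lines. Every name in this Go sketch is hypothetical; the point is only that the governance a receiver needs travels inside the unit instead of being looked up in a certificate or a firewall file.

```go
package mnsketch

import "time"

// Envelope is a hypothetical self-describing unit of communication:
// everything a receiver needs to make a handling decision rides with
// the payload itself.
type Envelope struct {
	Payload []byte

	// Behavioral identity proof: a compact attestation that the sender
	// is structurally continuous with the historical entity the
	// receiver has previously interacted with under this identity.
	IdentityProof []byte

	// Payload-bound rule set: the governance for this specific unit,
	// evaluable without consulting any external authority.
	Rules RuleSet

	// Lineage trace: the chain of decisions that produced this payload
	// from earlier payloads, auditable at the receiver.
	Lineage []LineageEntry
}

type RuleSet struct {
	TrustScope        string   // e.g. "tenant-a/config"
	PermittedHandling []string // handling the payload is entitled to
	ForbiddenHandling []string // handling it must never receive
	EvidenceRequired  bool     // non-trivial handling must carry evidence
}

type LineageEntry struct {
	Actor    string    // identity that performed the step
	Decision string    // what was decided or done
	At       time.Time // when
}
```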

Composition Pathway with Nebula

Nebula and a memory-native protocol are not competitors. Nebula's mesh is a transport that solves NAT traversal, peer discovery, and encrypted point-to-point delivery, and it solves them well enough that there is little reason to displace it. A memory-native protocol layered above Nebula treats Nebula tunnels as one of several possible underlays and uses them for what they are good at. Hosts continue to receive Nebula certificates and continue to handshake into the mesh against the Nebula CA; that machinery is responsible for keeping the bytes flowing and the bytes private in transit. Above the Nebula tunnel, the memory-native layer wraps each application payload with the behavioral identity proof, the payload-bound rule set, and the lineage trace, and the receiving application evaluates these intrinsic properties before acting on the payload — independently of, and in addition to, whatever the Nebula firewall has already decided about the packet at ingress.
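
In code terms the composition is ordinary application-level wrapping: the Nebula tunnel presents as a normal network path, and the memory-native checks run at the receiving application before the payload is acted on. A minimal hypothetical sketch, with the envelope reduced to a few fields:

```go
package composesketch

import (
	"encoding/json"
	"fmt"
	"net"
)

// envelope is a reduced stand-in for the memory-native wrapper.
type envelope struct {
	IdentityProof []byte   `json:"identity_proof"`
	TrustScope    string   `json:"trust_scope"`
	Forbidden     []string `json:"forbidden_handling"`
	Payload       []byte   `json:"payload"`
}

// receive reads one wrapped payload from a connection that already
// runs inside a Nebula tunnel. Nebula has authenticated the peer and
// encrypted the path; evaluate makes the governance decision from the
// unit's intrinsic properties, independently of whatever the Nebula
// firewall decided at ingress.
func receive(conn net.Conn, evaluate func(envelope) error) ([]byte, error) {
	var env envelope
	if err := json.NewDecoder(conn).Decode(&env); err != nil {
		return nil, err
	}
	if err := evaluate(env); err != nil {
		return nil, fmt.Errorf("rejected by payload-bound rules: %w", err)
	}
	return env.Payload, nil
}
```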

The migration story for an existing Nebula deployment is incremental. The first step is to introduce the memory-native wrapper for a single class of high-value payloads — sensitive configuration changes, cross-organization data transfers, traffic between tenants in a multi-tenant deployment — while leaving routine traffic on Nebula's existing certificate-and-firewall model. As confidence grows, additional payload classes adopt the wrapper, and the role of the Nebula CA contracts toward what it is genuinely good at: bootstrapping an encrypted transport between hosts. The CA never disappears, because something must establish the underlay; but its compromise no longer voids the access decisions that matter, because those decisions have moved into rules that ride with each payload and into identity claims that are properties of accumulated behavior rather than of a single signed certificate.
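
One way to picture the cut-over is a per-class policy gate on the sending side, so the wrapper can be adopted one payload class at a time. The class names in this Go sketch are hypothetical examples:

```go
package migratesketch

// requiresEnvelope encodes an incremental migration policy: only the
// named high-value payload classes must carry the memory-native
// wrapper, while everything else stays on Nebula's existing
// certificate-and-firewall model.
var requiresEnvelope = map[string]bool{
	"config-change":      true, // sensitive configuration changes
	"cross-org-transfer": true, // cross-organization data transfers
	"tenant-boundary":    true, // traffic between tenants
	"telemetry":          false,
	"health-check":       false,
}

// mustWrap reports whether a payload class has been migrated to the
// memory-native wrapper yet.
func mustWrap(payloadClass string) bool {
	return requiresEnvelope[payloadClass]
}
```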

Commercial and Licensing Posture

Nebula's commercial trajectory — open-source data plane, managed-control-plane SaaS via Defined Networking, enterprise customers seeking a Tailscale-class operational story without WireGuard's protocol assumptions — has converged on a market that is acutely aware of the limitations of static certificate authorities. The customers most likely to deploy Nebula at scale are also the customers most likely to be subject to regulatory regimes (financial-services data residency rules, healthcare access governance, cross-border transfer obligations under emerging data acts) that demand precisely the kind of payload-bound, auditable, dynamically evaluable handling rules that certificate metadata cannot express. A licensing arrangement that lets a Nebula deployment — open-source or Defined Networking-managed — adopt a memory-native protocol layer above the existing transport addresses these regulatory pressures without forcing the customer to displace the mesh they have already operationalized.

The patent positions the primitive at the protocol layer above mesh transport, in the architectural slot that Nebula and its peers have left structurally open. Defined Networking, the principal commercial steward of Nebula, is one natural licensee. Other natural licensees are the broader ecosystem of overlay-mesh and zero-trust-network vendors — Tailscale, ZeroTier, Cloudflare's Magic WAN, the cloud-provider service-mesh offerings — each of whom faces the same architectural ceiling at the static-rules layer and the same regulatory pressure pushing them past it. The commercial proposition is a transport-agnostic layer that converts existing mesh deployments from "decentralized data path, centralized identity authority" into "decentralized data path, intrinsic per-payload governance," and that does so without asking the operator to abandon any of the engineering they have already deployed.

Invented by Nick Clark. Founding Investors: Anonymous, Devin Wilkie.