Consul's Service Catalog Is Brilliant Infrastructure. It Is Still a Central Registry.

by Nick Clark | Published March 27, 2026

Consul provides consistent, Raft-replicated service discovery across multi-datacenter topologies, but service discovery and namespace governance are distinct problems. This article examines why Consul's central registry model, designed for health-checked service lookup, cannot support cross-organization federation, scoped namespace policy, or local structural adaptation. The wall that service meshes keep hitting is not a discovery problem but a governance problem, and resolving it requires distributing namespace authority rather than replicating a central catalog.


Consul solved a real and hard problem. In a dynamic infrastructure where service instances appear and disappear continuously, where IP addresses are ephemeral and health states change constantly, you need a system that knows what is running, where it is, and whether it is healthy. Consul built that system: a distributed service catalog that agents replicate using Raft consensus, a DNS interface that routes traffic to healthy instances, and a health checking mechanism that keeps the catalog current in real time.
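A minimal service definition makes the registration-plus-health-check loop concrete. The following sketch uses Consul's HCL service definition format (field names follow the standard service definition schema; the service name, port, and endpoint are illustrative):

```hcl
# Registers a "web" service with the local Consul agent and attaches an
# HTTP health check; instances with failing checks are dropped from DNS
# answers, which is what keeps the catalog current in real time.
service {
  name = "web"
  port = 8080

  check {
    http     = "http://localhost:8080/health"
    interval = "10s"
    timeout  = "2s"
  }
}
```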

That is not a trivial engineering achievement. The consistency guarantees Consul provides, combined with its support for multiple runtimes and multi-datacenter topologies, made it the foundational service discovery layer for a significant fraction of production microservice infrastructure.

The structural problem is not that Consul is poorly designed. It is that Consul is a central registry — by design, deliberately, correctly for its stated purpose — and a central registry has properties that become constraints as distributed systems scale beyond single datacenter topologies toward genuinely federated infrastructure.

What the service catalog is and how it governs

Consul's documentation describes the control plane as maintaining "a central registry that keeps track of all services and their respective IP addresses." The service catalog is described explicitly as "a single source of truth that allows your services to query and communicate with each other."

That single source of truth lives in Consul server agents, which maintain consistency through the Raft consensus protocol. When a service registers, the catalog records it. When a health check fails, the catalog updates. When a service is deregistered, the catalog reflects it. The catalog is always consistent because it is always authoritative: the server agents hold the state, and client agents query them.
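As a toy illustration of that single-authoritative-store pattern (not Consul's actual implementation, which replicates this state across server agents via Raft), consider:

```python
class Catalog:
    """Toy model of an authoritative service catalog.

    All state lives in one place; clients query it rather than holding
    their own copy. Consul replicates this role across server agents
    with Raft consensus; this sketch ignores replication entirely.
    """

    def __init__(self):
        self._services = {}  # service name -> {instance address: healthy?}

    def register(self, name, address):
        self._services.setdefault(name, {})[address] = True

    def set_health(self, name, address, healthy):
        self._services[name][address] = healthy

    def deregister(self, name, address):
        self._services.get(name, {}).pop(address, None)

    def healthy_instances(self, name):
        # Only healthy instances are returned, mirroring Consul's DNS
        # behavior of omitting failing instances from answers.
        return [a for a, ok in self._services.get(name, {}).items() if ok]


catalog = Catalog()
catalog.register("web", "10.0.0.1:8080")
catalog.register("web", "10.0.0.2:8080")
catalog.set_health("web", "10.0.0.2:8080", False)
print(catalog.healthy_instances("web"))  # → ['10.0.0.1:8080']
```

Every mutation flows through the one store, which is exactly why the answer is always consistent, and exactly why the store is always the authority.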

Service policy follows the same pattern. Service intentions, configuration entries, and ACL policies are defined at the control plane level and propagated to the data plane. The sidecar proxies and gateway proxies that handle actual traffic are configured by the control plane. They execute policy; they do not hold it.
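Consul expresses this propagated policy as configuration entries written at the control plane. A representative service-intentions entry (service names illustrative):

```hcl
# Config entry declaring which sources may call "backend".
# Written to the control plane; the sidecar proxies enforce it
# but do not own it.
Kind = "service-intentions"
Name = "backend"
Sources = [
  {
    Name   = "frontend"
    Action = "allow"
  }
]
```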

In a WAN-federated multi-datacenter deployment, Consul requires designating a primary datacenter that contains authoritative information about all datacenters, including service mesh configurations and ACL resources. Secondary datacenters replicate from the primary. If the primary is unavailable, its resources are unavailable to secondary datacenters that depend on them. The distribution is geographic. The authority is central.
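In agent configuration, the asymmetry is explicit: every datacenter names the same primary. The fields below follow Consul's agent configuration reference; datacenter names are illustrative:

```hcl
# Server agent in a secondary datacenter of a WAN-federated deployment.
# ACL resources and mesh configuration replicate from dc1; if dc1 is
# unavailable, those resources cannot be created or updated here.
datacenter         = "dc2"
primary_datacenter = "dc1"
server             = true
```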

Where this becomes a structural constraint

For a single organization managing services across known datacenters under unified governance, Consul's model is the right design. The single source of truth is the feature. Operations teams need to know that the catalog reflects reality, that policy is consistent, that health state is accurate. Raft consensus provides those guarantees precisely because the catalog is centrally authoritative.

The constraint appears at three boundaries.

Namespace governance at service boundaries. In Consul, the namespace for services within a datacenter is governed by that datacenter's Consul server agents. Two independently operated Consul deployments cannot federate their namespaces unless one becomes subordinate to the other or a separate layer of external coordination is added. There is no mechanism by which a service scope can hold its own namespace policy and still be resolvable from other scopes without a single catalog becoming authoritative over both.

Structural adaptation without catalog involvement. When a service scope grows — when a namespace needs to split, when a new organizational boundary emerges, when traffic patterns require restructuring how services are grouped and discovered — those changes happen through the catalog. The catalog is the source of truth about what exists. A service scope cannot propose its own structural change and have it evaluated locally. The change goes through the catalog, which means it goes through the control plane.

Cross-organization federation. The scenario Consul is increasingly being asked to support — two organizations with independent Consul deployments that need their services to be discoverable to each other without one subordinating its namespace to the other — has no clean solution within the current architecture. Cluster peering is the closest available mechanism, but it still requires explicit configuration at the control plane level of both clusters and does not support local namespace governance across the boundary.
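For reference, cluster peering is driven from each control plane by exchanging a token (commands per the Consul CLI, available since Consul 1.13; cluster names are illustrative, and the token placeholder must be filled in from the first command's output):

```shell
# On cluster A: mint a peering token intended for cluster B.
consul peering generate-token -name cluster-b

# On cluster B: establish the peering using the token from cluster A.
consul peering establish -name cluster-a -peering-token "<token>"
```

Both steps are explicit, operator-driven, control-plane actions on each side, which is the point: the peering relationship exists between control planes, not within the namespace.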

The difference between service discovery and namespace governance

These are two distinct problems that Consul addresses with the same mechanism, which is why the structural limit only becomes visible at scale.

Service discovery is: given a service name, find healthy instances. The catalog solves this correctly. The answer needs to be consistent and current. Central authority under Raft consensus is the right model.
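Concretely, the discovery question is the one a DNS query answers (Consul's DNS interface listens on port 8600 by default, assuming a local agent):

```shell
# Ask the local Consul agent for healthy instances of "web";
# instances with failing health checks are omitted from the answer.
dig @127.0.0.1 -p 8600 web.service.consul SRV
```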

Namespace governance is: given a region of the service namespace, who holds the policy for how that region evolves, what names mean within it, how it can be restructured, and how changes to it are validated and recorded. This is a different problem. It does not require global consistency. It requires that the nodes responsible for a scope hold the policy for that scope, and that mutations to the scope be validated by those nodes through local consensus.

An adaptive, anchor-governed index provides the second capability without displacing the first. The service catalog continues to handle service discovery: registration, health checking, DNS resolution. The adaptive index layer governs the namespace: how scopes are defined, how they can evolve, what policy applies within each scope, how mutations are proposed and validated, and how the history of structural changes is preserved. The control plane does not disappear. It distributes into the namespace itself, with each scope governed locally by the anchors responsible for it.
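No reference implementation of this model exists in Consul; purely as a hypothetical sketch of the governance half, the following Python models a scope whose anchors accept or reject a structural mutation by local majority, with no global catalog involved. All names, the data layout, and the voting scheme are invented for illustration:

```python
class Scope:
    """Hypothetical scope in an anchor-governed namespace (not a Consul API).

    The anchors responsible for the scope hold its policy; a mutation to
    the scope is validated by a local majority of those anchors rather
    than by a central control plane.
    """

    def __init__(self, name, anchors):
        self.name = name
        self.anchors = anchors   # node IDs holding policy for this scope
        self.entries = set()     # names currently defined in this scope
        self.history = []        # preserved record of structural changes

    def propose(self, description, mutate, votes):
        """Apply `mutate` to the scope iff a majority of anchors approve."""
        approvals = sum(1 for a in self.anchors if votes.get(a))
        accepted = approvals > len(self.anchors) // 2
        if accepted:
            mutate(self.entries)
            self.history.append(description)
        return accepted


payments = Scope("payments", anchors=["anchor-1", "anchor-2", "anchor-3"])

# Two of three anchors approve adding a sub-name: the mutation is
# applied and recorded locally, with no central catalog consulted.
ok = payments.propose(
    "add payments/ledger",
    lambda entries: entries.add("payments/ledger"),
    votes={"anchor-1": True, "anchor-2": True, "anchor-3": False},
)
print(ok, payments.entries, payments.history)
```

The sketch's only claim is structural: validation and history live with the scope's own anchors, so no node outside the scope has to be authoritative for the scope to evolve.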

The practical consequence is concrete. Two independently operated service meshes, each with its own Consul deployment, can federate at the namespace layer without either catalog becoming authoritative over the other. Each scope resolves its own segment of the namespace. Cross-scope resolution traverses the hierarchy through alias delegation, not through a shared catalog. Service policy at each scope boundary is held by the anchor nodes governing that scope, not defined in a central configuration and propagated outward.
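Equally hypothetical, cross-scope resolution via alias delegation can be sketched as each scope owning its own name table, where an entry is either a local record or an alias pointing into another scope (the structure, scope names, and addresses are invented for illustration):

```python
# Hypothetical alias-delegation resolution: each scope owns its own name
# table; an alias hands resolution off to another scope instead of
# requiring one shared, authoritative catalog over both.

def resolve(scopes, scope_name, service):
    entry = scopes[scope_name].get(service)
    while isinstance(entry, tuple) and entry[0] == "alias":
        _, target_scope, target_name = entry
        entry = scopes[target_scope].get(target_name)
    return entry  # a terminal record, or None if the name is undefined


# Mesh A delegates "billing" into mesh B's scope; mesh B holds the record
# and remains the only authority over its own segment of the namespace.
scopes = {
    "mesh-a": {"billing": ("alias", "mesh-b", "billing")},
    "mesh-b": {"billing": "10.0.2.7:9090"},
}
print(resolve(scopes, "mesh-a", "billing"))  # → 10.0.2.7:9090
```

Mesh A never copies mesh B's records; it holds only the delegation, so B can restructure its segment without A's catalog being involved at all.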
