Kubernetes Service Discovery Resolves Within Clusters. Cross-Cluster Namespace Authority Is Centralized.

by Nick Clark | Published March 27, 2026

Kubernetes solved service discovery inside clusters through CoreDNS, label selectors, and the Service abstraction. Within a single cluster, any pod can resolve any service by name. But across clusters, namespace resolution depends on external control planes, federation layers, or manual DNS delegation. The boundary between clusters is where namespace governance breaks down, and resolving that gap requires governed, scope-local indexing rather than better federation tooling. The AQ adaptive-indexing primitive — the anchor-governed namespace substrate disclosed in connection with US 2026/0010525 A1 — provides the architectural shape this resolution requires.


1. Vendor and Product Reality

Kubernetes is the de facto orchestration substrate for containerized workloads across the public cloud, private data centers, and the edge. Originating as the open-source successor to Google's internal Borg system and donated to the Cloud Native Computing Foundation in 2015, Kubernetes has become the most widely deployed container-orchestration control plane in the industry. Managed offerings including Amazon EKS, Google GKE, Microsoft AKS, Red Hat OpenShift, VMware Tanzu, and Rancher's RKE distribution underwrite the majority of enterprise container deployments. The project itself is governed by the CNCF and developed in the open by thousands of contributors representing every major cloud and infrastructure vendor.

Within the cluster boundary, Kubernetes provides a remarkably coherent set of primitives. The Pod is the unit of scheduling. The Service abstraction provides a stable identity that survives Pod restarts, scaling, and rolling deployments. Endpoints and EndpointSlices track which Pods currently back a Service. Labels and selectors compose dynamic membership. CoreDNS — the CNCF-graduated DNS server that replaced kube-dns as the default in modern distributions — fronts the cluster's internal namespace, resolving service names of the form service.namespace.svc.cluster.local against the API server's view of cluster state. kube-proxy implements the Service forwarding semantics, with eBPF- and IPVS-based dataplanes (Cilium, Calico's eBPF mode, kube-router) increasingly taking its place. Within a cluster, any workload can find any other workload by name, and that name remains stable through the operational churn of containerized environments.
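To make the in-cluster half of this concrete, the following minimal Go sketch resolves a Service name the way any Pod would: through DNS alone, with no Kubernetes client. The payments and checkout names are hypothetical placeholders; run outside a cluster, the lookup simply fails.

```go
// Minimal sketch: resolving an in-cluster Service name from inside a Pod.
// Any workload can do this with plain DNS, because CoreDNS fronts the
// cluster namespace; "payments" and "checkout" are hypothetical names.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// <service>.<namespace>.svc.cluster.local is the name CoreDNS answers,
	// backed by the API server's view of Endpoints/EndpointSlices.
	name := "payments.checkout.svc.cluster.local"

	addrs, err := net.DefaultResolver.LookupHost(ctx, name)
	if err != nil {
		fmt.Println("resolution failed (expected outside a cluster):", err)
		return
	}
	// The returned address is the Service's stable ClusterIP, which
	// survives Pod restarts, scaling, and rolling deployments.
	fmt.Println("resolved", name, "->", addrs)
}
```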

Around the core, the ecosystem has elaborated this further. Service meshes — Istio, Linkerd, Consul Connect, Cilium Service Mesh — add identity, mutual TLS, traffic policy, and observability on top of the Service abstraction. The Gateway API, the successor to Ingress, formalizes the north-south traffic surface. The CSI, CNI, and CRI interfaces define storage, network, and runtime extensibility. Multi-cluster extensions — KubeFed, Karmada, Liqo, Admiralty, Submariner, and the various service-mesh multi-cluster modes — address the case where a single application spans multiple clusters across regions, clouds, or administrative domains. Within the scope each of these projects defines, the engineering is mature and the operational practice is well established.

The architectural shape of all of this is consistent: the cluster is the unit of administration. Inside the cluster, the API server is the single source of truth, etcd is the durable store, and CoreDNS is the resolution surface. Across clusters, a layer above the clusters is required to provide cross-cluster namespace authority. That layer is where the structural gap analyzed below lives.

2. The Architectural Gap

The structural property Kubernetes does not exhibit at the cross-cluster boundary is namespace governance distributed to the scope where it operates. Inside a cluster, namespace authority is local: the cluster governs its own names. Across clusters, namespace authority is necessarily upstream: a federation control plane, a global service registry, an external DNS zone, or a multi-cluster service-mesh control plane holds the authority for resolving names that span cluster boundaries, and the participating clusters receive that authority rather than generating it.

Three structural sub-gaps follow. First, KubeFed and its successors propagate resource definitions from a host cluster to member clusters. The host cluster holds the authority. If the host cluster's control plane fails, becomes partitioned, or is subject to a regulatory or commercial disruption, every member cluster's view of the cross-cluster namespace is affected. The member clusters are participants, not governors.

Second, multi-cluster service meshes — Istio's primary-remote and multi-primary topologies, Linkerd's multi-cluster mode, Cilium ClusterMesh, Consul Federation — synchronize service catalogs across clusters through a shared control plane or a federation of control planes that exchange catalog state. The exchange is necessary because each cluster's CoreDNS resolves only its own namespace; the shared catalog is the surface that makes a service in cluster A visible as a name in cluster B. The catalog is, structurally, a central artifact, even when its physical implementation is replicated, because it represents authority over a namespace that no single cluster locally governs.
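A minimal sketch of the shared-catalog shape these systems converge on makes the structural point visible. All types here are hypothetical, not drawn from any of the named meshes: every cluster exports to and imports from one logical catalog, so the catalog holds authority that no member cluster holds locally.

```go
// Sketch of the shared-catalog pattern, with hypothetical types. Replicating
// the catalog's storage does not change the architecture: it remains the one
// authority for names that span clusters.
package main

import "fmt"

// ServiceRecord is what a cluster exports: a name plus reachable endpoints.
type ServiceRecord struct {
	Name      string   // e.g. "payments.checkout"
	Cluster   string   // exporting cluster
	Endpoints []string // gateway or pod addresses
}

// Catalog is the central artifact every cluster depends on.
type Catalog struct {
	records map[string][]ServiceRecord
}

func NewCatalog() *Catalog {
	return &Catalog{records: map[string][]ServiceRecord{}}
}

// Export: a cluster pushes its local service into the shared catalog.
func (c *Catalog) Export(r ServiceRecord) {
	c.records[r.Name] = append(c.records[r.Name], r)
}

// Import: cluster B sees cluster A's service only via the catalog.
func (c *Catalog) Import(name string) []ServiceRecord {
	return c.records[name]
}

func main() {
	catalog := NewCatalog()
	catalog.Export(ServiceRecord{
		Name: "payments.checkout", Cluster: "a", Endpoints: []string{"10.0.0.1:443"},
	})
	// Cluster B resolves the name through the catalog, not through cluster A.
	fmt.Println(catalog.Import("payments.checkout"))
}
```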

Third, external DNS solutions — ExternalDNS controllers writing to Route 53, Cloud DNS, or an enterprise BIND zone — delegate resolution to a zone file managed outside any cluster. The zone is the namespace authority. The clusters publish records to it; they do not govern it. A regulatory change, a registrar dispute, a DNSSEC key rotation, or a cloud-provider control plane incident at the DNS zone level propagates to every consumer of that zone because the zone is the single point of namespace authority for the cross-cluster scope.

The gap is not a tooling shortfall. It is architectural. Kubernetes was designed under the assumption that a cluster is a unit of administration. Within that unit, the design is excellent. Across units, the design assumes a higher administrative layer will provide what the cluster does not. Every available implementation of that higher layer concentrates authority at a layer above the participating clusters, by construction, because the clusters do not have an architecturally sanctioned way to participate in cross-cluster namespace governance as peers. Kubernetes cannot patch this from inside the API server, the CRD model, or the CoreDNS plugin chain; the patch would have to introduce a peer-authority model that the project's design does not currently anticipate.

3. What the AQ Adaptive-Indexing Primitive Provides

The Adaptive Query adaptive-indexing primitive specifies an anchor-governed namespace substrate in which namespace authority is held at the scope where it operates rather than concentrated above it, and in which cross-scope resolution traverses a hierarchy of locally governed segments rather than consulting a central registry. Property one — anchor-governed scope — requires that each namespace segment be governed by a defined set of anchor nodes that hold authority over the segment under a published policy. Anchors are not a federation control plane sitting above the segment; they are members of the segment that have been credentialed to govern its namespace.

Property two — scope-local validation — requires that resolution within a segment be validated locally against the segment's own policy, without recourse to an upstream registry. A consumer resolving a name within the segment receives an answer that the segment's anchors have validated under the segment's own governance, with no architectural dependency on a higher layer.
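A minimal Go sketch, with hypothetical types and a hypothetical quorum rule rather than anything drawn from the disclosure, illustrates properties one and two: the segment holds its own names and validates a resolution against its own anchor policy, with no upstream call.

```go
// Sketch of anchor-governed scope and scope-local validation. The types and
// the quorum rule are illustrative assumptions, not the disclosed design.
package main

import (
	"errors"
	"fmt"
)

// Anchor is a credentialed member of the segment, not an upstream plane.
type Anchor struct {
	ID string
}

// Segment holds its own names and its own published governance policy.
type Segment struct {
	Scope   string            // e.g. "cluster-a"
	Anchors []Anchor          // nodes credentialed to govern this scope
	Quorum  int               // approvals required under local policy
	Names   map[string]string // name -> address, held locally
}

// Resolve validates a lookup against segment-local policy only: there is
// no call out to a registry above the segment.
func (s *Segment) Resolve(name string, approvals []Anchor) (string, error) {
	if len(approvals) < s.Quorum {
		return "", errors.New("insufficient anchor approvals under local policy")
	}
	addr, ok := s.Names[name]
	if !ok {
		return "", fmt.Errorf("%s not governed by scope %s", name, s.Scope)
	}
	return addr, nil
}

func main() {
	seg := &Segment{
		Scope:   "cluster-a",
		Anchors: []Anchor{{"a1"}, {"a2"}, {"a3"}},
		Quorum:  2,
		Names:   map[string]string{"payments": "10.0.0.1"},
	}
	fmt.Println(seg.Resolve("payments", seg.Anchors[:2]))
}
```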

Property three — hierarchical traversal — requires that cross-scope resolution proceed by traversing a hierarchy of anchor-governed segments, each of which validates the portion of the resolution within its scope. A consumer in one segment resolving a name in another segment traverses the path through the intervening segments; each segment's anchors validate the traversal under their local policy; no segment is required to surrender its authority to a parent layer.
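The following sketch illustrates property three under the same kind of hypothetical types: resolution walks a path of segments, and each segment validates only its own portion. The validateLocally stub stands in for the anchor-set check a real substrate would perform.

```go
// Sketch of hierarchical traversal across anchor-governed segments. The tree
// shape and the validation stub are illustrative assumptions; the structural
// point is that no segment cedes authority upward during the walk.
package main

import "fmt"

type Segment struct {
	Scope    string
	Children map[string]*Segment
	Names    map[string]string
}

// validateLocally stands in for the segment's anchor-set check; a real
// substrate would verify signatures against the segment's published policy.
func (s *Segment) validateLocally(hop string) bool { return true }

// Traverse walks a path like ["cluster-b", "payments"]; every intervening
// segment validates only its own portion of the resolution.
func Traverse(root *Segment, path []string) (string, error) {
	cur := root
	for i, hop := range path {
		if !cur.validateLocally(hop) {
			return "", fmt.Errorf("scope %s rejected hop %s", cur.Scope, hop)
		}
		if i == len(path)-1 {
			if addr, ok := cur.Names[hop]; ok {
				return addr, nil
			}
			return "", fmt.Errorf("%s not found in scope %s", hop, cur.Scope)
		}
		next, ok := cur.Children[hop]
		if !ok {
			return "", fmt.Errorf("no segment %s under scope %s", hop, cur.Scope)
		}
		cur = next
	}
	return "", fmt.Errorf("empty path")
}

func main() {
	b := &Segment{Scope: "cluster-b", Names: map[string]string{"payments": "10.1.0.1"}}
	root := &Segment{Scope: "org", Children: map[string]*Segment{"cluster-b": b}}
	fmt.Println(Traverse(root, []string{"cluster-b", "payments"}))
}
```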

Property four — scoped consensus for structural change — requires that namespace structural changes (a new segment joining, a segment splitting under load, a service migrating between segments, an anchor set rotating) be executed by the governing anchors through scoped consensus rather than propagated from a central control plane.

Property five — substrate-level resilience — requires that the failure or compromise of any single anchor set affect only the segment it governs and not propagate to the rest of the namespace, because no anchor set has authority outside its segment.

The five properties compose into a substrate in which the federation layer does not disappear; it distributes. Each scope becomes its own control plane, governed by the anchors that hold it, and cross-scope resolution becomes a hierarchical traversal of locally governed segments rather than a query against a centrally held registry. The primitive is technology-neutral with respect to the underlying transport, signature scheme, or storage model.
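A sketch of properties four and five, again with hypothetical types and a simple quorum rule standing in for whatever consensus the substrate actually specifies: a structural change executes only if the governing anchors of the affected scope reach quorum, and anchors of other scopes are never consulted, which is what confines a failure to one segment.

```go
// Sketch of scoped consensus for a structural change (here, a new segment
// joining). The event shape and quorum rule are illustrative assumptions:
// no central plane issues the change, and an anchor-set failure affects
// only the one segment that anchor set governs.
package main

import (
	"errors"
	"fmt"
)

type ChangeEvent struct {
	Kind  string // "join", "split", "migrate", "rotate-anchors"
	Scope string // the segment the change applies to
}

type Anchor struct {
	ID string
	// Approve stands in for a signed vote under the segment's policy.
	Approve func(ChangeEvent) bool
}

// ApplyScoped executes a change only if the governing anchors of the
// affected scope reach quorum; anchors of other scopes are never consulted.
func ApplyScoped(ev ChangeEvent, anchors []Anchor, quorum int) error {
	votes := 0
	for _, a := range anchors {
		if a.Approve(ev) {
			votes++
		}
	}
	if votes < quorum {
		return errors.New("scoped consensus not reached; change rejected locally")
	}
	return nil
}

func main() {
	always := func(ChangeEvent) bool { return true }
	anchors := []Anchor{{"a1", always}, {"a2", always}, {"a3", always}}
	err := ApplyScoped(ChangeEvent{Kind: "join", Scope: "cluster-c"}, anchors, 2)
	fmt.Println("join executed:", err == nil)
}
```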

4. Composition Pathway

Kubernetes composes with the AQ adaptive-indexing primitive as a domain-specialized cluster orchestrator over the anchor-governed namespace substrate. What stays at Kubernetes: the Pod scheduling model, the Service abstraction, the Endpoints and EndpointSlices controllers, the CoreDNS resolution surface within the cluster, the kube-proxy and eBPF dataplanes, the CSI/CNI/CRI extensibility, the Operator pattern, the Helm and CRD ecosystem, the service-mesh sidecars and ambient modes, and the entire commercial relationship that the cluster operator has with its workloads, its developers, and its compliance authority. Kubernetes within the cluster boundary remains exactly what it is.

What moves to AQ as substrate: the cross-cluster namespace layer. Each cluster registers itself as a scope governed by a defined set of anchor nodes — typically a quorum drawn from the cluster's control-plane membership, the cluster operator's credentialed identity, and a configurable policy set that reflects the cluster's regulatory and commercial posture. Cross-cluster service resolution becomes a hierarchical traversal: a Service in cluster A resolving a name in cluster B traverses the anchor-governed scopes that connect them, each segment validating its portion of the resolution under local policy. The traversal is implemented as an extension to CoreDNS or as a sidecar resolver in the service-mesh dataplane; the resolution result returns to the calling Pod with the same Service-abstraction semantics it would receive within its own cluster.
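As a sketch of the dispatch such a resolver could perform, the following Go fragment keeps in-cluster names on the existing CoreDNS path and routes everything else into the substrate traversal. traverseSubstrate and the cluster-b zone suffix are hypothetical stand-ins, not a disclosed interface.

```go
// Sketch of the dispatch a CoreDNS extension or sidecar resolver could
// perform: names in the local cluster zone resolve exactly as today, while
// names outside it enter the anchor-governed traversal.
package main

import (
	"context"
	"fmt"
	"net"
	"strings"
	"time"
)

const localZone = ".svc.cluster.local."

// traverseSubstrate is a hypothetical stand-in: it would walk the
// anchor-governed segments between this cluster and the owning cluster.
func traverseSubstrate(ctx context.Context, name string) ([]string, error) {
	return nil, fmt.Errorf("substrate traversal not wired in this sketch")
}

// resolve leaves in-cluster semantics untouched and routes only
// cross-cluster names through the substrate.
func resolve(ctx context.Context, name string) ([]string, error) {
	if strings.HasSuffix(name, localZone) {
		// Unchanged path: CoreDNS answers from the API server's state.
		return net.DefaultResolver.LookupHost(ctx, strings.TrimSuffix(name, "."))
	}
	return traverseSubstrate(ctx, name)
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	for _, n := range []string{
		"payments.checkout.svc.cluster.local.", // stays on today's path
		"payments.checkout.cluster-b.example.", // enters the traversal
	} {
		addrs, err := resolve(ctx, n)
		fmt.Println(n, addrs, err)
	}
}
```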

Structural changes — a cluster joining or leaving the cross-cluster namespace, a workload migrating between clusters, a namespace segment splitting under load, an anchor set rotating after a credential change — are executed by the governing anchors through scoped consensus rather than propagated from a federation control plane. KubeFed, Karmada, Submariner, and the multi-cluster service-mesh modes continue to operate as the workload-replication, traffic-routing, and policy-distribution surfaces; they no longer carry the namespace-authority load, because the substrate carries it. The Gateway API and the multi-cluster Service API integrate cleanly: cross-cluster Service references resolve through the anchor-governed substrate without changes to the developer-facing manifest model.

5. Commercial and Licensing Implication

The fitting commercial arrangement is a substrate license to the cluster operators and the managed-Kubernetes vendors. Each managed-Kubernetes provider — AWS EKS, Google GKE, Azure AKS, Red Hat OpenShift, VMware Tanzu, Rancher, and the emerging sovereign-cloud and edge providers — embeds the AQ adaptive-indexing primitive into its multi-cluster offering and licenses anchor-governed namespace participation to its customers as a property of the multi-cluster service. Pricing aligns with the existing managed-Kubernetes economic model: per-cluster, per-anchor, or per-cross-scope-resolution metering, with optional premium tiers for sovereignty-sensitive deployments, regulated verticals, and cross-provider topologies.

What the managed-Kubernetes vendor gains: a structural answer to the multi-cluster federation question that has been a recurring source of architectural friction since the original Kubernetes Federation effort; a defensible position against competing federation tools and the lock-in implications of provider-specific multi-cluster offerings; and a forward-compatible posture toward the emerging sovereign-cloud and data-residency regimes — the EU's NIS2 and Data Act, India's DPDP Act, China's MLPS, and the proliferating state-level data-localization rules in the United States — that are converging on requirements for namespace authority that is not concentrated outside the regulated jurisdiction.

What the cluster operator and the application owner gain: a cross-cluster namespace that survives the failure or compromise of any single control-plane layer, portable cross-cluster service identity that does not depend on a single provider's federation control plane, and a sovereignty model in which a cluster under regulatory pressure in one jurisdiction does not propagate that pressure to clusters elsewhere because each segment governs itself. Honest framing — the AQ primitive does not replace Kubernetes. Kubernetes remains the orchestrator, the Service abstraction, the dataplane, and the developer surface. The primitive gives Kubernetes the cross-cluster namespace substrate it needs and does not currently have. CoreDNS solved resolution inside the cluster. The substrate solves authority across them.

Invented by Nick Clark | Founding Investors: Anonymous, Devin Wilkie