Kubernetes Service Discovery Resolves Within Clusters. Cross-Cluster Namespace Is Central.
by Nick Clark | Published March 27, 2026
Kubernetes solved service discovery inside clusters through CoreDNS, label selectors, and the Service abstraction. Within a single cluster, any pod can resolve any service by name. But across clusters, namespace resolution depends on external control planes, federation layers, or manual DNS delegation. The boundary between clusters is where namespace governance breaks down, and resolving that gap requires governed, scope-local indexing rather than better federation tooling.
Kubernetes represents one of the most consequential infrastructure projects in the history of distributed computing. The Service abstraction, combined with CoreDNS and label-based selectors, gives every workload inside a cluster a stable, resolvable identity. This is a genuine engineering achievement. The gap described here is not a failure of Kubernetes; it is a structural constraint that every cluster-bounded system shares.
Inside the cluster, resolution works
Within a Kubernetes cluster, a pod that needs to reach another service queries CoreDNS. The query resolves through the cluster's internal namespace. Labels, selectors, and the Endpoints API ensure that service identity persists through pod restarts, scaling events, and rolling deployments. The namespace is coherent, self-consistent, and operationally reliable.
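This resolution path can be sketched in a few lines. The DNS naming scheme (`service.namespace.svc.<cluster-domain>`) is the standard Kubernetes convention; the service names, namespace, and endpoint table below are illustrative, not from a real cluster:

```python
# Sketch of in-cluster resolution, assuming the standard Kubernetes
# DNS naming scheme. Endpoint data here is purely illustrative.

CLUSTER_DOMAIN = "cluster.local"

# CoreDNS-style view: service FQDN -> ready pod IPs, kept current
# by the Endpoints/EndpointSlice machinery through restarts and scaling.
endpoints = {
    "checkout.payments.svc.cluster.local": ["10.0.1.4", "10.0.1.7"],
}

def fqdn(service: str, namespace: str) -> str:
    """Expand a short service name to its cluster-local FQDN."""
    return f"{service}.{namespace}.svc.{CLUSTER_DOMAIN}"

def resolve(service: str, namespace: str) -> list[str]:
    """Resolve the way a pod would: the cluster's own DNS is authoritative."""
    return endpoints.get(fqdn(service, namespace), [])

addrs = resolve("checkout", "payments")  # stable identity across pod restarts
```

The key property is that both the name and the answer live inside the cluster: nothing outside the boundary is consulted.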
The authority for this resolution is the cluster's own control plane: the API server, etcd, and the DNS infrastructure that fronts it. Inside this boundary, namespace governance is local. The cluster governs its own names.
Across clusters, the architecture changes
When workloads span multiple clusters, the resolution model breaks. A service in cluster A that needs to reach a service in cluster B cannot resolve it through CoreDNS alone. The name does not exist in cluster A's namespace.
The workarounds are well known: Kubernetes Federation (KubeFed), multi-cluster service meshes such as Istio, cluster-linking projects such as Admiralty and Liqo, and external DNS services. All of them share a common structural property: they introduce a layer above the clusters that holds cross-cluster namespace authority.
This layer is necessarily central. Whether it is a federation control plane, a global service registry, or an external DNS zone, the authority for resolving names across cluster boundaries lives outside any individual cluster. The clusters participate. They do not govern.
Why this is structural
The cross-cluster namespace problem is not a tooling gap. It is architectural. Kubernetes was designed around the assumption that a cluster is a unit of administration. Within that unit, everything works. Across units, a different authority model is needed, and every available option centralizes that authority.
KubeFed propagates resource definitions from a host cluster to member clusters. The host cluster holds the authority. Multi-cluster service meshes synchronize service catalogs across clusters through a shared control plane. External DNS delegates resolution to a zone file managed outside any cluster.
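The common shape of these mechanisms is hub-and-spoke. The sketch below is a deliberately reduced model of that authority pattern, not any project's actual API; the cluster and service names are hypothetical:

```python
# Sketch of the hub-and-spoke authority model shared by KubeFed-style
# federation: the host publishes, members only receive.

class HostCluster:
    """Holds the cross-cluster namespace authority."""
    def __init__(self):
        self.catalog = {}
        self.members = []

    def register(self, member):
        self.members.append(member)

    def publish(self, name, address):
        self.catalog[name] = address
        for m in self.members:       # propagation flows one way: downstream
            m.received[name] = address

class MemberCluster:
    """Participates, but does not govern: it cannot publish, only receive."""
    def __init__(self):
        self.received = {}

host = HostCluster()
a, b = MemberCluster(), MemberCluster()
host.register(a)
host.register(b)
host.publish("inventory.global", "cluster-b:10.8.2.9")
# A failure or policy change at the host reaches every member at once.
```

Note that the member class has no publish method at all: the asymmetry is structural, which is the article's point.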
In each case, the cross-cluster namespace is something a cluster receives from upstream. The cluster does not generate, govern, or validate it locally. A regulatory change, a commercial decision, or a control plane failure at the federation layer propagates to every participating cluster, because that layer is the single point of namespace authority.
What resolving it requires
Resolving this structurally means distributing namespace governance to the scope where it operates. Not a better federation layer. A different authority model.
In an anchor-governed index, each cluster would govern its own segment of the cross-cluster namespace through locally held policy. Resolution across clusters would traverse a hierarchy where each segment is governed by the anchors responsible for it. A service in cluster A resolving a name in cluster B would not query a central federation plane. It would traverse through anchor-governed scopes, each validating the resolution against locally held policy.
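The traversal described above can be sketched as a hierarchy of scopes, each enforcing its own policy at its own hop. This is a hypothetical model of the anchor-governed design, not an existing API; the `Scope` class, the policy shape, and all names are assumptions for illustration:

```python
# Hypothetical sketch of anchor-governed resolution: each scope holds
# its own segment and validates every traversal against local policy.

class Scope:
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy      # locally held; never pushed from above
        self.records = {}         # names this scope itself governs
        self.children = {}        # delegated sub-scopes

    def resolve(self, path, requester):
        # Local validation at every hop, with no central plane consulted.
        if not self.policy(requester):
            raise PermissionError(f"{self.name}: denied {requester}")
        head, *rest = path
        if not rest:
            return self.records[head]
        return self.children[head].resolve(rest, requester)

# Two segments of a shared hierarchy, each governed where it lives.
root = Scope("root", policy=lambda r: True)
seg_b = Scope("cluster-b", policy=lambda r: r.startswith("cluster-"))
seg_b.records["inventory"] = "10.8.2.9"
root.children["cluster-b"] = seg_b

addr = root.resolve(["cluster-b", "inventory"], requester="cluster-a")
```

A requester that cluster B's local policy rejects is refused at cluster B's hop, without any upstream authority being involved.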
Structural changes, such as a cluster joining or leaving the namespace, a service migrating between clusters, or a namespace segment splitting under load, would be executed by the governing anchors through scoped consensus rather than propagated from a central control plane.
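Scoped consensus can be sketched the same way: only the anchors governing a segment vote, and a quorum of them commits the change locally. The anchor names, the vote shape, and the simple-majority rule below are illustrative assumptions:

```python
# Sketch of a structural change executed by scoped consensus rather
# than pushed from a central control plane. Quorum rule is illustrative.

def scoped_change(anchors, votes, change, state):
    """Apply `change` to `state` iff a majority of this scope's anchors approve."""
    approvals = sum(1 for a in anchors if votes.get(a))
    if approvals * 2 > len(anchors):   # simple majority within the scope
        state.update(change)
        return True
    return False   # rejected locally; nothing propagates beyond this scope

segment = {"members": {"cluster-a", "cluster-b"}}
anchors = ["anchor-1", "anchor-2", "anchor-3"]

# Hypothetical join: cluster-c is admitted because two of the three
# governing anchors approve. No outside authority is consulted.
ok = scoped_change(
    anchors,
    votes={"anchor-1": True, "anchor-2": True, "anchor-3": False},
    change={"members": {"cluster-a", "cluster-b", "cluster-c"}},
    state=segment,
)
```

The decision and its blast radius are both confined to the segment whose anchors voted.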
The federation layer does not disappear. It distributes. Each scope becomes its own control plane, governed by the nodes that hold it. A cluster under regulatory pressure in one jurisdiction does not propagate that pressure to clusters elsewhere because its anchors govern its scope and the adjacent scope governs itself.
The remaining gap
Kubernetes solved the hard problem of service identity inside a cluster. The remaining gap is in the cross-cluster namespace layer: how services find each other across administrative boundaries, how that resolution persists through structural changes, and who governs the namespace that makes it possible. That layer is still centrally defined. It is the last dependency.