Gateway API v1.3 [2026] Multi-Cluster Traffic Cheat Sheet
Bottom Line
Treat Gateway API v1.3 as the stable routing layer and Multi-Cluster Services as the endpoint distribution layer. For multi-cluster traffic, the key decision is whether you want cluster-local backends with Service or ClusterSet-wide backends with ServiceImport.
Key Takeaways
- v1.3.0 was released on April 24, 2025 and works on Kubernetes 1.26+.
- Percentage-based request mirroring is the main new Standard-channel feature in v1.3.
- ServiceImport backends enable ClusterSet-wide traffic, but the Gateway API interaction remains Experimental.
- Use Service for local endpoints and ServiceImport for cross-cluster endpoints; do not mix them accidentally.
- As of the official v1.3 announcement, 4 implementations were conformant for experimental-channel features.
The important facts for Gateway API v1.3 are concrete: v1.3.0 was released on April 24, 2025, it runs on Kubernetes 1.26+, and it adds percentage-based request mirroring to the Standard channel. For multi-cluster traffic, the practical pattern is to keep routing logic in Gateway and HTTPRoute, then choose whether backends stay cluster-local with Service or expand cluster-wide with ServiceImport.
- v1.3.0 shipped on April 24, 2025.
- Kubernetes 1.26+ is enough to run the API bundle.
- RequestMirror can mirror by percent or fraction.
- ServiceImport means ClusterSet-wide endpoints; Service means local endpoints.
- Experimental kinds introduced in v1.3 moved under gateway.networking.x-k8s.io.
What v1.3 Shipped
Bottom Line
Gateway API v1.3 is stable enough for production routing, but multi-cluster traffic still depends on whether your controller supports the Experimental interaction with Multi-Cluster Services. Design around controller conformance first, then write route YAML.
Confirmed facts from the official release
- v1.3.0 became generally available on April 24, 2025.
- The main new Standard-channel feature is percentage-based request mirroring.
- The new experimental features called out in the official announcement were CORS filters, XListenerSet, and retry budgets via XBackendTrafficPolicy.
- As of the official announcement, 4 implementations were already conformant for v1.3 experimental-channel features.
Resource and channel map
| Resource or feature | Status in v1.3 | Why it matters for multi-cluster traffic |
|---|---|---|
| Gateway | Standard | Owns listeners, addresses, and attachment policy. |
| HTTPRoute | Standard | Primary L7 routing object for east-west and north-south flows. |
| GRPCRoute | Standard | Useful when you want gRPC-native routing instead of generic HTTP matching. |
| ReferenceGrant | Standard | Required for safe cross-namespace backend references. |
| ServiceImport as backend | Experimental interaction | Lets a route target ClusterSet-wide endpoints instead of local ones. |
| XListenerSet | Experimental | Relevant only if your platform delegates listener ownership across teams. |
| XBackendTrafficPolicy | Experimental | Relevant when retry budgets affect blast radius across clusters. |
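GRPCRoute from the table above has no example elsewhere in this sheet, so here is a minimal sketch; it assumes a Gateway named edge in infra (as in the later examples) and a hypothetical backend Service named orders serving gRPC on 9090:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
  name: orders
  namespace: app
spec:
  parentRefs:
  - name: edge
    namespace: infra
  rules:
  - matches:
    # Match on the gRPC service name instead of generic HTTP paths
    - method:
        service: orders.v1.OrderService
    backendRefs:
    - name: orders
      port: 9090
```

The method match is the gRPC-native equivalent of a PathPrefix match, which is the main reason to prefer GRPCRoute over HTTPRoute for gRPC backends.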
Command Reference
Install the standard v1.3 bundle
kubectl apply --server-side -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml

Install the experimental v1.3 bundle
kubectl apply --server-side -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/experimental-install.yaml

Verify what the cluster now serves
kubectl get crd | grep gateway.networking
kubectl api-resources --api-group=gateway.networking.k8s.io
kubectl api-resources | grep multicluster

Inventory the routing objects
kubectl get gatewayclass,gateway,httproute,grpcroute,referencegrant -A

Inspect status and attachment health
kubectl get gateway edge -n infra -o yaml
kubectl describe httproute web -n app
kubectl wait --for=condition=Programmed gateways.gateway.networking.k8s.io/edge -n infra --timeout=120s

Check listener-level attachment counts
kubectl get gateway edge -n infra \
  -o jsonpath='{range .status.listeners[*]}{.name}{"\t"}{.attachedRoutes}{"\n"}{end}'

Check Multi-Cluster Services objects
kubectl get serviceexport,serviceimport -A
kubectl get endpointslice -A | grep clusterset

See whether a route was actually accepted
kubectl get httproute web -n app \
  -o jsonpath='{range .status.parents[*]}{.parentRef.name}{"\t"}{range .conditions[*]}{.type}{"="}{.status}{" "}{end}{"\n"}{end}'

Core Config Patterns
1. Smallest useful Gateway plus HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: edge
  namespace: infra
spec:
  gatewayClassName: example
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: Same
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web
  namespace: infra
spec:
  parentRefs:
  - name: edge
    sectionName: http
  hostnames:
  - app.example.com
  rules:
  - backendRefs:
    - name: web
      port: 8080

2. Cross-namespace backend access with ReferenceGrant
If application teams own routes and platform teams own backends in another namespace, you need ReferenceGrant. Without it, attachment may look valid at a glance but fail in status.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: store
  namespace: app
spec:
  parentRefs:
  - name: edge
    namespace: infra
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /store
    backendRefs:
    - name: store
      namespace: backend
      port: 8080
---
apiVersion: gateway.networking.k8s.io/v1
kind: ReferenceGrant
metadata:
  name: allow-store
  namespace: backend
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: app
  to:
  - group: ""
    kind: Service

3. Keep manifests readable
- Prefer one listener per intent: public HTTP, public HTTPS, internal gRPC, and mesh-adjacent routes should not be mixed casually.
- Use explicit sectionName in parentRefs so route attachment stays readable after listener growth.
- Normalize YAML before reviews; a quick pass through a consistent formatter keeps policy and route diffs tighter.
Multi-Cluster Traffic Patterns
The mental model
- Service means endpoints from the local cluster only.
- ServiceImport means endpoints from across the ClusterSet.
- ServiceExport is how a cluster contributes a local service into the ClusterSet-wide view.
- DNS expectations also change: local services resolve under cluster.local, while multi-cluster services resolve under clusterset.local.
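The DNS split above can be sketched with hypothetical names (service store in namespace app; resolving the clusterset.local zone additionally assumes an MCS-aware DNS provider is installed):

```shell
# Build the two FQDNs a client would resolve (names are illustrative)
ns=app
svc=store
# Local endpoints only:
echo "${svc}.${ns}.svc.cluster.local"
# ClusterSet-wide endpoints, aggregated from exporting clusters:
echo "${svc}.${ns}.svc.clusterset.local"
```

A client switching from the cluster.local name to the clusterset.local name is effectively making the same Service-versus-ServiceImport choice that a route makes in its backendRefs.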
Export a service into the ClusterSet
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: store
  namespace: app

Route to a ClusterSet-wide backend with ServiceImport
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: store-global
  namespace: app
spec:
  parentRefs:
  - name: edge
    namespace: infra
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /store
    backendRefs:
    - group: multicluster.x-k8s.io
      kind: ServiceImport
      name: store
      port: 8080

Blend local and global traffic during rollout
This pattern is one of the most useful operational tricks in the official GEP-1748 model: keep most traffic local while bleeding a small slice to a ClusterSet-wide backend.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: store-hybrid
  namespace: app
spec:
  parentRefs:
  - name: edge
    namespace: infra
  rules:
  - backendRefs:
    - kind: Service
      name: store
      port: 8080
      weight: 90
    - group: multicluster.x-k8s.io
      kind: ServiceImport
      name: store-global
      port: 8080
      weight: 10

Route by region or cluster slice
- Create separate ServiceImport objects such as store-west and store-east when geography matters.
- Use path or hostname matching in HTTPRoute to steer traffic to those imported services.
- Keep the fallback rule pointed at the broad store import so unmatched requests still land somewhere predictable.
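A sketch of that layout, assuming store-west and store-east have already been exported and imported (all names and paths hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: store-regions
  namespace: app
spec:
  parentRefs:
  - name: edge
    namespace: infra
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /store/west
    backendRefs:
    - group: multicluster.x-k8s.io
      kind: ServiceImport
      name: store-west
      port: 8080
  - matches:
    - path:
        type: PathPrefix
        value: /store/east
    backendRefs:
    - group: multicluster.x-k8s.io
      kind: ServiceImport
      name: store-east
      port: 8080
  # Fallback: any other /store traffic lands on the broad import
  - matches:
    - path:
        type: PathPrefix
        value: /store
    backendRefs:
    - group: multicluster.x-k8s.io
      kind: ServiceImport
      name: store
      port: 8080
```

Rule order here is defensive: the more specific prefixes come first, and the broad /store rule stays last so unmatched requests still have a predictable destination.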
Advanced Usage and Debugging
Use RequestMirror for low-risk cross-cluster validation
The official v1.3 release added partial mirroring to the Standard channel. That matters for multi-cluster work because you can validate a remote backend without sending live responses from it.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: store-mirror
  namespace: app
spec:
  parentRefs:
  - name: edge
    namespace: infra
  hostnames:
  - store.example.com
  rules:
  - backendRefs:
    - name: store
      port: 8080
    filters:
    - type: RequestMirror
      requestMirror:
        backendRef:
          group: multicluster.x-k8s.io
          kind: ServiceImport
          name: store-canary
          port: 8080
        percent: 10

What to check when traffic does not move
- Confirm the GatewayClass controller is actually installed and reconciles your chosen class name.
- Check Gateway and HTTPRoute status conditions for Accepted, Programmed, and ResolvedRefs.
- Verify the target ServiceImport exists in the same namespace the route expects.
- Make sure your controller supports the exact experimental interaction you are using; conformance on core routing is not the same as conformance on ServiceImport backends.
- Inspect EndpointSlice objects when traffic is black-holing; imported services still need healthy exported endpoints behind them.
Implementation-minded checklist
- Default to the Standard bundle unless you can name the experimental feature you need.
- Keep multi-cluster rollout logic in routes, not in duplicated listener sprawl.
- Prefer weighted migration over cutover when moving from local Service to ServiceImport.
- Document which namespaces may create routes, which may create grants, and which own gateways.
- Treat controller conformance tables as part of design input, not marketing material.
Frequently Asked Questions
How do I install Gateway API v1.3 CRDs for Kubernetes?
Run kubectl apply --server-side -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml for the stable bundle. If you need experimental resources or fields, install experimental-install.yaml instead and confirm your controller supports those features.

Can Gateway API v1.3 route traffic to multiple Kubernetes clusters?
Yes, by targeting ServiceImport from the Multi-Cluster Services API, though that interaction is still Experimental in the official Gateway API design.

What is the difference between Service and ServiceImport in Gateway API routes?
A Service backend targets endpoints from the local cluster only. A ServiceImport backend targets endpoints aggregated across the ClusterSet, so it is the right choice when traffic should survive or span cluster boundaries.

What changed in Gateway API v1.3 that matters most for production traffic?
Request mirroring in the Standard channel can now mirror by percent or fraction, which is far safer than mirroring every request during canary and multi-cluster validation.

Do I need ReferenceGrant for multi-cluster Gateway API routing?
Only when a route references a backend in another namespace. ReferenceGrant gates cross-namespace backend references regardless of backend kind; same-namespace references to a Service or ServiceImport need no grant.