[Update] Kubernetes v1.35: New AI Conformance for Agentic Ops
Dillip Chowdary
March 24, 2026 • 7 min read
At KubeCon + CloudNativeCon Europe 2026, the CNCF officially unveiled Kubernetes v1.35, featuring the landmark Kubernetes AI Conformance program. This update introduces Kubernetes AI Requirements (KARs), a set of standards designed to simplify the orchestration of hardware for "agentic workflows."
The Shift to Agentic Infrastructure
At the core of v1.35 is the recognition that AI workloads are no longer limited to static inference. The rise of autonomous agents demands infrastructure that can dynamically reallocate GPU, NPU, and high-bandwidth memory (HBM) resources at machine speed. The new conformance program ensures that cloud providers expose a consistent interface for these intensive workloads.
Key technical additions in v1.35 include Dynamic Resource Allocation (DRA) enhancements and a new Agentic Scheduler that prioritizes low-latency "thinking" loops over throughput-heavy training tasks.
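To make the DRA enhancement concrete, the sketch below shows how a pod might request a dedicated accelerator through a ResourceClaimTemplate, following the shape of the existing `resource.k8s.io` API. The device class name (`gpu.example.com`) and image are illustrative placeholders, not part of the v1.35 release notes.

```yaml
# Illustrative DRA sketch: a claim template plus a pod that consumes it.
# The device class "gpu.example.com" is a hypothetical example.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: agent-worker
spec:
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
  containers:
  - name: agent
    image: registry.example.com/agent:latest
    resources:
      claims:
      - name: gpu   # the container is granted the device the claim allocates
```

The key difference from classic device plugins is that the claim is a first-class API object the scheduler can reason about, rather than an opaque extended resource count.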
Key v1.35 Features
- Standardized Device Plugins: Unified interface for Blackwell and Rubin GPU architectures.
- Topology-Aware Scheduling: Optimizes pod placement based on NVLink and PCIe 6.0 affinity.
- AI Observability Hooks: Native metrics for LLM token latency and GPU utilization.
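As a rough sketch of what topology-aware placement looks like from the user's side, the manifest below steers an inference pod toward nodes advertising an NVLink interconnect via node labels. The label key `gpu.example.com/interconnect` is hypothetical; a conformant platform would publish its own topology labels.

```yaml
# Hypothetical sketch: pinning a latency-sensitive pod to NVLink-connected
# nodes using standard node affinity. Label keys here are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: llm-inference
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu.example.com/interconnect
            operator: In
            values: ["nvlink"]
  containers:
  - name: server
    image: registry.example.com/llm-server:latest
```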
Bridging the Community Gap
KubeCon 2026 has shown that the cloud native community is now an AI community. With 19.9 million developers globally, the CNCF is focused on preventing the fragmentation that plagued early container orchestration. By establishing KARs, it is providing a "golden path" for enterprises to deploy Multi-Agent Systems without vendor lock-in.
Future Outlook: Towards Kubernetes v1.40
The roadmap for 2026 indicates a deeper integration of Model Context Protocol (MCP) directly into the Kubernetes control plane. This would allow the cluster itself to act as a reasoning engine, managing its own health and scaling through native AI agents. v1.35 is the foundational step towards this "Self-Healing AI Infrastructure."