ASUS Unveils Vera Rubin Servers for Gigawatt-Class AI
At the 2026 Tech Infrastructure Summit, ASUS pulled the curtain back on its most ambitious server lineup to date: the ESC-VR721-E3 series. These systems are among the first in the world optimized for the NVIDIA Vera Rubin NVL72 rack-scale architecture.
Scaling to Trillion-Parameter MoE Models
The Vera Rubin generation represents a massive leap in interconnect bandwidth. ASUS has engineered the ESC-VR721-E3 to handle the immense thermal and power requirements of the NVL72 platform, which can scale to support trillion-parameter Mixture-of-Experts (MoE) models with unprecedented efficiency.
The server features a proprietary liquid-cooling distribution manifold that ensures consistent performance across all 72 GPUs in the rack. By minimizing thermal throttling, ASUS claims a 15% increase in sustained compute density compared to standard liquid-cooled designs.
Ready for Gigawatt-Class Data Centers
As big tech companies begin planning "Gigawatt-class" AI factories, ASUS is positioning itself as the hardware partner of choice for modular scaling. The new servers include integrated AI-driven power management that can shift loads in real time based on rack-level power availability, a critical feature for data centers operating on renewable-heavy grids.
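The load-shifting idea described above can be illustrated with a small sketch. This is not ASUS's actual power-management software; the `Workload` class, `shift_loads` function, and the floor/headroom scheme are hypothetical, chosen only to show how a rack controller might divide a fluctuating power budget among jobs.

```python
# Hypothetical sketch of rack-level power-aware load shifting.
# All names and the allocation policy are illustrative assumptions,
# not part of any ASUS or NVIDIA interface.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    min_watts: float   # floor below which the job must be paused
    max_watts: float   # cap at full throughput

def shift_loads(workloads, available_watts):
    """Assign per-workload power caps for the current rack budget.

    Grants each workload its floor in priority order (pausing jobs the
    budget cannot cover), then splits the surplus proportionally to each
    active workload's remaining headroom.
    """
    caps = {}
    budget = available_watts
    active = []
    for w in workloads:                # earlier in the list = higher priority
        if budget >= w.min_watts:
            caps[w.name] = w.min_watts
            budget -= w.min_watts
            active.append(w)
        else:
            caps[w.name] = 0.0         # paused: budget can't cover its floor
    headroom = sum(w.max_watts - w.min_watts for w in active)
    if headroom > 0:
        share = min(1.0, budget / headroom)
        for w in active:
            caps[w.name] += share * (w.max_watts - w.min_watts)
    return caps
```

For example, with a 1,000 W rack budget and two jobs (`Workload("train", 400, 700)` and `Workload("infer", 200, 500)`), the floors consume 600 W and the remaining 400 W is split across 600 W of headroom, yielding caps of 600 W and 400 W. If the grid drops the budget to 500 W, the lower-priority job is paused entirely, which mirrors the real-time shedding behavior the article describes.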
The launch of the ASUS Vera Rubin lineup signals that the era of "Super-Inference" is here, where the bottleneck is no longer the silicon itself, but the ability to feed and cool it at a massive scale.