Vertical Power Delivery: How Epic Microsystems is Solving the 100kW+ AI Rack Bottleneck

As the AI industry races toward megawatt-class data centers, the fundamental limitation is no longer just compute silicon—it is power. On March 25, 2026, Epic Microsystems emerged from stealth with a $21 million Series A funding round led by Seligman Ventures and Intel Capital. Their mission is to commercialize a vertical power delivery architecture that addresses the extreme energy demands of 100kW+ AI racks.

Traditional DC-DC conversion methods are reaching their physical limits. As GPUs like NVIDIA's Blackwell draw over 1,000 watts each, the power delivery network (PDN) on the board must handle hundreds of amps. Epic Microsystems' approach promises to eliminate the "thermal wall" by reimagining the geometry of how electricity reaches the processor.

The Failure of Lateral Power Delivery

Most modern servers use lateral power delivery, where voltage regulator modules (VRMs) are placed around the perimeter of the CPU or GPU. This requires electricity to travel across several inches of copper traces to reach the core. At the high currents required by AI accelerators, this "path resistance" leads to significant voltage drops and heat generation via I²R losses.
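The scale of these I²R losses is easy to check with back-of-the-envelope arithmetic. The current and trace-resistance values below are illustrative assumptions, not measured figures from any vendor:

```python
# Illustrative I^2*R loss for a lateral power delivery path.
# Both values are assumptions for illustration, not measured data.
current_a = 1000.0           # total core current for a ~1 kW accelerator at ~1 V
path_resistance_ohm = 2e-4   # 0.2 milliohm of board copper (assumed)

v_drop = current_a * path_resistance_ohm       # IR voltage drop along the path
p_loss = current_a ** 2 * path_resistance_ohm  # I^2*R heat dissipated in the board

print(f"Voltage drop: {v_drop * 1000:.0f} mV")  # 200 mV lost on a ~1 V rail
print(f"Heat in copper: {p_loss:.0f} W")        # 200 W of waste heat in traces
```

Even a fraction of a milliohm costs hundreds of watts at kiloamp currents, which is why shortening the path matters more than incrementally better copper.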

Furthermore, lateral placement consumes valuable PCB real estate, forcing a trade-off between power phases and HBM (High Bandwidth Memory) routing. As processors grow in size and pin count, the "power shadow" cast by these perimeter components limits the maximum achievable compute density. Epic Microsystems argues that the only way forward is to go vertical.

By moving the power stages directly beneath the processor, vertical power delivery reduces the transmission distance from inches to millimeters. This minimizes parasitic inductance and resistance, allowing for more stable transient response. This stability is critical for AI workloads that exhibit rapid, high-magnitude swings in power consumption.
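The transient benefit follows from the basic inductor relation v = L · di/dt: the voltage excursion during a load step is proportional to the parasitic inductance of the path. The inductance and slew-rate numbers below are assumptions chosen to illustrate the order of magnitude:

```python
# Voltage deviation from a load step: v = L * di/dt.
# Inductance and slew-rate values are illustrative assumptions.
di_dt = 500e6  # 500 A/us load step, the kind of burst AI training can produce (assumed)

for label, l_parasitic in [("lateral, 10 nH", 10e-9), ("vertical, 0.5 nH", 0.5e-9)]:
    v_spike = l_parasitic * di_dt
    print(f"{label}: {v_spike * 1000:.0f} mV transient on a ~1 V rail")
```

Under these assumptions the lateral path sees a multi-volt excursion on a one-volt rail, while the vertical path keeps the deviation small enough for decoupling capacitance to absorb.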

Hybrid Switched-Capacitor (HSC) Architecture

At the heart of Epic’s innovation is the Hybrid Switched-Capacitor (HSC) architecture. Unlike traditional buck converters that rely on bulky magnetic inductors for energy storage, HSC uses a network of high-performance capacitors and optimized silicon. This inductor-free design is the key to achieving the extreme power density required for 100kW racks.

Inductors are the "height bottleneck" in power design. They are large, heavy, and difficult to cool. By replacing them with switched capacitors, Epic has significantly reduced the z-height of the power stages. This allows the power delivery unit to be thin enough to fit between the processor and the baseboard or directly onto the interposer.

The HSC design also offers superior thermal efficiency. Because it avoids the core losses inherent in magnetic inductors, it generates significantly less heat. In a high-density rack where cooling capacity is the primary constraint, every watt of conversion loss saved is a watt that can be used for actual inference or training compute.

Epic claims that their HSC modules can achieve over 95% efficiency in 48V-to-1V conversion at the point of load, compared with the 88-91% typical of legacy multi-phase buck systems. For a 100kW rack, even a four-percentage-point gain translates to roughly 4,000 watts of heat that no longer needs to be removed by the liquid cooling system.
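The 4,000-watt figure can be sanity-checked with simple arithmetic. The sketch below treats the 100kW rack as the input power to the point-of-load stage and takes the best case of the legacy range; it is a simplification, not Epic's published methodology:

```python
# Heat generated by the point-of-load conversion stage for 100 kW of input power.
# Simplified model: loss = (1 - efficiency) * input power.
p_in_kw = 100.0

loss_legacy_kw = p_in_kw * (1 - 0.91)  # best case of the 88-91% legacy range
loss_hsc_kw = p_in_kw * (1 - 0.95)     # Epic's claimed HSC efficiency

print(f"Legacy buck loss: {loss_legacy_kw:.0f} kW")            # 9 kW of heat
print(f"HSC loss: {loss_hsc_kw:.0f} kW")                       # 5 kW of heat
print(f"Cooling load saved: {loss_legacy_kw - loss_hsc_kw:.0f} kW")  # 4 kW
```

Against the worse end of the legacy range (88%), the savings would be closer to 7kW, so the 4,000-watt figure is a conservative bound.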


Solving the 100kW to 1MW Rack Transition

The industry is currently moving from 20kW racks to 120kW designs like the NVL72. However, the roadmap for 2027 and beyond points toward 250kW to 1MW per rack. At these levels, traditional busbar and power shelf designs become physically unmanageable due to the sheer volume of copper required.

Epic Microsystems’ vertical architecture is designed for this transition. By optimizing point-of-load (PoL) delivery, they enable the use of higher-voltage distribution (e.g., 400V DC) further into the rack. This reduces the current in the main rack distribution, allowing for thinner cables and more efficient liquid cooling manifolds.
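The case for higher-voltage distribution is Ohm's law: for a fixed power, current scales inversely with voltage, and copper loss scales with the square of current. A short sketch makes the comparison concrete:

```python
# Rack distribution current for 100 kW at two candidate bus voltages.
p_w = 100_000.0

currents = {}
for v_bus in (48.0, 400.0):
    currents[v_bus] = p_w / v_bus  # I = P / V
    print(f"{v_bus:.0f} V bus: {currents[v_bus]:,.0f} A")

# Copper loss scales with I^2, so the ratio of busbar losses
# (for equal resistance) is the square of the current ratio.
ratio = (currents[48.0] / currents[400.0]) ** 2
print(f"Relative copper loss, 48 V vs 400 V: ~{ratio:.0f}x")
```

At 48V a 100kW rack demands over 2,000A of distribution current; at 400V it needs only 250A, which is what makes thinner cabling and denser manifolds feasible.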

The compact footprint of the HSC modules also enables rack densification. Hyperscalers can pack more GPUs into the same physical volume without hitting the "power wall." This improves the performance per square foot of the data center, which is a critical metric as urban power grids become increasingly constrained.

Strategic backing from Intel Capital suggests that this technology may be integrated into future Falcon Shores or Gaudi platforms. By vertically integrating power delivery with the compute silicon, chipmakers can offer "all-in-one" packages that simplify the system-level design for server OEMs and cloud providers.

The Role of GaN and SiC

While Epic's primary innovation is the HSC topology, they are also leveraging advancements in Wide Bandgap (WBG) semiconductors. Specifically, the use of Gallium Nitride (GaN) and Silicon Carbide (SiC) allows for much higher switching frequencies than traditional silicon MOSFETs.

Higher switching frequencies allow for even smaller passive components (capacitors). This creates a virtuous cycle of miniaturization and efficiency. GaN devices also have lower gate charge and output capacitance, which reduces switching losses—a major contributor to heat in high-current AI power stages.
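The miniaturization argument can be seen in a first-order model: if a capacitor must supply a load current I each switching cycle while holding ripple to ΔV, the required capacitance is roughly C = I / (f · ΔV), so doubling the frequency halves the capacitor. The per-phase current and ripple budget below are assumptions for illustration:

```python
# First-order model of flying-capacitor size vs switching frequency:
# C = I / (f * dV). All numbers are illustrative assumptions.
i_load = 25.0   # amps handled by one phase (assumed)
dv = 0.05       # 50 mV allowed ripple (assumed)

for f_hz in (500e3, 2e6):  # silicon MOSFET vs GaN switching frequency (assumed)
    c_farads = i_load / (f_hz * dv)
    print(f"{f_hz / 1e6:.1f} MHz -> {c_farads * 1e6:.0f} uF per phase")
```

Under these assumptions, moving from 500kHz to 2MHz cuts the required capacitance by 4x, which is the virtuous cycle the paragraph describes.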

Epic’s control logic is optimized to manage these high-speed transitions. The system uses a digital-twin-based monitoring loop that adjusts the switching frequency and capacitor phase-shifting in real time to match the instantaneous GPU load. This ensures maximum efficiency even during the idle periods between large model training batches.
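Epic has not published its control algorithm, so the sketch below is purely hypothetical: a minimal load-proportional frequency scheduler illustrating the general idea of scaling switching frequency with load to avoid unnecessary switching losses at idle. The function name, bounds, and thresholds are all invented:

```python
# Hypothetical sketch of a load-adaptive switching-frequency scheduler.
# This is NOT Epic's algorithm; all names and values are invented
# to illustrate the concept of load-proportional frequency control.
def select_frequency_hz(load_amps: float,
                        f_min: float = 200e3,   # floor at light load (assumed)
                        f_max: float = 2e6,     # ceiling at full load (assumed)
                        i_max: float = 1000.0) -> float:
    """Scale switching frequency linearly with load so that light loads
    avoid switching losses and heavy loads get fast transient response."""
    fraction = min(max(load_amps / i_max, 0.0), 1.0)  # clamp to [0, 1]
    return f_min + fraction * (f_max - f_min)

print(select_frequency_hz(0.0))     # idle between training batches: 200 kHz floor
print(select_frequency_hz(1000.0))  # full training load: 2 MHz ceiling
```

A production controller would also account for thermal headroom and phase-shedding, but even this toy version shows why a fixed-frequency converter wastes energy during the bursty duty cycles of training workloads.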

Conclusion: The Next Frontier of AI Scaling

The success of Epic Microsystems highlights a broader trend: the "boring" parts of the data center—power, cooling, and interconnects—are becoming the most critical innovation hubs. As we approach the limits of Moore's Law, gains in AI performance will increasingly come from system-level optimization.

Vertical power delivery is not just an incremental improvement; it is a necessary paradigm shift. By moving the power stage directly beneath the silicon, Epic is clearing the path for the next generation of trillion-parameter models. With a commercial rollout targeted for late 2027, the race to power the 1MW rack is officially on.