White House Drafts FDA-Style Vetting for AI Models
On May 8, 2026, a draft of a new Executive Order leaked from the White House that could fundamentally change how AI companies develop and release "frontier" models. The order proposes a mandatory vetting process, modeled on FDA clinical trials, for software that meets specific compute or capability thresholds.
Mandatory Safety Audits
Under the proposed framework, developers of frontier models (those exceeding 10^27 FLOPs of training compute) would be required to submit their models for independent safety audits before they can be deployed to the general public. These audits would focus on risks related to biological synthesis, autonomous cyber-offense, and Large-Scale Influence Operations (LSIO).
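To give a sense of what the 10^27 FLOP threshold means in practice, a common rule of thumb (not part of the draft order) estimates total training compute as roughly 6 × parameters × training tokens. A minimal sketch under that assumption, with hypothetical model sizes:

```python
# Rough training-compute estimate using the common C ≈ 6 * N * D heuristic
# (N = parameter count, D = training tokens). The threshold value is the
# one reported from the draft order; the heuristic itself is an assumption.
THRESHOLD_FLOPS = 1e27

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs via the 6ND rule of thumb."""
    return 6 * params * tokens

def exceeds_threshold(params: float, tokens: float) -> bool:
    """Would a training run of this size cross the draft order's line?"""
    return training_flops(params, tokens) >= THRESHOLD_FLOPS

# Hypothetical example: a 2-trillion-parameter model on 100T tokens.
print(exceeds_threshold(2e12, 1e14))  # 6 * 2e12 * 1e14 = 1.2e27 → True
```

Under this heuristic, only the very largest training runs would fall inside the audit regime, which is consistent with the order's focus on "frontier" models.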
The "FDA-style" approach implies that models won't just need to be safe; they will need to be proven safe through rigorous, standardized testing. This marks a break with the voluntary commitments that have characterized AI safety for the past three years.
Industry and Innovation Impact
While the administration argues this is necessary for national security, critics in the tech industry warn that it could stifle innovation and cede leadership to international rivals who may not enforce similar constraints. Open-source advocates are particularly concerned that "FDA-style" vetting would be prohibitively expensive for decentralized projects, effectively creating a regulatory moat for the largest incumbents.
The White House is expected to finalize the order by late summer, setting the stage for a major legal and political battle over the future of sovereign AI regulation.