Security May 12, 2026

CAISI Pacts with DeepMind, Microsoft, & xAI for Model Review

Author

Dillip Chowdary

Founder & AI Researcher

The **Center for AI Standards and Innovation (CAISI)**, the newly formed US regulatory body for artificial intelligence, has signed historic "pre-release review" agreements with **Google DeepMind**, **Microsoft**, and **xAI**. These deals establish a formal protocol for government red-teaming of frontier AI models before they are made available to the public, a direct response to the recent capabilities demonstrated by Anthropic’s Mythos.

The 30-Day Vetting Window

Under the terms of the agreement, the participating labs will grant CAISI’s specialized security units access to their upcoming flagship models (such as DeepMind’s **Gemini 4** and xAI’s **Grok 4**) at least 30 days prior to their planned launch. During this period, federal researchers will evaluate the models for "dual-use" risks, specifically focusing on autonomous cyberattack capabilities, chemical/biological weapon design instructions, and potential for disrupting critical financial systems.

Formal Verification & Guardrail Audits

Unlike previous voluntary commitments, these pacts include requirements for **formal verification** of safety guardrails. Labs must provide documentation showing the mathematical proofs that their models cannot be induced to generate high-risk content via sophisticated prompt injection or jailbreaking techniques. CAISI has been granted the authority to request "mitigation adjustments" if a model fails these tests, effectively creating a sovereign-level kill switch for unsafe AI capabilities.

Geopolitical Resilience

The move is also a strategic play to maintain US leadership in "safe" AI. By creating a standardized, high-bar vetting process on US soil, the government aims to establish a global benchmark for AI safety that other democratic nations can adopt. This helps prevent a "race to the bottom" where labs might sacrifice safety for launch speed in a competitive global market.

As AI transitions from software to infrastructure, the era of unvetted frontier models is over. The CAISI pacts mark the formalization of the "Trust but Verify" era in the AI industry.

🚀 Tech News Delivered