Google, Microsoft, xAI agree to government review of new AI models
Google DeepMind, Microsoft, and xAI have agreed to allow the US government to review new AI models before public release, expanding a program run by the Commerce Department's Center for AI Standards and Innovation (CAISI). The center, which began evaluating models from OpenAI and Anthropic in 2024 under its previous name, the US AI Safety Institute, has completed 40 reviews to date. Both OpenAI and Anthropic have renegotiated their partnerships with CAISI to align with current administration priorities. The move formalizes pre-deployment government oversight of frontier AI systems.
TL;DR
- Google DeepMind, Microsoft, and xAI now participate in pre-deployment government review of new AI models
- CAISI has completed 40 model evaluations since starting with OpenAI and Anthropic in 2024
- OpenAI and Anthropic renegotiated existing partnerships to align with current policy priorities
- Program focuses on assessing frontier AI capabilities through targeted research and evaluations
Why it matters
This expansion signals a shift toward formalized government oversight of frontier AI development before models reach the public. The program creates a structured checkpoint in the AI release cycle, giving regulators visibility into capabilities and potential risks at major labs. It reflects growing consensus among leading AI companies that some form of pre-deployment review is acceptable or inevitable.
Business relevance
For AI companies, pre-deployment review is becoming de facto standard practice rather than a voluntary initiative. This could affect product roadmaps and release timelines, as companies must now build government evaluation cycles into their release planning. For downstream users and enterprises, the program may offer some assurance about frontier model safety, though what CAISI evaluates and how its findings are used remain unclear.
Key implications
- Pre-deployment government review is becoming normalized practice among leading AI labs, potentially setting expectations for future entrants
- The program's scope and criteria will likely influence how companies design, test, and stage releases of frontier models
- Transparency about CAISI's evaluation methods and findings will be critical to determining whether this is meaningful oversight or procedural theater
What to watch
Monitor whether CAISI publishes detailed evaluation criteria and findings, as this will signal how substantive the review process is. Watch for any delays or rejections of model releases, which would indicate the program has real enforcement power. Also track whether other major AI labs (Meta, Mistral, others) join the program, as this will show whether it becomes a true industry standard.