US government agency to safety test frontier AI models before release
Summary
The US government's Center for AI Standards and Innovation (CAISI, a division of the Department of Commerce) has signed agreements with Google DeepMind, Microsoft, and xAI to test advanced AI models before they are released publicly. This marks a shift toward proactive security testing: the government evaluates frontier AI (cutting-edge AI systems with new capabilities) for safety risks and provides feedback on improvements before deployment. The agreements join similar arrangements already in place with Anthropic and OpenAI.
Classification
Affected Vendors
Related Issues
Original source: https://www.csoonline.com/article/4168135/us-government-agency-to-safety-test-frontier-ai-models-before-release-2.html
First tracked: May 7, 2026 at 02:00 AM
Classified by LLM (prompt v3) · confidence: 92%