US Government to Get Early Access to AI Models for Security Checks
Translated from English, summarized and contextualized by DistantNews.
TLDR
- Microsoft, Google, and xAI will provide early access to new AI models for US government national security testing.
- The agreement aims to evaluate AI models before deployment and assess their capabilities and security risks.
- This initiative addresses growing concerns in Washington over the national security implications of advanced AI systems.
In a significant move to bolster national security, leading technology firms Microsoft, Google, and Elon Musk's xAI have agreed to grant the US government early access to their cutting-edge artificial intelligence models. This proactive measure, detailed by the Center for AI Standards and Innovation (CAISI) at the Department of Commerce, allows for pre-deployment evaluation and rigorous research into the security risks and capabilities of these powerful AI systems.
The agreement fulfills a commitment made under the Trump administration in July 2025, emphasizing a collaborative approach between the government and the tech industry to vet AI models for potential national security threats. Microsoft highlighted its role in working with US scientists to probe AI systems for unexpected behaviors, a process that will involve developing shared datasets and workflows for testing. This mirrors a similar agreement inked with the UK's AI Security Institute.
This development underscores the escalating concerns in Washington regarding the potential misuse of advanced AI, particularly its capacity to enhance hacking capabilities, as seen in recent discussions around Anthropic's Claude. By securing early access to frontier models, US officials aim to preemptively identify threats, ranging from sophisticated cyberattacks to potential military applications, before these tools become widely accessible. CAISI, serving as the government's central hub for AI model testing, has already completed over 40 evaluations, including on models not yet available to the public, often receiving versions with safety guardrails intentionally reduced for thorough security probing.
Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications.
Originally published by Asharq Al-Awsat in English.