The Trump administration has struck voluntary AI safety agreements with Google DeepMind, Microsoft, and xAI, under which the companies will share early versions of their AI models with the US government for pre-release safety testing. The deals mirror agreements the Biden administration negotiated with OpenAI and Anthropic in August 2024, extending federal visibility into frontier AI development to three additional major labs. They also signal that the current administration's approach to AI governance, despite its general skepticism of regulation, includes a meaningful role for government oversight of the most powerful AI systems.

"The deals extend the federal government's visibility into frontier AI development to cover the majority of US-based labs working at the frontier."

— White House, May 2026

What the Deals Require

Under the agreements, Google DeepMind, Microsoft, and xAI will provide the US government with early access to their most capable AI models before public release, allowing government researchers to evaluate the models for potential safety risks. The agreements are voluntary and do not carry the force of law, but they create a framework for ongoing government-industry collaboration on AI safety that the administration has described as a model for responsible AI development.

The specific terms of the agreements have not been fully disclosed, but they are understood to be similar in structure to the Biden-era deals: companies provide early model access, government researchers conduct safety evaluations, and the results inform both the companies' deployment decisions and the government's understanding of the risks posed by frontier AI systems. The agreements do not give the government veto power over model releases, but they create a channel for safety concerns to be raised before a model reaches the public.

A Bipartisan Safety Framework

The extension of the Biden-era safety framework to three additional labs is notable for what it reveals about the continuity of US AI policy across administrations. The Trump administration has generally been skeptical of AI regulation, rolling back several Biden-era executive orders on AI and positioning the US as a competitor to the EU's more prescriptive regulatory approach. But the pre-release review agreements represent a form of government oversight that the current administration has chosen to preserve and extend.

The Anthropic Exception

The announcement comes against the backdrop of the Pentagon's ongoing dispute with Anthropic, which has refused to sign military AI contracts and is reportedly pursuing legal action against the Defense Department over the issue. The contrast between Anthropic's posture and that of Google, Microsoft, and xAI, all of which have signed Pentagon contracts, illustrates the range of positions major AI labs are taking on military AI. For the Trump administration, securing safety agreements with three companies that have also signed military contracts may be a way of demonstrating that safety and national security applications of AI are not in conflict.