Three stories broke on May 7, 2026 that, taken together, reveal exactly how the US government and Big Tech are repositioning around artificial intelligence — not as a technology to be cautiously regulated, but as a strategic asset to be controlled, deployed, and powered at almost any cost.

The Trump administration is studying an executive order that would require AI models to pass security vetting before public release. The Pentagon has handed a $500 million contract to Scale AI, backed by Meta. And Microsoft — the company that pledged to be carbon negative by 2030 — is quietly reconsidering its most ambitious clean energy commitments, overwhelmed by the power demands of its AI data centres.

None of these stories is entirely surprising. But the convergence of all three in a single news cycle signals something important: the AI era has arrived at a point where governments and corporations are making hard choices, and the trade-offs are becoming visible.


The White House Wants an FDA for AI

Kevin Hassett, Director of the National Economic Council, confirmed that the administration is actively studying an executive order that would mandate security vetting for new AI models before they reach the public. The analogy being used internally is the FDA drug approval process: before a model ships, it must be proven safe.

The trigger is Anthropic's "Mythos" model, which demonstrated a disturbing ability to rapidly identify and exploit decades-old software vulnerabilities. That capability, once it became known inside the administration, accelerated discussions that had been moving slowly for months.

"We need a clear roadmap to ensure AI models are proven safe before deployment. The stakes are too high to treat this as optional."

— Kevin Hassett, Director, National Economic Council

The Commerce Department's Center for AI Standards and Innovation (CAISI) has already begun conducting pre-deployment evaluations, signing agreements with Google DeepMind, Microsoft, and xAI — in addition to existing partnerships with Anthropic and OpenAI. A formal executive order would give those evaluations legal weight and extend them to any lab releasing a frontier model in the US.

The proposal faces opposition. Several AI companies have privately expressed concern that mandatory pre-release vetting could slow deployment timelines and create an uneven playing field favouring incumbents with existing government relationships. The administration has not yet committed to a timeline for the order.

Pentagon's $500 Million Bet on Scale AI


The Defense Department awarded a $500 million contract to Scale AI — the data labelling and AI infrastructure company backed by Meta — to integrate advanced AI capabilities into the Pentagon's classified networks. The deal is part of a broader push to modernise military operations with AI, and Scale AI joins Nvidia, Microsoft, Reflection AI, and Amazon as Pentagon AI partners.

Scale AI, founded by Alexandr Wang, has been aggressively pursuing government contracts over the past two years. The company's core business — curating and labelling training data — makes it a natural fit for defence applications where data quality and security classification are paramount.

The contract has drawn congressional scrutiny. Several members of the Armed Services Committee have raised concerns about the ethical implications of deploying large language models in classified environments, particularly around surveillance, targeting assistance, and the potential for AI-generated intelligence assessments to influence military decisions without adequate human oversight.

The Anthropic Holdout

Notably absent from the Pentagon's growing list of AI partners is Anthropic, which has refused to sign military contracts and is reportedly pursuing legal action against the Defense Department over the issue. The contrast is stark: while Scale AI, Microsoft, and Nvidia are deepening their ties with the military, Anthropic is drawing a line. How that posture affects the company's long-term position — commercially and politically — remains to be seen.

Microsoft's Climate Retreat


Microsoft is reconsidering its "100/100/0" clean energy target — the pledge to source 100% of its electricity, 100% of the time, from zero-carbon sources. The reason is straightforward: the company's AI data centres are consuming power at a scale that was not anticipated when the commitment was made.

Training and running large AI models requires massive, continuously operating data centres. The energy demands are not just large — they are constant. Unlike a factory that can be powered by intermittent solar or wind, an AI data centre needs reliable baseload power around the clock. That requirement is pushing Microsoft, and potentially other hyperscalers, toward energy sources that do not fit neatly into their existing sustainability frameworks.

"The commitments were made before the full scale of the AI era was understood. The maths has changed."

— Microsoft internal review, reported May 2026

The situation is not unique to Microsoft. Amazon, Google, and Meta are all facing versions of the same problem. But Microsoft's 100/100/0 pledge was among the most specific and ambitious in the industry, which makes its reconsideration particularly visible. The company has not made a public announcement, but internal discussions about revising the target have been confirmed by multiple sources.

A Broader Industry Reckoning

The energy question is becoming one of the defining tensions of the AI era. The same companies that have positioned themselves as leaders on climate are now the companies most responsible for the surge in data centre power consumption. Reconciling those two identities — without abandoning either — is a challenge that the industry has not yet solved.

On the Margins: AI Compresses 160 Years of Aging Research

Amid the policy turbulence, a quieter story deserves attention. Researchers have used AI to compress what would have been 160 years of aging research into a few years, efficiently screening billions of molecules to identify potential longevity candidates. The work opens new avenues for radical advancements in human health and life extension — and raises its own set of questions about what happens to society when the biology of aging becomes tractable.

Meanwhile, Arm Holdings has forecast higher-than-expected revenue, driven primarily by surging demand for AI data centre chips. The company's results are a reminder that the AI boom is not just a software story — the hardware layer is expanding rapidly, and the companies that design the chips powering AI are among the biggest beneficiaries.

What to Watch

The executive order on AI security vetting, if it materialises, will be the most significant US AI governance action since the Biden administration's October 2023 executive order — which the Trump administration largely rolled back. Watch for the final scope: whether it applies only to frontier models, or to all commercially released AI systems, will determine its practical impact.

On the Pentagon side, the congressional hearings on AI in classified environments are likely to intensify. The combination of Scale AI's contract and the broader list of military AI partners means that the question of AI ethics in defence is no longer theoretical — it is a live procurement issue with $500 million already committed.

And on climate: if Microsoft formally revises its clean energy pledge, expect other hyperscalers to follow. The industry has been waiting for someone to move first. That moment may be approaching faster than anyone expected.