The U.S. Department of Defense announced on Friday that it has reached agreements with seven leading technology companies—SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, and Amazon Web Services—to deploy their artificial intelligence technologies for military purposes. These contracts represent a significant escalation in the Pentagon's commitment to establishing what it calls an "AI-first fighting force," with implications that extend far beyond military strategy into questions of AI governance, corporate responsibility, and the future of weapons systems.

The agreements grant the Pentagon access to deploy these companies' AI technologies for "any lawful use," according to the Defense Department's official statement. This language is crucial: it signals the military's intention to integrate AI across all domains of warfare—from intelligence gathering and autonomous weapons systems to classified information networks and strategic decision-making. The Pentagon is budgeting $54 billion specifically for autonomous weapons development, a figure that underscores the scale of this transformation.

The Anthropic Question

What makes these agreements particularly notable is what they reveal about the fractures emerging within the AI industry over military applications. Anthropic, the maker of the popular Claude chatbot, notably refused to sign a similar contract. The startup's objection centered on the Pentagon's insistence on the "any lawful use" clause, which Anthropic argued could enable domestic mass surveillance or the development of fully autonomous lethal weapons systems. Rather than capitulate, Anthropic sued the Defense Department, and the Pentagon responded by labeling the company a "supply-chain risk"—the first time an American company has received such a designation.

The Pentagon's strategy appears designed to pressure Anthropic back to the negotiating table. By signing with Anthropic's rivals, the Defense Department hopes to demonstrate that it has alternatives—that Anthropic's principled stand, however admirable, is ultimately irrelevant to the military's technological roadmap. Yet Anthropic's latest AI model, Mythos, has complicated this calculus. The model, designed specifically for cybersecurity applications, has rattled government officials with its ability to identify vulnerabilities in well-tested software.

Geopolitical Dimensions

Among the companies that did sign, Reflection AI stands out as the most intriguing. The two-year-old startup has yet to release a publicly available model, but it is seeking a $25 billion valuation and has received backing from Nvidia as well as 1789 Capital, the venture fund where Donald Trump Jr. is a partner. Reflection's mission is to develop open-source AI models as a counter to Chinese AI firms like DeepSeek. The Pentagon's decision to include Reflection in its agreements signals a broader geopolitical strategy: the U.S. military is not just acquiring AI capabilities; it is investing in the domestic AI ecosystem to maintain technological superiority over China.

The integration of these AI systems into the Pentagon's "Impact Levels 6 and 7" network environments—classification tiers reserved for networks that carry classified information—represents a significant technical and organizational shift. These environments are designed to "streamline data synthesis, elevate situational understanding, and augment warfighter decision-making in complex operational environments," according to the Defense Department's statement. In plain language, this means AI systems will be embedded in real-time military decision-making, from tactical operations to strategic planning.

"These agreements accelerate the transformation toward establishing the United States military as an AI-first fighting force and will strengthen our warfighters' ability to maintain decision superiority across all domains of warfare."

— U.S. Department of Defense, May 1, 2026