The showdown between the U.S. government and Anthropic over military access to Claude models has captivated the tech world, culminating in Anthropic suing the Pentagon over its decision to blacklist the company as a “supply-chain risk,” a move blocked by a federal judge. Sensing an opportunity, OpenAI quickly landed an agreement with the Department of Defense (DoD) to provide its artificial intelligence systems in classified settings.
As the dust settles from this initial skirmish, the ultimate winner might actually be a name that, until now, has largely flown under the radar: Google.
What Is Google’s Deal With the Pentagon?
Google’s deal with the Pentagon involves providing its Gemini AI models for use across military and government operations, including both unclassified and reportedly classified settings. The agreement allows the Department of Defense to use the technology for a wide range of “lawful government purposes,” such as data analysis and mission support, with stated limits on things like domestic surveillance and fully autonomous weapons. However, Google does not have final control over how its AI is used and must make adjustments to its safety settings and filters at the government’s request. The partnership builds on earlier efforts to integrate Google’s technology into Pentagon systems used by defense personnel.
The company has signed an agreement with the Pentagon allowing the Department of Defense to use its AI models for “any lawful government purpose,” including classified work. First reported by The Information, the deal apparently includes limits on domestic mass surveillance and autonomous weapons without proper human oversight, but it also blocks Google from being able to veto how the government uses its technology, and requires that the company make adjustments to its safety settings and filters at the government’s request. The language is similar to that of other deals the Defense Department struck in March 2026 with OpenAI and Elon Musk’s xAI — and later Amazon, Microsoft and Nvidia as well — to use their AI models on classified networks.
Google has confirmed the existence of the contract, but has not publicly commented or expanded on its details.
“We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security,” Jenn Crider, a Google spokeswoman, told the New York Times. “We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.”
All told, the move reflects just how essential artificial intelligence has become to government infrastructure, and positions Google for further federal collaboration — potentially giving the tech titan the upper hand over its rivals as Trump pushes for American AI to go global.
What to Know About Google’s AI Deals With the Pentagon
Not much is known about this latest contract between Google and the Pentagon beyond what has been published in The Information, and neither party has publicly commented on its details. However, this isn’t the first deal struck between Google and the DoD.
In December 2025, the company agreed to deploy its Gemini for Government platform on GenAI.mil, the Department of Defense’s enterprise AI platform that is used by 3 million approved civilian and military personnel. Another more recent deal builds on this agreement, expanding government access to include a new platform known as Agent Designer. Added as a feature within Gemini for Government, Agent Designer is a low- and no-code platform that enables users to design, train and deploy AI agents using just natural language. As a result, even those with zero programming experience can create and manage either a single agent that performs one-step tasks or teams of agents that perform multi-step tasks.
Google envisions any government employee applying agents to execute actions that are repetitive, complicated or tedious, such as:
- Establishing step-by-step checklists to track to-do items for projects.
- Writing drafts for white papers, frameworks, meeting notes and other documents.
- Analyzing data within documents and generating reports with key insights.
- Saving files and email attachments by connecting to Google Workspace applications.
The idea is that government personnel can better leverage agentic teams to develop more efficient workflows and raise their productivity. AI agents and the promise they hold are no longer a secret, though, fueling a sense of urgency around Google’s decision to double down on its partnership with the DoD as other tech companies enter the fray.
Why Strike a Government Deal Now?
Several companies are vying to fill the void left by Anthropic and become the next go-to provider for government AI in classified and unclassified settings. Wasting no time after its initial deal with the Pentagon, OpenAI signed another contract granting U.S. defense agencies access to its AI models via Amazon Web Services. And Elon Musk’s xAI already inked a deal that allowed the Pentagon to use its Grok model in classified systems even before the complete breakdown of talks with Anthropic.
Further solidifying its relationship with the Pentagon is therefore a necessity for Google, helping the company stay competitive and maintain a more central role in the U.S. AI landscape.
What This Reveals About the State of the AI Industry
In the immediate aftermath of the Anthropic-Pentagon fallout, dozens of Google employees (including its chief scientist Jeff Dean) signed on to an amici curiae brief supporting Anthropic and its refusal to allow the U.S. government to use its Claude models for domestic mass surveillance and autonomous weapon systems that fire without human oversight. That said, this gesture carries little weight given how Google’s stance on issues like war has evolved over the past decade.
For instance, Google fired 50 employees in April 2024 after protests over the company’s ties to the Israeli government and its military campaigns. Following this move, CEO Sundar Pichai shared a company-wide memo denouncing political initiatives among employees that might get in the way of the company’s success.
“This is a business,” Pichai wrote. “Not a place to act in a way that disrupts coworkers or makes them feel unsafe, to attempt to use the company as a personal platform, or to fight over disruptive issues or debate politics.”
It seems the rest of the tech industry has adopted this attitude as well, agreeing to contracts that cater to the Pentagon’s preferences. Of course, choosing the path of least resistance makes sense when Trump has rewarded businesses that cooperate with his administration with lucrative deals, massive infrastructure projects and opportunities to shape national policy. Meanwhile, Anthropic serves as a reminder of how quickly companies can become political pariahs when they step out of line, further encouraging tech leaders to stay on Trump’s good side.
As the AI race accelerates, ethical concerns are becoming harder for tech companies to prioritize, and Google can’t afford any slip-ups at a moment when Trump plans on working with U.S. companies to distribute their AI tech stacks across the world. And this strategy of staying out of trouble might just pay off.
Developments in the Google-Pentagon AI Deal
Here is a concise timeline of how Google’s relationship with the Pentagon has evolved — from small, early contracts and industry shakeups to internal employee pushback.
Google Signs Classified AI Deal With the Pentagon (April 2026)
According to reporting from The Information, Google has entered into a classified agreement allowing the U.S. Department of Defense to use its AI models for “any lawful government purpose.” The reported terms include limits on domestic surveillance and autonomous weapons without human oversight, but also indicate Google would not have veto power over how the government ultimately uses its technology. Google has confirmed the existence of the deal, but has not publicly commented on its details.
Google Employees Push Back on Potential Military Use (April 2026)
More than 600 Google employees signed a letter urging CEO Sundar Pichai to reject classified AI work with the Pentagon, warning that such partnerships could enable harmful military applications. The internal backlash echoes earlier employee protests over military contracts and highlights ongoing ethical concerns within the company.
The Pentagon’s Fallout With Anthropic Gives Google an In (March 2026)
The Department of Defense officially designated Anthropic a “supply-chain risk” after the company refused to loosen safeguards on its Claude models, effectively cutting it off from future government contracts. The ongoing dispute created an opening for competitors like Google to step in as defense partners.
Google Strikes a Deal With the Pentagon (December 2025)
Google agreed to deploy its Gemini for Government platform on the Pentagon’s GenAI.mil system, making its models available to millions of approved defense and civilian personnel for unclassified work. The deal also introduced tools like Agent Designer, a low- and no-code platform for building AI agents, laying the groundwork for deeper military integration.
Google Exits Project Maven After Internal Backlash (April 2018)
Google withdrew from the Pentagon’s Project Maven — an initiative that uses machine learning to analyze drone surveillance footage and automatically identify objects like vehicles or people — following widespread employee protest. Although the program continues under other contractors and remains a part of the Pentagon’s broader push to integrate AI into military intelligence, Google’s exit became a defining moment in the debate over tech companies’ role in defense work.
Frequently Asked Questions
What is Google’s Agent Designer tool?
Agent Designer is included as a feature of Google’s Gemini for Government platform. As a low- and no-code platform, Agent Designer enables approved civilian and military personnel to use natural language to build, train and deploy their own AI agents — removing the need for coding experience. The tool is intended to complete unclassified work like writing step-by-step checklists, drafting white papers and generating insights from reports.
Can Google control how the Pentagon uses its AI?
No — under the reported terms of its latest deal with the Pentagon, Google does not have final authority over how its AI is used. The company must adjust safety settings and filters at the government’s request and cannot veto specific use cases once the technology is deployed.
Why is Google expanding its relationship with the Pentagon now?
Google is deepening its partnership in part due to increased competition for government AI contracts after Anthropic lost access to Pentagon work. With rivals like OpenAI and xAI securing defense deals of their own, strengthening ties with the Department of Defense helps Google stay competitive and maintain a central role in the U.S. AI ecosystem.
How has the public reacted to government AI deals?
The American public has largely rewarded Anthropic for standing up to the Pentagon and punished OpenAI for stepping in to secure its own deal. Amid safety concerns surrounding OpenAI’s government contract, users have fled ChatGPT for Claude, making Anthropic’s chatbot the App Store’s top free application. Businesses have also pivoted, with the vast majority of companies (73 percent) preferring Anthropic over OpenAI when investing in enterprise AI for the first time. Given Americans’ anxiety around AI, government use of the technology has been received poorly, especially in military and intelligence contexts.
