The bitter dispute between Anthropic, the maker of popular AI chatbot Claude, and the United States government over how its cutting-edge artificial intelligence should be deployed in national security operations has come to a conclusion — with the Pentagon ceasing all use of Claude and formally labeling the company a “supply-chain risk.”
“I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology,” President Donald Trump posted on X. “We don’t need it, we don’t want it, and will not do business with them again!”
Secretary of War Pete Hegseth soon followed up with a post of his own: “In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security,” he wrote. “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
What Is the Dispute Between Anthropic and the Pentagon About?
The Pentagon wanted unrestricted access to Anthropic’s Claude models for “all lawful use cases,” but Anthropic refused to allow its models to be used for autonomous weapons that fire without human oversight or for mass surveillance in the United States — ultimately resulting in the government dropping Claude and labeling the company a “supply-chain risk.” The conflict came to a head when it was revealed that the Pentagon had used Claude in the capture of Venezuelan President Nicolás Maduro, fueling debate around how the U.S. government can use Anthropic’s tools.
Thanks to its partnership with data analytics giant Palantir, Anthropic had long held the distinction of being the only AI company to offer its services to U.S. defense and intelligence agencies for classified use cases. At the same time, Anthropic has separated itself from its competitors by committing early on to AI safety, putting itself at odds with some of the Pentagon’s operations — specifically its use of AI for mass domestic surveillance and the development of autonomous weapons.
Tensions began to boil over when news came out that the Pentagon used Claude during the controversial capture of Venezuelan President Nicolás Maduro, igniting a full-fledged feud with Anthropic over how the U.S. government can use its AI tools. Anthropic refused to remove safeguards on Claude that limit its use in certain defense and surveillance operations, even after the government repeatedly asked the company to do so. In response, the Pentagon moved to terminate its reliance on Claude and applied the “supply-chain risk” designation.
This marks a significant rupture in what had been one of Silicon Valley’s most high-profile national security partnerships — and could alter the rules for how AI companies engage with Washington going forward.
What Did the Pentagon Want From Anthropic?
Pentagon officials had negotiated with OpenAI, Google, xAI and Anthropic to make their respective AI models freely available in unclassified settings. The first three companies agreed to lift all guardrails on ChatGPT, Gemini and Grok for government users, but Anthropic did not. Still, Claude remained the only model permitted for classified uses — until now.
The fact that Anthropic wanted to set any ground rules at all for deploying its models became a sticking point for defense and intelligence agencies, with Emil Michael, a senior Pentagon official who has been steering negotiations with Anthropic and other AI contractors, demanding that the four companies let the government “use any model for all lawful use cases.”
But unbridled government access to Claude without any established safeguards proved to be a non-starter for Anthropic, which chose to stand by its AI safety principles. Under Anthropic’s terms, the federal government would need to be granted permission to use Claude on a case-by-case basis, a slower approval process that could always end in rejection and one the Pentagon was unwilling to accept.
Now, the Trump administration has deemed Anthropic to be a “radical left, woke company,” and is reportedly preparing an executive order targeting the AI startup specifically. Michael also called Anthropic CEO Dario Amodei a “liar” with a “God complex” who was “ok putting our nation’s safety at risk.”
“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in Danger, and our National Security in JEOPARDY,” Trump continued in his post. “WE decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is about.”
Why Did Anthropic Push Back?
In 2025, Anthropic signed a two-year deal with the Department of Defense (also known as the Department of War) to collaborate on frontier AI projects in the name of national security. Crucially, the company felt Claude was well-suited for this environment precisely because of its strong policies around AI governance, safety testing and usage. Loosening these standards would compromise the core reason the DoD agreed to such a contract in the first place.
As a result, Anthropic refused the Pentagon’s terms requiring it to provide its models for all lawful use cases, emphasizing that it won’t allow its technology to be used to fire autonomous weapons without human oversight or to conduct mass surveillance within the United States. Considering that the Trump administration had used drones in the Maduro raid and bolstered immigration enforcement with digital surveillance, Anthropic’s conditions directly clashed with practices already greenlit by the president.
“We do not believe that today’s frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America’s warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights,” Anthropic wrote in a statement after the Pentagon’s decision was announced. “Above all else, our priorities are to protect our customers from any disruption caused by these extraordinary events and to work with the Department of War to ensure a smooth transition — for them, for our troops, and for American military operations.”
What Does Being a “Supply-Chain Risk” Mean for Anthropic?
Designating Anthropic as a supply-chain risk is an unprecedented move against an American company, as the label has historically been reserved for companies from adversarial countries, such as Chinese tech giant Huawei. The measure has resulted in the cancellation of Anthropic’s contract with the government, and it requires all other companies working with the Pentagon to verify that they don’t use Claude either.
The long-term business impact of this decision for Anthropic remains unclear. But for now, the clash seems to have fueled renewed interest in Claude, which managed to dethrone ChatGPT as the top downloaded app in the United States shortly after the “supply-chain risk” news broke. Claude also remains one of the most powerful models on the market today, and a popular choice for enterprises and individual users alike.
For its part, Anthropic is suing the DoD over the supply-chain risk designation, and rejected Hegseth’s claim that military contractors would be barred from working with the company. “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” the company said.
The Pentagon Has Moved on to OpenAI
Within hours of Trump ordering federal agencies to cut ties with Anthropic, OpenAI moved to fill the void, announcing a deal with the Pentagon that allows its models to be deployed in classified environments.
In a blog post outlining the agreement, OpenAI said its models cannot be used in mass surveillance of Americans, autonomous weapons systems and “high-stakes automated decisions (e.g. systems such as ‘social credit’)” — similar to the stipulations Anthropic had. However, OpenAI said that, unlike other AI companies that have “reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments,” this deal protects its red lines “through a more expansive, multi-layered approach.”
“We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections,” the blog said. “This is all in addition to the strong existing protections in U.S. law.”
Reactions have been mixed as to whether this deal does indeed protect against things like mass domestic surveillance. However, Katrina Mulligan, OpenAI’s head of national security partnerships, said in a LinkedIn post that much of the discussion around the language of its contract assumes “the only thing standing between Americans and the use of AI for mass domestic surveillance and autonomous weapons is a single usage policy provision in a single contract with the Department of War.”
“That’s not how any of this works,” Mulligan said, stating, “Deployment architecture matters more than contract language. …By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware.”
OpenAI CEO Sam Altman — who initially came out in support of Anthropic’s refusal — also fielded questions about the deal on X, where he admitted it had been rushed and resulted in significant backlash against OpenAI. Hundreds of OpenAI and Google employees have since signed a petition calling on their respective companies to mirror Anthropic’s position.
Frequently Asked Questions
Why did the Pentagon blacklist Anthropic?
The Pentagon has labeled Anthropic a “supply-chain risk” after the company refused to give the government unrestricted access to its Claude AI models. Anthropic would not allow its technology to be used for fully autonomous weapons or mass domestic surveillance, ultimately leading federal agencies to cut ties.
What does it mean to be labeled a “supply-chain risk” by the federal government?
A “supply-chain risk” designation restricts or blocks a company from doing business with the U.S. military and its contractors. In Anthropic’s case, it resulted in the cancellation of government contracts and limits on contractors’ ability to use Claude in defense-related work.
What was the dispute between the Pentagon and Anthropic about?
The conflict centered on whether the Pentagon could use Anthropic’s Claude models for “all lawful use cases.” Anthropic insisted on maintaining guardrails that prohibit autonomous weapons without human oversight and mass surveillance of Americans. The government wanted broader access. In the end, Anthropic stood firm, resulting in the Pentagon designating the company as a “supply-chain risk” and cutting ties completely.
Could Anthropic challenge the Pentagon on its decision?
Yes. Anthropic has announced that it is challenging the “supply-chain risk” designation in court, claiming it is being unfairly targeted on ideological grounds. The company argues that its safety policies are essential protections, not political statements, and says it remains open to national security work under enforceable safeguards.
