Why Did the Pentagon Turn on Anthropic?

Beyond labels like “woke AI,” the battle between Anthropic and the government tells us the era of AI as infrastructure is firmly here. Our expert weighs in.

Written by Ilman Shazhaev
Published on Mar. 03, 2026
Image: The Pentagon (Shutterstock / Built In)
Reviewed by Seth Wilson | Mar 02, 2026
Summary: The Pentagon and Anthropic are clashing over what constitutes "lawful use" of AI in national security. U.S. officials are using labels like "woke AI" and "supply chain risk" to pressure the lab to loosen restrictions on autonomous lethal use. The fight highlights a growing battle for control over AI infrastructure.

When software becomes infrastructure, speed becomes less important than control. Whoever writes the operating rules ends up deciding what “lawful” means under real pressure.

This exact fight is playing out right now between the U.S. Department of War and Anthropic, the company behind Claude, a frontier AI language model already being pulled into national security workflows. Defense officials want broader “lawful use” with fewer policy restrictions. Anthropic wants binding limits that rule out autonomous lethal use without meaningful human oversight and reject large-scale domestic surveillance.

The contract battle is turning into a PR battle, and the language is getting sharper. David Sacks, serving as the White House AI and crypto czar, has accused Anthropic of backing “woke AI,” while Pentagon officials designated the company as a “supply chain risk.”

Those phrases are instruments. One strips legitimacy from the company’s guardrails. The other threatens to make the company radioactive inside the procurement ecosystem. Pressure like that only starts when the government relies on the tool and can’t easily replace it. At that point, it tries to force the company to change its rules.

Because dependencies like that rarely stay neutral for long.

The Battle for AI Control: Pentagon vs. Anthropic

The U.S. Department of War and Anthropic are currently locked in a high-stakes dispute over what constitutes the “lawful use” of AI in national security. As frontier models like Claude become mission-critical infrastructure, a power struggle has emerged over who defines the rules of engagement.

Key Points of Contention:

  • Lethal Autonomy: Anthropic seeks binding limits to prevent autonomous lethal use without human oversight.

  • Domestic Surveillance: The lab rejects the use of its models for large-scale domestic monitoring.

  • Procurement Pressure: Officials have labeled Anthropic’s guardrails as “woke AI” and framed the company as a “supply chain risk” to force policy changes.

  • Strategic Independence: The conflict highlights a shift where AI is no longer just a vendor tool, but strategic infrastructure that the government seeks to control.

The government is reframing the company’s right to say no to certain use cases as a national security vulnerability, blurring the line between private innovation and state-aligned infrastructure.

More From Ilman Shazhaev: Collaborative AI Is Already Live in Games. Are AI Builders Paying Attention?

How ‘Woke’ Became the Pressure Tactic

The real control in an AI system lives in the rules wrapped around the model: the policy layer that decides what it will do when the stakes are high. That policy sounds like contract fine print until it blocks a request. In an urgent workflow, the rules determine whether the model answers, refuses or sends the task in for extra review.
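
To make that decision logic concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the use-case categories, field names and rules are illustrative stand-ins, not Anthropic’s actual policy or any real deployment. It only shows the shape of the three outcomes the paragraph above describes: answer, refuse or escalate to a human reviewer.

```python
# Hypothetical sketch of a policy layer; not any lab's real implementation.
# It routes each request to one of three outcomes: answer, refuse, or
# escalate to a human reviewer, based on simple rule checks.

from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ANSWER = "answer"        # model responds normally
    REFUSE = "refuse"        # request crosses a hard red line
    ESCALATE = "escalate"    # ambiguous or high-stakes: human review required


@dataclass
class Request:
    text: str
    use_case: str            # e.g. "logistics", "targeting", "surveillance"
    human_in_loop: bool      # has a human authorized this action?


# Illustrative rules only; real policy layers are far more granular.
HARD_REFUSALS = {"mass_domestic_surveillance"}
HUMAN_REQUIRED = {"targeting", "lethal_action"}


def policy_gate(req: Request) -> Decision:
    """Decide whether the model answers, refuses, or escalates."""
    if req.use_case in HARD_REFUSALS:
        return Decision.REFUSE
    if req.use_case in HUMAN_REQUIRED and not req.human_in_loop:
        # The red line here is autonomy, not the task itself.
        return Decision.ESCALATE
    return Decision.ANSWER


if __name__ == "__main__":
    print(policy_gate(Request("plan supply routes", "logistics", False)))    # ANSWER
    print(policy_gate(Request("select strike target", "targeting", False)))  # ESCALATE
    print(policy_gate(Request("monitor citizens at scale",
                              "mass_domestic_surveillance", True)))          # REFUSE
```

The point of the sketch is that the model itself is unchanged in every case; whoever controls the rule sets controls what the system will and won’t do under pressure.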

Frontier AI labs don’t build guardrails to lecture users. They build them because general-purpose models are easy to misuse and because trust collapses when behavior turns unpredictable under pressure. Clear boundaries reduce preventable risk.

Political actors reframe those guardrails as ideology. Limits meant to stop harmful behavior get labeled “woke,” as if the company is taking a partisan stance instead of managing risk.

The accusation doesn’t need to be fair. It only needs to be memorable. The framing turns “we won’t allow this use” into “they’re imposing beliefs,” then uses the stigma to pressure the lab into loosening the rules.

The attack is really on the right to say no. Once refusal is treated as ideology, any boundary can be recast as defiance. The debate stops being about risk and turns into a test of obedience.

Public attacks are only one part of the pressure. The government can also use procurement tools to punish a company, such as warning contractors not to use its model. That kind of pressure can change behavior faster than any headline.
 

‘Supply Chain Risk’ Is the Lever That Actually Bites

When officials reach for “supply chain risk,” they’re framing a company’s choice to preserve its governance independence as a national security exposure.

A frontier model inside defense or intelligence workflows becomes more than a vendor relationship. Teams integrate it into secure systems and build routines around it. Over time, planning starts to assume the model will be available and will behave in a predictable way.

Strategic vulnerability starts there. The government can’t simply order the model to behave differently, yet the model’s owner can change the rules that control what it will answer or refuse. A private lab can change its rules faster than a government can rewrite contracts or retrain teams. If the lab tightens a restriction, even slightly, a workflow that relied on the model can break overnight. A task that used to work can start getting refused, delayed or pushed into extra review.

Anthropic’s ability to set and enforce its own red lines is the point. From Washington’s perspective, a mission-critical capability that can decide what it will refuse creates uncertainty. From Anthropic’s perspective, the moment a government can rewrite those red lines through procurement pressure, the company stops being a lab and starts looking like an arm of the state.

More on Anthropic vs. the Pentagon: The Pentagon Has Officially Blacklisted Anthropic. Here’s Why.

Every Lab Will Face Procurement Questions

Beyond Anthropic, the takeaway is that AI has matured into strategic infrastructure, and infrastructure always attracts control fights.

Defense is the most visible arena, but markets are registering the shift too. A viral research note about AI-driven job displacement helped fuel an 800-point Dow sell-off in a single session. When markets swing on a possible scenario rather than earnings, the technology has entered macroeconomic territory.

More frontier models will end up inside public-sector workflows, sometimes directly and often through contractors. As that happens, labs will face escalating demands for stability and predictability under stress. Government buyers will want terms that reduce surprise and increase their control over the technology. Labs will want boundaries that protect their legitimacy and their broader market access.

The line between private innovation and state-aligned infrastructure will keep blurring, even if nobody announces it. You’ll see it in contract language, procurement pressure and the public narratives used to justify both.

For anyone building autonomous AI systems (and I spend most of my time doing exactly that), the lesson is concrete. Governance isn’t a sidebar to capability. It’s the layer where trust is built or broken. The companies that treat their red lines as a product feature, not a PR position, will be the ones governments and markets can plan around.

Anthropic shouldn’t abandon its red line on autonomous lethal use. But refusing “all lawful use” in the abstract invites exactly the backlash we’re watching, because planners hear uncertainty.

A company in Anthropic’s position should translate principles into contract-ready terms. Define “human oversight” in operational language. Require meaningful human authorization at the point of lethal action and accept audits for compliance. Offer clear allowances for defensive and non-lethal use, plus a fast escalation process for edge cases so “no” doesn’t arrive as a surprise in a crisis.

Governments can live with boundaries when they’re predictable. Markets can live with red lines when they’re enforceable. The labs that treat constraints as product requirements, with clear definitions and verification, will be the ones Washington and enterprise buyers can plan around.
