Is OpenAI’s Pentagon Deal as Safe as It Sounds?

The U.S. military can now deploy OpenAI’s technology in classified operations. The contract promises safeguards against mass surveillance of Americans and autonomous weapons. Critics question just how strong those protections really are.

Written by Ellen Glover
Published on Mar. 04, 2026
Reviewed by Sara B.T. Thiel | Mar. 04, 2026
Summary: The Pentagon has signed a deal with OpenAI to deploy its models in classified defense work after rejecting Anthropic’s stricter terms. OpenAI says its contract protects against mass surveillance and autonomous weapons, but critics question whether those limits hold under existing laws and policies.

OpenAI, the maker of ChatGPT, has entered into a contract with the Department of Defense that will allow its AI models to be used for classified operations. This is a first for the company, which until now had only provided its services for unclassified government work.

What Does OpenAI’s Deal With the Pentagon Say?

OpenAI’s deal with the Pentagon allows its AI models to be used for classified military operations. The contract includes “red lines” prohibiting domestic mass surveillance, autonomous weapons without human control and “high-stakes” AI decisions without human approval, while emphasizing compliance with existing U.S. laws and Defense Department policies.

News of the deal broke just hours after negotiations for a similar contract between the Pentagon and rival AI firm Anthropic collapsed. Throughout that week, Anthropic had been engaged in a very public tussle with the Defense Department over how its Claude models could be used, with government officials demanding the ability to freely access the technology for all lawful defense and intelligence purposes. Anthropic refused, insisting on contractual guarantees that its models would not be used to surveil Americans or deploy autonomous lethal weapons. But the government insisted that a private company does not have the right to dictate how it conducts national security operations.

When the two sides failed to reach an agreement at the end of February, President Donald Trump directed all federal agencies to stop using Anthropic’s technology, calling it a “radical left, woke company.” Soon after, Defense Secretary Pete Hegseth directed the DoD to formally declare Anthropic a “supply-chain risk to national security,” a designation typically reserved for companies from adversarial nations. The label effectively bars Anthropic from doing business of any kind with the U.S. government or its contractors going forward.

As relations between Anthropic and the Pentagon soured, OpenAI entered its own negotiations with the Defense Department, even after CEO Sam Altman publicly backed Anthropic’s stance. Still, OpenAI was able to close the deal, claiming its contract includes language prohibiting the use of AI for domestic mass surveillance and autonomous weapons — guardrails the company says align with both its safety policy and the government’s goals. 

“We think the U.S. military absolutely needs strong AI models to support their mission, especially in the face of growing threats from potential adversaries who are increasingly integrating AI technologies in their systems,” OpenAI said in a press release. “We originally did not jump into a contract for classified deployment, as we did not feel that our safeguards and systems were ready, and have been working hard to ensure that a classified deployment can happen with safeguards to ensure that red lines are not crossed.”

Nevertheless, OpenAI faced substantial public backlash for its decision. In the days after the announcement, ChatGPT uninstalls surged by nearly 300 percent while U.S. downloads of Claude skyrocketed, briefly making Claude the No. 1 app in the country. Critics across social media began questioning the company’s insistence that the deal preserved OpenAI’s safety guardrails. After all, why would the Pentagon suddenly agree to terms it had so recently rejected from Anthropic? Did the government cave here? Did OpenAI?

Related Reading: The Pentagon Has Officially Blacklisted Anthropic. Here’s Why.

Key Details of the OpenAI-Pentagon Deal

OpenAI’s agreement with the Pentagon allows its AI models to be deployed on classified national security operations. The contract enables the military to use this technology for “all lawful purposes,” but OpenAI has emphasized that the deal is structured around strict guardrails rooted in existing U.S. regulations and Defense Department policies. 

According to an OpenAI blog post outlining the agreement, the contract has three explicit “red lines” that cannot be crossed:

  1. OpenAI models must not be used in the mass surveillance of Americans.
  2. OpenAI models must not be used to direct autonomous weapons systems in cases where policy or law requires human control.
  3. OpenAI models must not be used in “high-stakes autonomous decisions” that require human approval.

The company also says that any use of its models must comply with all applicable constitutional protections, surveillance laws and Pentagon policies, including the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act (FISA) of 1978 and Executive Order 12333. After some initial criticism from the public, OpenAI further amended the contract to clarify that its technology would not be available to intelligence agencies like the National Security Agency (NSA), and that they would need a separate agreement to gain access.

To enforce these limits, OpenAI says its models will run through controlled cloud systems and remain subject to the internal safety controls the company calls its “safety stack.” The company has explicitly stated that it is not providing the Pentagon with “guardrails off” or “non-safety trained models,” nor is it deploying models in edge AI devices, where they could possibly be used for autonomous lethal weapons.

To ensure proper oversight, some OpenAI employees will receive security clearances to check in on the systems. And the company will introduce “classifiers,” or small models that can monitor large models, potentially blocking them from performing specific actions.
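
OpenAI hasn’t described these classifiers in technical detail, but the general pattern is a familiar one: a lightweight model scores each proposed action before the larger model is allowed to act on it. Below is a minimal sketch of that gating pattern in Python; all names, categories and thresholds are illustrative assumptions, not details of OpenAI’s actual system.

```python
# Hypothetical sketch of classifier gating: a small "monitor" model
# screens a proposed action before the primary model may act on it.
# All names, categories and thresholds are illustrative assumptions,
# not OpenAI's actual implementation.

BLOCKED_CATEGORIES = {"domestic_mass_surveillance", "autonomous_weapons_control"}


def classify(action_description: str) -> dict[str, float]:
    """Stand-in for a small classifier model that scores an action
    against each restricted category (scores in [0, 1])."""
    # In a real system this would be a model call; here it is a stub.
    return {category: 0.0 for category in BLOCKED_CATEGORIES}


def gate_action(action_description: str, threshold: float = 0.5) -> bool:
    """Allow the action only if no restricted category scores at or
    above the blocking threshold."""
    scores = classify(action_description)
    return all(score < threshold for score in scores.values())


if gate_action("summarize open-source reporting on logistics routes"):
    print("Action permitted; forwarding to the primary model.")
else:
    print("Action blocked by the safety classifier.")
```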

Related Reading: Is AI Shaping Up to Be America’s Next Political Fault Line?

How Is OpenAI’s Contract Different From Anthropic’s?

Of course, neither OpenAI’s nor Anthropic’s contract with the DoD has been publicly released in its entirety, so a thorough comparison is impossible. But at a high level, both companies say they oppose mass surveillance of Americans and fully autonomous weapons. The key distinction is how they draw the line. Anthropic sought explicit contractual bans on those uses, regardless of whether the government argued they were legal. OpenAI, on the other hand, allows its models to be used for “all lawful purposes.” In practice, that means if surveillance or weapons deployment falls within existing U.S. law or Pentagon policy, OpenAI’s contract would permit it and Anthropic’s would not.

The differences in the language are subtle, but they’re there. For example, Altman says OpenAI’s contract requires “human responsibility” in the use of autonomous weapons, while Anthropic says it requires “proper oversight” and “guardrails” — protections it argues don’t currently exist. It’s difficult to parse what exactly those phrases mean without seeing the full contractual definitions, but “human responsibility” could imply accountability after the fact, whereas Anthropic’s push for oversight suggests humans would need to be involved before or during any decision to use lethal force.

Even so, OpenAI claims its contract “provides better guarantees and more responsible safeguards” than any previous agreement for classified AI deployment, including Anthropic’s. Unlike other AI companies that have “reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments,” OpenAI says its deal protects its “red lines” through a more “expansive multi-layered approach.” 

“We think our red lines are more enforceable here because deployment is limited to cloud-only (not at the edge), keeps our safety stack working in the way we think is best, and keeps cleared OpenAI personnel in the loop,” OpenAI said. “We don’t know why Anthropic could not reach this deal, and we hope that they and more labs will consider it.”

Anthropic has not publicly commented on OpenAI’s deal, but the company said in a blog post that it had negotiated “in good faith” with the Pentagon before ultimately reaching an “impasse” over mass surveillance of Americans and fully autonomous weapons.

“We held to our exceptions for two reasons. First, we do not believe that today’s frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America’s warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights,” the company said. “No amount of intimidation or punishment from the Department of War will change our position on mass surveillance or fully autonomous weapons.”

Related Reading: Why Did the Pentagon Turn on Anthropic?

Legal and Ethical Concerns With the OpenAI-Pentagon Deal

While OpenAI claims its Pentagon agreement has substantial “red lines” to protect against domestic surveillance and autonomous weapons use, several AI experts, privacy advocates and even former OpenAI employees argue those lines may not be as solid in practice as they appear on paper.

Much of the controversy surrounding this contract has to do with its use of the phrase “any lawful use,” which essentially ties OpenAI’s obligations to existing legislation and Pentagon policies. But Pentagon policies can be changed at any time, and agencies have been known to reinterpret existing laws in ways that effectively grant them new powers. What’s more, existing laws already allow the government to collect, analyze and even purchase a vast array of data on citizens, including their location, social media posts, phone records and much more. Add a layer of artificial intelligence on top of that, and the government could potentially conduct widespread surveillance operations with unprecedented levels of detail, all within the bounds of the law.

In fact, according to Mike Masnick, founder of the tech industry blog Techdirt, Executive Order 12333 is what allows the NSA to “hide its domestic surveillance” by “tapping into [phone] lines *outside the US* even if it contains info from/on US persons.” Despite years of promises from intelligence agencies to reform and periodic attempts by lawmakers to change the rules, the government’s surveillance powers have largely stayed intact, meaning that OpenAI’s commitment to follow “existing law” may not be much of a safeguard after all.

“The intelligence law section of this is very persuasive if you don’t realize that every bad intelligence scandal in the last 30 years had a legal memo saying it complied with those authorities,” Dave Kasten, Palisade Research’s head of policy, wrote of OpenAI’s agreement on X.

For its part, OpenAI says it has amended its contract to ensure its technology could not be used by intelligence agencies like the NSA, and says it prohibits any “deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” The company also says it “explicitly references the surveillance and autonomous weapons laws and policies as they exist today,” meaning that even if those rules change in the future, its systems must remain aligned with current standards.

OpenAI’s rules around lethal autonomous weapons also have some gaps, according to critics. The contract states OpenAI’s technology “will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control,” putting it in compliance with a 2023 Department of Defense directive. Anthropic, meanwhile, sought to ban the use of these systems entirely — at least until the technology is deemed ready — claiming that while they “may prove critical to our national defense,” today’s AI systems are “simply not reliable enough to power fully autonomous weapons.”

The only other restriction OpenAI appears to have placed on AI-enabled weapons is that its models may only run in the cloud, not on “edge” devices that process data locally, where the company says there’s a greater “possibility of usage” for autonomous lethal weapons. But an unnamed source told The Verge that this distinction may offer little real protection for either of OpenAI’s “red lines.” Mass surveillance requires such large volumes of data that it’s virtually impossible to carry out anywhere other than the cloud, according to the source, and much of an autonomous weapon’s so-called “kill chain” (identification, tracking, analysis, etc.) is carried out by large algorithms in the cloud, not on the device itself. So even if OpenAI’s technology isn’t actually pulling the trigger, so to speak, it could very well be powering everything leading up to that point.

It’s this kind of carefully crafted, often vague language that makes it difficult to understand not only what the contract truly allows, but also how much real control OpenAI and its guardrails have in practice, Sarah Shoker, a senior research scholar at the University of California, Berkeley, and former head of OpenAI’s geopolitics team, told The Verge.

“The use of the word ‘unconstrained,’ the use of the word ‘generalized,’ ‘open-ended’ manner — that’s not a complete prohibition. That is language that’s designed to allow optionality for leadership,” Shoker said. “It allows leaders also not to lie to their employees in the event that the Pentagon does use the LLM in a legal manner without OpenAI leadership’s knowledge.”

Related Reading: AI Fighter Jets Are Almost Here — And War May Never Be the Same

What Does OpenAI Say About All This?

In a lengthy LinkedIn post, OpenAI’s head of national security partnerships Katrina Mulligan said that much of the discussion surrounding the company’s contract wrongly assumes “the only thing standing between Americans and the use of AI for mass domestic surveillance and autonomous weapons is a single usage policy provision in a single contract with the Department of War.”

“That’s not how any of this works,” Mulligan wrote. “The Department was not asking us to modify how our models behave. Their position was, build the model however you want, refuse whatever requests you want, just don’t try to govern our operational decisions through usage policies. For whatever risk surface area remains, our safety stack, refusal policies, and guardrails become another protection. And those technical controls are often more reliable than contract clauses anyway. Our contract gives us control over the models and safety stack we deploy, and the ability to improve them over time.”

“U.S. law already constrains the worst outcomes,” she continued. “We accepted the ‘all lawful uses’ language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can’t anticipate.”

Overall, OpenAI has portrayed this agreement with the Pentagon as a way to responsibly bring advanced artificial intelligence into national defense — as well as “de-escalate” the situation between the Pentagon and Anthropic. The company says it pushed for the same contract terms to be made available to “all AI labs” and urged the government to “try to resolve things with Anthropic,” stating that “the current state is a very bad way to kick off this next phase of collaboration between the government and AI labs.”

Frequently Asked Questions

How is OpenAI’s Pentagon contract different from Anthropic’s?

Both companies say they oppose mass surveillance and fully autonomous weapons, but Anthropic sought outright bans, while OpenAI allows its models to be used for “any lawful purpose.” In practice, this means OpenAI’s technology could be used for activities considered legal under current U.S. law, whereas Anthropic would not permit them.

Does the contract allow OpenAI’s models to power autonomous weapons?

The contract prohibits the use of OpenAI’s models to independently control lethal weapons where law or policy requires human oversight. However, critics have noted that AI could still power the “kill chain” (targeting, tracking and analysis), even if humans make the final firing decision.

Can OpenAI’s technology be used to surveil Americans?

OpenAI says its contract explicitly forbids using its AI for mass domestic surveillance. Its models must comply with existing U.S. laws, surveillance regulations and Pentagon policies, and they are not available to intelligence agencies like the NSA under the current deal. But critics warn that the “any lawful use” language could allow broad data collection under current laws, meaning OpenAI could still indirectly support large-scale surveillance activities within legal boundaries.
