Why Are So Many Companies Selling AI Products to the U.S. Government for $1?

Major players in the space are offering a discount on services for the government as part of a long-term strategy.

Written by Ahmad Shadid
Published on Sep. 03, 2025
Reviewed by Seth Wilson | Aug 29, 2025
Summary: OpenAI, Anthropic and Google are offering AI tools to U.S. government agencies at token prices — as low as $1 or $0.47 — in a bid to secure long-term federal contracts, raising questions about security, compliance and taxpayer safeguards.

Tech giants long known for gatekeeping their artificial intelligence products are suddenly offering the U.S. government AI deals for next to nothing. Earlier this month, OpenAI offered ChatGPT Enterprise to government agencies at $1 a year.

The dollar soon became a headline price point: Anthropic offered Claude for Government and Claude for Enterprise at the same price to all three branches of the U.S. government, and Google followed with Gemini for Government at $0.47.

Why Are AI Companies Offering the U.S. Government $1 Deals?

Major AI firms like OpenAI, Anthropic and Google are pitching enterprise-grade tools to U.S. agencies at token prices — as little as $1. The strategy mirrors Palantir and SpaceX’s playbook: Land footholds through low-cost pilots, then expand into lucrative long-term federal contracts where compliance, security and standard-setting give firms both revenue and influence.

More From Ahmad Shadid: Solving the AI ‘Body’ Problem Is Crucial to Unleashing Its Power

 

A Long Play, Not a Discount

Beyond the shock of these prices, the greater question is why these companies are offering enterprise products at giveaway rates. The answer is strategy: These firms are betting that a token price tag today will lock them into far more lucrative arrangements tomorrow, following a path blazed by Palantir and SpaceX.

The federal government is the most coveted customer in tech. A single defense or infrastructure contract can reshape a company’s fortunes, turning loss-making ventures into enduring enterprises. Palantir, for instance, signed a 10-year contract with the U.S. Army worth approximately $10 billion, and SpaceX holds similar contracts for products such as Starlink and Starshield.

In total, SpaceX has approximately $22 billion in federal contracts. Against that backdrop, these giveaway AI offers look less like generosity and more like positioning. Anthropic, OpenAI and xAI could land government deals similar to those of SpaceX and Palantir, and offering freemium packages for a trial period removes procurement friction before the companies pursue bigger opportunities. For these firms, which remain heavily unprofitable, securing a foothold inside federal agencies may be less about this year’s revenue and more about long-term survival.

A few things to consider during this period include the scope of the AI services being offered, the political landscape, the possible premium tiers that might pop up, and the standard setting for the products to meet government needs. 

So far, the government is centralizing software purchases through the General Services Administration’s (GSA) “OneGov” program, a strategy that sets government-wide standards for products like those Anthropic, xAI and Google are offering. Comparable enterprise software typically costs agencies millions, if not billions, of dollars, which makes these AI companies direct competitors to their more established counterparts for the same budgets.

The first year of a contract is arguably a period of putting down roots in a lucrative market before the products convert to paid versions. The same model helped Palantir, which began with small-scale pilot projects before securing extensive defense and intelligence deals.

Of course, the most obvious counterargument to this analysis is that the companies are acting out of goodwill. Alternatively, these could be convenient pilot programs that participants can join and exit at any time. We could also fairly assume that the government is exerting its influence to gain vital technology at very low prices.

While perfectly plausible, these arguments are too timid once you consider AI agents and their power to entrench themselves in support tasks such as research, case management and workflow automation. Once government agencies begin to rely on AI products for this work, the services will no longer be on a trial basis; they will change how the agencies operate.

For example, Palantir’s Gotham is a central part of how agencies like ICE and the Department of Homeland Security do their work; such agencies are rebuilding entire workflows and systems around the company’s products.

These counterarguments also ignore the fierce competition in the AI space, which weighs heavily on the profitability of AI products, even for established big tech companies like Google. And although Google and Anthropic are seeing an estimated surge in earnings from AI products, xAI is not yet on the same level. That gap helps explain why more companies in the space might seek the backing of government agencies.

For xAI, which is saddled with debt and scrambling to distinguish itself from OpenAI, a government endorsement would be nothing short of life support. For Anthropic and Google, it offers not just revenue but prestige, an official seal of approval in a market where credibility is currency.

 

The Security Gamble With Cheap AI

It goes without saying that information circulating in government agencies requires high levels of security. Unfortunately, the public still has limited trust in AI products and in how the data used to train the models is handled.

OpenAI has publicly stated that enterprise data is not used for AI model training by default. Anthropic likewise says such inputs and outputs are used only with explicit permission, and Google says Gemini for Government will align with FedRAMP compliance requirements. These statements are carefully worded, and rightfully so. But a qualifier like “by default” means the promises are not set in stone.

With this in mind, it is important to note the issues that still exist with AI product procurement. The GSA’s OneGov strategy has the agency dealing directly with original equipment manufacturers (OEMs) when procuring software for federal use. This move has sparked some outcry from resellers, however, since they have traditionally been part of the procurement chain and fear the new arrangement may put them out of business.

These issues already raise questions about whether such procurement pathways can meet stricter mission requirements. Defense and intelligence software buyers, for example, must meet especially high compliance standards when dealing with sensitive classified workloads, with mandates that often range from FedRAMP High to Impact Level 5.

So, agencies like the DOJ may be unable to run their workloads through shared commercial software, no matter how confident the vendors are in the privacy of their operations. Meeting those agencies’ needs would also require the tech companies to invest billions in purpose-built AI infrastructure.

As such, the freemium options may be an attempt by the AI companies to reassure government agencies about the safety of their products.

Additionally, the aggressive pricing shows just how fierce the contest for government favor has become. Unlike the private sector, where customers can be fickle, federal contracts offer both steady revenue and political leverage. 

 

What Taxpayers Should Ask

The government’s appetite for AI is understandable. From predictive supply chain management to intelligence analysis, the promise is real. But taxpayers should demand clarity on two fronts. First, what safeguards are in place to prevent sensitive data from becoming training fodder? Second, are agencies measuring effectiveness rigorously before signing long-term, high-value contracts?

Although these guardrails don’t cover all the potential pitfalls of the freemium options, they point to some obvious implications for the companies. The government’s normalization of Claude, Gemini and ChatGPT opens the door to increased demand for verifiable data lineage and tamper-evident auditability of AI outputs.

This demand is also a logical starting point for blockchains that pin prompts, responses and model versions in immutable logs. Such logs would help with FOIA requests, evidentiary standards, audits and other information-heavy government processes. If these blockchains catch on, however, decentralized AI networks could find themselves at a disadvantage in meeting government standards, including FedRAMP-level controls, without extensive scrutiny.
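To make the idea concrete, here is a minimal sketch of what tamper-evident auditability could look like: a hash-chained log in which each prompt/response record includes the hash of the previous record, so altering any past entry invalidates every later one. The Python below is purely illustrative; the AuditLog class, its field names and its methods are hypothetical, not any vendor’s or agency’s actual schema.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative sketch of a tamper-evident audit log for AI interactions.
# Field names and structure are assumptions, not a real vendor schema.

class AuditLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value anchoring the chain

    def append(self, prompt: str, response: str, model_version: str) -> dict:
        """Record one interaction, chained to the previous entry's hash."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "response": response,
            "model_version": model_version,
            "prev_hash": self._last_hash,
        }
        # Canonical JSON (sorted keys) makes the hash reproducible later.
        payload = json.dumps(record, sort_keys=True).encode("utf-8")
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; editing any past entry breaks the chain."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode("utf-8")
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.append("Summarize case file 42.", "Summary text...", "model-v1.0")
assert log.verify()  # True until any stored entry is altered
```

A production system would anchor these hashes in an external or distributed ledger so that the log’s operator cannot silently rewrite it, which is the property blockchain-based proposals aim for.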

It could also mean that the most probable winners are more traditional systems: NIST-aligned, ATO-hardened agency clouds, identity providers and vendors able to interpret AI workflows.

You can also expect markets to react accordingly. If government agencies renew their contracts with favored AI providers en masse, premium product tiers are likely to follow. Palantir and Leidos, for instance, have built higher-end, more risk-resistant offerings on the back of government spending, and once such offerings stick, government support tends only to grow them. Google, Anthropic and xAI could introduce similar premium tiers if the government invests in their products for another year.

Likewise, AI-related tokens, especially those tied to these companies, could see a surge in community interest and possibly prices. That still depends heavily on whether a second-year renewal happens, however, so we can likely expect to learn more in August and September 2026.

More on Government Contracts: Inside Palantir: The Tech Giant Powering Government Intelligence

 

What We Should Watch For

Considering the breadth of AI products, Google is already offering the most at the lowest price. More government involvement could mean higher prices and more compliance demands, however. The standards set for these products will also shape their costs and influence whether U.S. allies adopt similar AI products.

It is also crucial to watch whether the companies start using government data in AI model training, which could further shape compliance demands for AI products. Lastly, how exactly will pricing tiers change to offer more security for government agencies?

Taken together, these factors suggest that the current $1 offerings are only scratching the surface, and that prevailing assumptions about data privacy may take further hits in the coming months.
