In an unlikely turn of events, OpenAI, Anthropic and Google are joining forces to combat distillation attacks allegedly carried out by their Chinese competitors. Model distillation is a common technique in the artificial intelligence industry, where a smaller AI model is trained on the outputs of a larger, pre-trained model. This way, companies can develop cheaper models that mirror the capabilities of their more powerful counterparts.
Why Are OpenAI, Anthropic and Google Teaming Up?
OpenAI, Anthropic and Google are partnering to share information on how to prevent adversarial distillation attacks — covert attempts by outside organizations to train smaller models on the outputs of these companies’ models. OpenAI and Anthropic have accused several Chinese startups of carrying out these attacks, and distilling a company’s model without its approval is a serious enough affront that the tech titans are now sounding the alarm together. They have shared details on how to address “adversarial distillation” attacks through the Frontier Model Forum, an industry group the trio founded alongside Microsoft in 2023.
With the United States and China locked in a tight battle for technological dominance that could tilt in either direction at any given moment, this three-pronged alliance may signal the start of a new era — one where national security interests take precedence over tech innovation and redefine how the AI industry operates going forward.
This All Started With DeepSeek
Chinese AI startup DeepSeek took the tech industry by storm in early 2025 with the release of its DeepSeek-R1 model, which rivaled the performance of models made by American tech giants at a fraction of the cost. U.S. tech stocks plummeted as a result, spurring a renewed push among top AI players to expand their infrastructure and reestablish their edge over smaller competitors like DeepSeek. But this Cinderella story took an ugly turn when OpenAI CEO Sam Altman accused DeepSeek of “inappropriately” distilling the company’s models to train R1.
This is where adversarial distillation comes into play. According to the Frontier Model Forum, malicious actors can secretly train smaller models on the outputs of a larger model by submitting prompts that elicit specific responses. These outputs can be used to understand how a large language model produces its answer, revealing its chain-of-thought reasoning. They may also be used to generate synthetic data for additional training and serve as a reference point for a smaller model to compare its own responses to, aiding in reinforcement learning.
This method lets a rival build much cheaper models that compete directly with the original, without paying the original developer any compensation. These serious allegations have tainted DeepSeek’s sudden rise to stardom, and tensions have only grown as more voices join OpenAI in crying foul over R1’s accomplishments.
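The mechanics described above can be illustrated with a toy sketch. This is a simplified, hypothetical example — the “teacher” here is a small fixed linear map standing in for a frontier model’s output layer, and the attacker sees only its outputs, never its weights. The student is trained to match the teacher’s output distributions (its “soft labels”) via cross-entropy, which is the core of distillation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical "teacher": a fixed linear map standing in for a large
# model. The attacker cannot see these weights, only the outputs.
W_teacher = rng.normal(size=(8, 4))

# Step 1: the attacker submits many prompts (here, random feature
# vectors) and records the teacher's output distributions.
prompts = rng.normal(size=(512, 8))
teacher_probs = softmax(prompts @ W_teacher)

# Step 2: train a smaller "student" to reproduce those outputs by
# minimizing cross-entropy against the teacher's soft labels
# (plain gradient descent on a linear model).
W_student = np.zeros((8, 4))
for _ in range(300):
    student_probs = softmax(prompts @ W_student)
    grad = prompts.T @ (student_probs - teacher_probs) / len(prompts)
    W_student -= 0.5 * grad

# The student now mimics the teacher on inputs it never queried,
# without ever accessing the teacher's weights.
test_x = rng.normal(size=(100, 8))
agreement = np.mean(
    softmax(test_x @ W_teacher).argmax(1)
    == softmax(test_x @ W_student).argmax(1)
)
print(f"student/teacher agreement: {agreement:.0%}")
```

Real-world distillation operates at vastly larger scale and often uses generated text rather than probability vectors, but the principle is the same: the model’s outputs alone carry enough signal to clone much of its behavior.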
Why Are OpenAI, Anthropic and Google Teaming Up Now?
OpenAI has doubled down on its claims, with Altman more recently accusing DeepSeek of continuing to “free-ride on the capabilities developed by OpenAI and other U.S. frontier labs.” The difference this time around is that OpenAI isn’t alone in airing its grievances.
Anthropic has widened the scope of the problem, publishing a blog post that accuses Chinese AI labs DeepSeek, Moonshot and MiniMax of illegally distilling its Claude models to replicate their capabilities in computer vision, agentic reasoning and agentic coding, among other areas. And while Google hasn’t named any names, it released a 2026 report noting an increase in “distillation attacks” on its AI models in the last quarter of 2025.
These companies may have their own AI ambitions, but they have seemingly set them aside in the face of a shared foe. Now that these tech titans are on the same team, the stage is set for the AI race to take on a more nationalistic tone, as American tech leaders move to counter China’s technological gains.
What Does This Partnership Mean for the Global AI Race?
The OpenAI-Anthropic-Google partnership comes at a time when America’s lead in the AI race is more precarious than ever. Other regions are investing in their own tech stacks: Europe is considering an overhaul of its regulatory framework to boost companies like Mistral, while the Middle East is reproducing DeepSeek’s success with models like K2 Think. Still, the ultimate threat to U.S. tech supremacy is China, which has established a substantial lead in the humanoid robotics sector and could go on to control the growing physical AI niche.
Amid a political landscape wracked by President Trump’s global tariffs and military actions, more powerful models could become national security threats. Anthropic has limited the rollout of its Mythos model to a handful of tech companies due to concerns over its hacking abilities, and OpenAI will follow suit when it releases a new cybersecurity product. After all, if other countries can distill these models to design their own models for potent software and weapons, that would negate the advantage the U.S. has built through its own AI-military partnerships.
Broad model releases could then be increasingly replaced by industry alliances, where a few companies exchange resources to protect their products and sharpen their competitive edge at the expense of everyone else. OpenAI, Anthropic and Google may need to share the spoils if their team strategy pays off, but it may be a small sacrifice compared to losing to overseas startups. And if their approach is successful, other companies may seek out allies of their own with the blessing of their respective national governments.
How Could the AI Industry Be Impacted in the Long Run?
OpenAI, Anthropic and Google may be setting a precedent with their decision to unite against distillation attacks, marking a shift from an open environment to one divided by firmer boundaries and regulations.
Stronger Security Measures
Limited rollouts may just be the beginning of more extensive safeguards, and Anthropic continues to lead on this front. The company recently announced Project Glasswing, an initiative that unites tech leaders like Apple, Nvidia, Google and Amazon Web Services. Because of Claude Mythos’ uncanny knack for “finding and exploiting software vulnerabilities,” Anthropic plans to release the model only to its Glasswing partners, who will experiment with Mythos and share their findings with the rest of the industry.
This approach may become standard protocol as AI models pose more daunting cybersecurity risks and existential threats. Although DeepSeek painted a promising future in which open-source AI could level the playing field, tech leaders seem to be heading in the opposite direction and reducing access in the name of societal safety. It’s a move that could spark debate among everyday consumers and even U.S. policymakers — especially those considering how to balance American tech innovation with national security.
Stricter Regulations
As part of his deregulatory stance on artificial intelligence, President Trump has maintained the AI chip trade with China despite national security concerns among members of his own party. His overall vision has largely been accepted by the federal government up to this point, but that might be about to change as fears over China take hold of Congress.
A trio of U.S. senators has introduced a bipartisan bill to strengthen U.S. export controls and prevent “adversaries” from accessing semiconductor technology through the United States or its partners. If passed, this act would directly contradict Trump’s decision to trade AI chips with China and further cut off Chinese companies from American technology.
If Chinese startups did indeed distill U.S. companies’ AI models, it seems that neither America’s tech leaders nor its policymakers want to do these startups any more favors. As AI becomes a critical component of national security and sovereignty, more regulations could be in the works to ensure U.S. tech companies don’t spill any secrets that could compromise the country’s lead over China and the rest of the world.
Industry collaboration and global innovation might suffer, but the United States has made it clear that these are secondary when it comes to winning the AI race.
Frequently Asked Questions
What is “distillation” in the context of AI?
In the artificial intelligence industry, distillation refers to training a smaller AI model on the outputs of a larger, pre-trained model. The goal is for the smaller model to learn from the larger model’s responses, mirroring its capabilities at a fraction of the cost. This approach allows companies to produce affordable, more compact models that can still compete with those built by major AI companies.
Why are OpenAI, Anthropic and Google working together?
OpenAI first accused Chinese startup DeepSeek of “inappropriately” distilling its GPT models to train DeepSeek-R1. Anthropic and Google later called out adversarial distillation attacks as well, with Anthropic naming several Chinese startups as the main perpetrators. In response, the three companies have agreed to team up and share information on how to counter these attacks through the Frontier Model Forum.
How could this partnership change the AI industry?
Beyond their partnership with Google, Anthropic and OpenAI are both planning limited releases of their latest products to minimize security risks. Anthropic has also founded Project Glasswing, narrowing the release of its Mythos model to its Glasswing partners. These decisions reflect a more cautious approach that could become the norm in the AI industry, especially as companies develop more powerful models that rivals could distill to build potent software and weapons for other nations.
