What You Can Learn From Anthropic’s No Good, Very Bad Week

Anthropic faced three major incidents in late March, all of which illustrate the consequences of neglecting the infrastructure underlying AI systems.

Written by Ilman Shazhaev
Published on Apr. 07, 2026
Reviewed by Seth Wilson | Apr 06, 2026
Summary: Anthropic faced three major incidents in late March: a 3,000-file data leak, the accidental public release of Claude Code’s full orchestration logic, and a massive copyright takedown error. These events highlight a dangerous gap between rapid AI scaling and weak operational infrastructure.

Anthropic had a rough week at the end of March. First, a misconfigured content management system left nearly 3,000 internal files publicly accessible. Five days later, a routine software update accidentally bundled the full source code of Claude Code into a public package. Within hours, developers had mirrored roughly 512,000 lines of orchestration logic, memory systems and workflow architecture.
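Packaging errors like this are usually caught, if at all, by a pre-publish gate. The sketch below is a minimal, hypothetical example of such a check, not Anthropic's actual tooling: the file names and allowlist are invented for illustration. It scans a package's file manifest against an explicit allowlist and blocks the release if anything outside it would ship.

```python
# Hypothetical pre-publish gate: fail the release if the package
# manifest contains files outside an explicit public allowlist.
from pathlib import PurePosixPath

# Only these top-level paths are ever allowed in a public package.
PUBLIC_ALLOWLIST = {"dist", "README.md", "LICENSE", "package.json"}

def check_manifest(files: list[str]) -> list[str]:
    """Return the files that would leak if this package shipped."""
    leaks = []
    for f in files:
        top = PurePosixPath(f).parts[0]
        if top not in PUBLIC_ALLOWLIST:
            leaks.append(f)
    return leaks

# Invented manifest for a release that accidentally bundles source.
manifest = [
    "dist/cli.js",
    "README.md",
    "src/orchestrator/memory.ts",    # internal source swept in by mistake
    "src/orchestrator/workflow.ts",
]

leaked = check_manifest(manifest)
if leaked:
    print("BLOCKED: internal files in public package:")
    for f in leaked:
        print("  -", f)
```

The point is not the specific check but where it sits: a deny-by-default list enforced in the pipeline, rather than trusting each routine update to exclude the right directories.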

There was also an unrelated hiccup with a copyright takedown request that reached more than 8,000 repositories instead of 96, which the company later corrected.

Anthropic is one of the most capable and well-resourced AI labs in the world, which is exactly what makes three operational incidents in a single week worth paying attention to. If this kind of problem can happen there, the rest of the industry should be asking harder questions about their own foundations.

Why the Anthropic Leaks Matter

The recent exposure of Claude Code’s orchestration layer marks a shift in the AI industry. While model quality (benchmarks and reasoning) is often the focus, Anthropic’s leak revealed that the real competitive advantage lies in the workflow logic, memory management and tool integration that make a model operationally viable. With this playbook now public, the industry must prioritize hardening the infrastructure and release pipelines to protect the logic that actually drives commercial AI.


AI's Competitive Edge Sits Above the Model

Most of the AI industry still talks about competition in terms of model quality: better benchmarks, bigger context windows, smarter reasoning. That framing is not wrong, but it is increasingly incomplete.

The Claude Code leak made that visible. What reached the public was not the model itself but the full orchestration layer around it. That layer, which comprises the workflow logic, memory management, tooling integrations and agent coordination, is where a foundation model becomes operationally viable.

What makes this particularly unfortunate is that Anthropic understood this better than most. The company's published guide on building effective agents explicitly argues that composable orchestration and tool design carry more weight than model performance alone. It wrote the playbook on why this layer matters, then found that layer unexpectedly exposed.

That framework is now public in a way they never intended. Every company building agentic AI products now has a detailed reference for how the market leader structures its harness, manages memory and coordinates tool access.

The model underneath is still important, but the decisions about how to orchestrate it, manage context and coordinate tools are what make an AI product commercially viable. With that logic now public, Anthropic's competitors can study and replicate years of engineering decisions that previously gave Claude Code a meaningful lead in the market.


The Next Wave of AI Must Be Built Differently

The deeper problem is that the industry was never set up to protect the orchestration layer in the first place. The first phase of AI building was defined by speed, scale and capital. Train the biggest model, raise the most money, ship the fastest. That playbook delivered capability, but it left the operational foundation dangerously thin.

I’ve spent years building and scaling infrastructure across industries, and what I see happening in AI is a pattern I have seen before. Capability is growing faster than the operational foundation underneath it. 

These systems now execute code, access file systems and connect to third-party services with real privileges inside production environments. And the infrastructure supporting them has not kept up.

The Anthropic leak started with a packaging error in a routine update. That same week, the widely used Axios npm package was compromised through an account takeover, and a supply chain attack on LiteLLM, an AI infrastructure library with millions of daily downloads, led to the exposure of training data from multiple major AI labs. 

Three separate incidents in one week, all rooted in the same gap between how fast AI infrastructure is scaling and how little attention goes into hardening the systems that deliver it.

There is a better way to build, and I've started calling it Fortress Compute. The premise comes from the same lessons I learned building physical infrastructure: design everything with the assumption that components will fail, and architect the system to absorb those failures when they do. Physically hardened, architecturally redundant, with clear boundaries between what must stay sovereign and what can tolerate exposure during disruption.

For an AI company, that means treating your orchestration logic with the same access controls you apply to model weights. It means release pipelines that require multiple sign-offs before anything touches a public registry, workload classification that separates sensitive IP from general infrastructure and containment architecture that prevents one failure from cascading across the system. 
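In practice, those controls reduce to a policy check in the release pipeline. Here is a minimal sketch of that idea; the tier names, approval thresholds and registry labels are invented for illustration, not any specific company's policy:

```python
# Hypothetical release gate: sensitive workloads need more sign-offs,
# and the most sensitive tier can never reach a public registry.
from dataclasses import dataclass

# Invented classification tiers and their minimum approval counts.
REQUIRED_APPROVALS = {"public": 1, "internal": 2, "sovereign": 3}

@dataclass
class Release:
    artifact: str
    tier: str          # workload classification of the artifact
    approvals: int     # sign-offs collected so far
    target: str        # "public-registry" or "private-registry"

def may_publish(r: Release) -> bool:
    # Sovereign IP (e.g. orchestration logic) never leaves private infra.
    if r.tier == "sovereign" and r.target == "public-registry":
        return False
    return r.approvals >= REQUIRED_APPROVALS[r.tier]

releases = [
    Release("cli-update", "public", approvals=1, target="public-registry"),
    Release("orchestration-core", "sovereign", approvals=3, target="public-registry"),
]
for r in releases:
    print(r.artifact, "->", "publish" if may_publish(r) else "blocked")
```

The design choice that matters is the first branch: classification is not just a label for audits but a hard boundary the pipeline enforces, so no number of sign-offs can route sovereign IP to a public registry.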

The Anthropic leak spread as far as it did because none of these boundaries were in place.

The economics of running AI at scale make this shift unavoidable. Inference costs and centralized GPU dependency are the real constraints on scaling, and fundraising does not change the underlying math. Infrastructure cycles always correct eventually, and the operational foundation you have at that point is all that matters.

The AI industry has been given a very public preview of what that correction looks like. It would be a waste not to build accordingly.
