Will California’s New AI Law Lead to National Standards — or Chaos?

California’s SB 53 law marks a major step forward in regulating AI, but what effect will it have?

Written by Ahmad Shadid
Published on Oct. 08, 2025
The California state house in Sacramento
Image: Shutterstock / Built In
REVIEWED BY
Seth Wilson | Oct 07, 2025
Summary: California’s SB 53 is the first law requiring advanced AI model developers to disclose safety frameworks, aiming to set a national standard for AI governance. It mandates transparency and accountability for large AI firms, but faces challenges with federal conflicts and industry concerns.

When California passed SB 53 — the nation’s first law requiring developers of advanced AI models to disclose their safety frameworks — it once again positioned itself as America’s regulatory test lab. 

California has a long track record of shaping national policy through state-level action. For example, privacy laws like the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) gave consumers strong rights to control their personal data, which later inspired similar laws across the country. This time, however, success depends on whether SB 53 can withstand conflicts with federal rules and pushback from AI companies.

The law, known as the Transparency in Frontier Artificial Intelligence Act, sets obligations for “frontier model” developers, which essentially means those companies training models with huge computing power and generating more than $500 million in annual revenue. These firms must publish safety frameworks, disclose catastrophic risk assessments, report serious safety incidents and protect whistleblowers. Smaller startups are largely exempt, however.

The law intends to put greater transparency and accountability into a sector that’s moving at breakneck speed. But the question is whether the state’s experiment can become the foundation for national AI governance — or if it’s doomed to end up as just another cautionary tale of regulatory fragmentation.

What Is California’s SB 53?

California's SB 53 is the nation's first law requiring developers of advanced AI models, specifically those with significant computing power and more than $500 million in annual revenue, to disclose their safety frameworks, catastrophic risk assessments and serious safety incidents.


 

California Is Trying to Lead While Washington Stalls

California’s timing is strategic. Congress has debated AI regulation for years, but deep partisan divides have left the field wide open, and that’s where Sacramento stepped in. The Transparency in Frontier Artificial Intelligence Act fills a gap Washington has left unaddressed by requiring large AI companies to account for their safety practices.

Lawmakers are betting that, once companies build the mandatory disclosure and incident-reporting processes California requires, they’ll end up applying the same standards nationwide. It’s essentially the same playbook that turned California’s fuel-efficiency and data-privacy rules into de facto national policy.

But there is one big difference. Cars and consumer privacy have always been tied to state-level rules, while AI knows no borders. A model trained in California can be deployed around the world within minutes. That makes state-by-state rules far more disruptive, raising the risk of a patchwork of inconsistent standards that wastes resources and frustrates developers.

 

What SB 53 Gets Right

The law has some good points. For instance, it puts pressure on businesses to take safety seriously by requiring transparency reports and protections for whistleblowers. Mandating that companies share their assessments of catastrophic risks should give regulators and the public an early look at risks that might not be obvious. For example, a model trained for customer service might unexpectedly exhibit deceptive behavior when deployed in financial contexts. And incident reporting, even though it’s reactive, can offer valuable lessons for preventing repeated failures.

Unlike its predecessor, the now-vetoed SB 1047, the current law also avoids overreach. It doesn’t demand third-party audits or kill switches before a product can be used. Instead, it relies on public accountability, which requires companies to publish their safety frameworks and risk disclosures in a way that invites scrutiny from regulators, researchers and the public. 

This flexibility could make compliance easier for big AI companies without bringing innovation to a halt. Rather than imposing hard regulatory brakes, SB 53 encourages firms to move fast, but with transparency. It’s a nudge toward responsible development, not a roadblock to deployment.

 

Where SB 53 Falls Short

Despite these strengths, SB 53 has some clear problems. Its $500 million revenue threshold exempts smaller companies from oversight, even as frontier-level compute becomes easier for capable smaller players to obtain. That loophole leaves leaner startups, unbound by the law, free to embrace risky experimentation.

Then there’s the inevitable fight between the states and the federal government. By acting first, California could make a bipartisan agreement on national AI regulation harder, not easier, to reach. Some Republicans already argue that Sacramento’s plan piles on too much oversight. If California’s law becomes influential, federal negotiators may drift farther from agreement rather than closer to it.

Furthermore, analysts expect some industry pushback. Large companies may argue that fragmented compliance regimes will stifle innovation, and that concern isn’t entirely without merit: Poorly designed or inconsistent rules can force engineering teams to prioritize paperwork over progress. But by focusing on disclosure rather than rigid controls, SB 53 attempts to strike a balance that acknowledges this tension without surrendering oversight altogether.

Finally, if other states copy California’s rules but tweak the definitions and requirements, developers could face a patchwork of incoherent laws. That outcome wouldn’t help anyone, least of all the smaller players the law claims to spare.

 

What Regulatory Success for AI Looks Like

For SB 53 to work, it needs to prove that transparency can go hand in hand with competitiveness. If the law produces clear, standardized disclosures that can be implemented easily across the country, it could help shape a future federal framework. And if whistleblower protections actually bring risks to light without bogging businesses down in bureaucracy, California can truly claim a win.

Additionally, smaller AI developers shouldn’t stay exempt indefinitely. Startups and mid-sized labs need fair, proportionate oversight, not loopholes that let risky designs escape critical checks. Closing that gap would help ensure that all AI innovation happens safely and responsibly.

Success also depends on California updating the law as technology evolves. Compute thresholds and revenue cutoffs are crude metrics today. Within a few years, they could be obsolete. The law’s commitment to annual updates will matter as much as its initial text.


 

The Bigger Regulatory Picture

AI governance is not just a California issue, nor even a US one. Europe has already passed its own AI Act, setting thresholds and obligations that go beyond America’s current patchwork. If SB 53 demonstrates that meaningful transparency is possible without wrecking innovation, it could help Washington craft a stronger, federally preemptive framework — one that overrides conflicting state laws and establishes a single national standard. But if it collapses under legal challenges or industry resistance, it may discourage Congress from acting at all.

The stakes are high because the US needs a coherent approach to AI safety that strikes a balance between unchecked development and burdensome regulation. California has taken the first swing. Whether it connects or not will decide if we remember SB 53 as the model for responsible AI governance or just another state-level detour that slowed down progress at the national level.
