After Gov. Newsom’s Veto, Is an AI Law in California Still Possible?

The AI safety bill, which California Governor Gavin Newsom vetoed, failed to solve the serious problems AI brings. Here’s what he needs to do now.

Written by Cliff Jurkiewicz
Published on Nov. 06, 2024

Despite strong backing from names like Elon Musk and Jane Fonda, the sweeping, closely watched Safe and Secure Innovation for Frontier Artificial Intelligence Models Act drew solid opposition from Big Tech. California Governor Gavin Newsom’s veto of the bill is only the beginning of the story of such a law taking hold in the state. A revised bill could very well become law next time, with major implications for AI developers.

What Is SB 1047?

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act put responsibility on the shoulders of the creator of the application that uses AI. That is far different from other state laws, which hold only the user responsible. Imagine someone buying a car with a major defect: under those laws, the buyer would be responsible for the defect. Under SB 1047, the manufacturer would be.

While other states already have AI laws on the books, California plays an outsized role. Many well-known AI companies are headquartered there. It’s what people think of when they hear “Silicon Valley.” It stands to reason that the state would sit at the epicenter of the AI regulatory debate. So what happens post-veto?

The situation in California was a textbook example of lawmakers biting off more than they could chew. SB 1047 may have been well-intentioned, but it failed to solve fundamental problems such as explainability, transparency and protection from harm. Some state laws accomplish that, yes, but they don’t specifically cover AI.

There are targeted safety standards for food, medical devices and hundreds of other consumer products. We need standards for AI too.

2 Top AI Issues to Address Right Now

With only a presidential executive order at the federal level and a hodgepodge of AI laws in other states, California is primed to fill a huge legislative void. The state senator who sponsored the defeated SB 1047 has said he wants to work with a panel of experts appointed by Newsom on a refashioned measure that could garner the governor’s signature next year.

Work should begin immediately. We can’t wait until January when the legislature reconvenes.

If Governor Newsom wants to show global leadership on this issue, he can start by clamping down on two bookend issues.

Harm Mitigation

The first is harm mitigation, and it should be non-negotiable. A bill should prevent harmful AI-generated content (text, images, audio and video), such as deepfakes created to exact revenge on or demean someone.

Colorado’s law, which takes effect February 1, 2026, creates a mandatory regulatory framework requiring developers and deployers of “high-risk AI systems” to mitigate consumer harm and algorithmic discrimination. LinkedIn and other tech groups came out against Colorado’s bill, but Colorado Governor Jared Polis signed it anyway, hoping the legislature would make the necessary tweaks before the law kicks in.

Colorado’s law targets AI systems used to make consequential decisions in industries such as employment, financial services, healthcare and legal services.

California could have led the way and become one of the first states to specifically address harm. This was a missed opportunity.

Transparency and Disclosure

The second bookend issue is transparency and disclosure. Many of us who create this technology are not living up to our responsibility to educate others about its benefits. We just expect people to buy it and trust us.

The tech industry and state lawmakers have a responsibility to educate and peel back all the layers, as trying and frustrating as that may be. It is our duty to show non-experts how the technology affects their day-to-day lives, and to what benefit.

Take job applications. One school of thought says AI should be allowed to decide whether someone gets a job. In reality, only humans can make that decision in a meaningful way.

What AI does do is bring automation, personalization and efficiency to a process that has needed it for a long time. Governor Newsom, as a leader, should ensure these two bookend issues are in a revised bill and sign it into law.

Encourage Innovation, Don’t Punish It

California will likely have an AI law next year, and small developers especially need to prepare. In his veto letter, Newsom said SB 1047 could give a false sense of security about controlling AI because it targets only the bigger, wealthier developers. It appears he expects a revised bill to place the burden on small companies rather than large ones, which would essentially crush innovation and drive small businesses elsewhere.

Politics may be driving this. Newsom’s term ends in 2027, and he may be eyeing a run for higher office. But his veto could come back to haunt him: a survey of California voters found that 40 percent said they would be less likely to vote for him in a presidential primary if he vetoed SB 1047. That’s telling.

The governor and the legislature should come up with a bill that keeps California the vanguard of innovation while striking the right balance with safety. It can be done. It has to be done.

The lack of common ground on safety measures for AI, coupled with the absence of dynamic legislation that can scale with the technology, reinforces people’s fear that AI is out of control and that too many big players are defining the narrative. California has a ripe opportunity to dispel those notions.
