Most states have adopted some form of artificial intelligence legislation in the past several years. Targeting everything from deepfakes to government use cases, these laws were once seen as a stopgap until federal regulation arrived, but federal action seems increasingly unlikely under President Donald Trump.
Backed by venture capitalist David Sacks, the White House’s AI and cryptocurrency czar, and bolstered by newfound relationships with AI executives, Trump has not only championed deregulation but also threatened states’ ability to pass AI legislation of their own. And several prominent companies have formed their own AI super PACs to help ensure pro-AI candidates are in office, where they can either torpedo or reshape any AI-related bills that get proposed.
Which States Have Passed AI Regulations in 2025?
Thirty-eight states have adopted more than 100 AI laws in the first half of 2025, according to the National Conference of State Legislatures. Two of the most notable state laws are California’s Transparency in Frontier Artificial Intelligence Act (Senate Bill 53) and the Texas Responsible AI Governance Act (HB 149).
In the absence of federal regulations, state legislatures across the country have taken up the cause on their own, becoming ad hoc laboratories for AI policy. It’s not uncommon for states to lead the way in regulating an emerging technology, as they may be the first to feel its harmful effects — like local teens taking their lives after talking to a chatbot, or biased facial recognition systems leading to wrongful arrests. And it’s possible that this policy work at the state level will lay the groundwork for federal legislation in the future. The California Consumer Privacy Act of 2018 and Illinois’ Biometric Information Privacy Act of 2008, for example, both established technological standards that were later used in policy proposals at the national level.
With their legislative authority hanging in the balance, these states have picked up the federal government’s slack and taken action on critical AI safety issues, such as child exploitation, discrimination and consumer protection. Let’s dive into how some states are forging ahead with AI laws in lieu of federal action.
The Federal Push for AI Deregulation
During his first days back in the White House, President Trump revoked former President Joe Biden’s executive order calling for AI guardrails. Six months later, Trump unveiled his AI Action Plan, a set of policy proposals that aim to repeal “onerous” AI regulations while cracking down on AI models the administration believes to be tainted by ideological bias.
Trump’s AI Action Plan also threatened to withhold funding from states that pass “burdensome AI regulations.” That idea was first proposed in the president’s Big Beautiful Bill, but the Senate cut it from the final version. The AI Action Plan is just a framework, though, and its provisions are merely recommendations for agencies and legislators to consider for further action. For now, states are free to develop AI regulation without fear of federal retaliation.
However, Texas Senator Ted Cruz, who spearheaded the moratorium on state AI regulations as chair of the Senate Commerce Committee, told Politico in September that the proposal is “not at all dead,” despite the Senate’s 99-1 vote to strike it. In the meantime, he has pursued further deregulatory measures, including the creation of a regulatory “sandbox” that would allow companies the flexibility to test and launch new AI technology without being subjected to certain federal regulations. AI developers’ requests to modify or waive regulations would be evaluated by the Office of Science and Technology Policy. Cruz stated that his SANDBOX Act would include safeguards to reduce fraud and risks to health and public safety, but offered no further details on what those provisions would entail.
Cruz has warned that a patchwork of inconsistent or overly burdensome regulations could impede the development of AI technology. And if the U.S. lags in AI innovation, he contends, China could quickly overtake it as a global leader in the technology.
“If the United States fails to lead, the values that infuse AI development and deployment will not be American ones, but the values of regimes that use AI to control rather than to liberate,” Cruz said. “If China wins the AI race, the world risks an order built on surveillance and coercion.”
States Take Up the Mantle in AI Policy
Hundreds of AI laws have been passed across the United States over the past several years, with many focused on child exploitation, deepfakes, privacy and how the technology should be used in the public sector.
In 2024, states enacted 113 AI-related laws, according to the Business Software Alliance. Most notably, Colorado passed Senate Bill 24-205, which requires developers to “use reasonable care” in protecting consumers from “any known or reasonably foreseeable risks” of algorithmic discrimination in education, employment and government services. The law was originally slated to take effect February 1, 2026, but lawmakers agreed in September 2025 to delay its implementation until June 30, 2026.
States have continued to forge ahead with AI regulations in 2025, even as the Trump administration pushes for deregulation. In the first half of 2025, more than 100 laws were adopted by 38 states, according to the National Conference of State Legislatures. These are some of the most consequential ones to gain momentum since Trump announced his AI Action Plan:
California’s Transparency in Frontier AI Act
Home to 32 of the top 50 AI companies, California is a hub for AI innovation. The state is also becoming a regulatory leader in the industry. In 2024, it required AI companies to document the data used in model training, mark AI-generated content and offer AI detection tools. And it has made even more progress in 2025.
In September, California passed Senate Bill 53, also known as the Transparency in Frontier AI Act, which imposes safety and transparency requirements on the developers of frontier AI models. The law applies only to companies that use large quantities of computing power and earn more than $500 million in revenue, requiring them to disclose how they’ve incorporated national, international or industry best practices into their AI safety framework. They will also need to explain how they assess potential catastrophic risks and how they mitigate those risks. The law creates a mechanism for companies to report safety incidents, too, and provides protections for whistleblowers who report those issues.
California Gov. Gavin Newsom vetoed a previous version of the bill last year and solicited recommendations from AI experts and academics for the narrower version that he ultimately signed into law. Anthropic endorsed the bill, but OpenAI expressed concern that it could duplicate or conflict with other laws. After SB 53 was signed, though, OpenAI said it was pleased that the bill created a “pathway to harmonization” between federal and state government oversight, as it requires ongoing updates to align with national and international law.
Newsom’s office suggested in a press release that the Transparency in Frontier AI Act may even serve as a blueprint for future federal regulation.
“This legislation is particularly important given the failure of the federal government to enact comprehensive, sensible AI policy,” Newsom’s office said. “SB 53 fills this gap and presents a model for the nation to follow.”
New York’s Responsible AI Safety and Education (RAISE) Act
In June 2025, the New York State Legislature passed a bill requiring transparency and safety commitments from frontier model developers, but Gov. Kathy Hochul has yet to sign it.
The Responsible AI Safety and Education (RAISE) Act targets the industry’s biggest players, requiring companies that have spent more than $100 million on AI training to publish their safety and security protocols. It also mandates that these companies assess potential risks and refrain from releasing models that create an “unreasonable risk of critical harm.” To ensure accountability, the act requires developers to disclose serious safety incidents and allows the state attorney general to seek civil penalties against those who violate the rules.
The RAISE Act has been targeted by AI sector lobbyists, but Assemblyman Alex Bores, a co-author of the bill, has said it “simply ensures that [AI companies] keep the promises” they have voluntarily made in the past.
Texas’ Responsible AI Governance Act
In June 2025, Texas Gov. Greg Abbott signed House Bill 149, also known as the Texas Responsible AI Governance Act, which aims to prevent the harms of AI misuse. Under the law, companies cannot capture or store a person’s biometric data without their consent, and they cannot create AI systems designed to manipulate human behavior, make discriminatory decisions or produce sexually explicit deepfake content involving children. Government agencies, meanwhile, must disclose to consumers when they are interacting with artificial intelligence.
Violators of the law could be fined up to $100,000 per violation, or held liable for a civil penalty of up to $200,000. The law, which takes effect in January 2026, also creates a regulatory sandbox overseen by Texas’ newly established Artificial Intelligence Advisory Council.
California’s Chatbot Companion Law
In October 2025, Newsom signed Senate Bill 243, which implements safeguards for companion chatbots like those offered by Character.AI and Replika. Prompted by the suicides of teenage chatbot users, the law requires AI companies to develop a protocol for detecting and responding to comments about suicide or self-harm.
These chatbots must disclose to users that they are talking to an AI and that the platform may not be suitable for minors. Companies must also implement reasonable measures to prevent the chatbot from producing sexually explicit content or encouraging sexually explicit activity. During extended sessions, the chatbot must remind minor users every three hours that they’re interacting with an AI and suggest they take a break. Significantly, the law allows individuals to sue companies if they suffer an injury as a result of noncompliance — a provision not often seen in other AI legislation.
Frequently Asked Questions
What is the current stance of the Trump administration on AI regulation?
President Donald Trump revoked former President Joe Biden’s executive order calling for AI guardrails, and his AI Action Plan aims to repeal “onerous” AI regulations while cracking down on AI models believed to be tainted by ideological bias. The plan also recommended withholding funding from states that pass “burdensome AI regulations,” but this measure is unpopular with senators and unlikely to result in legislation.
What is the proposed “regulatory sandbox” for AI companies?
Texas Senator Ted Cruz proposed creating a regulatory “sandbox” that would allow companies the flexibility to test and launch new AI technology without being subjected to certain federal regulations. Requests to modify or waive regulations would be evaluated by the Office of Science and Technology Policy. The SANDBOX Act is still in the very early stages of the legislative process, and it has not been passed by the House or Senate yet.
