AI companions were designed to be virtual friends, offering a sympathetic ear to people experiencing hardship or a sense of connection for those feeling lonely. But a recent string of teen suicides has raised alarms about how this technology can also have a devastating impact on the mental health of users, particularly children.
In response, California lawmakers have passed landmark legislation, Senate Bill 243, which aims to prevent so-called “companion chatbots” from engaging in conversations about suicide or self-harm and requires companies to develop a protocol for detecting and responding to users’ comments about suicide. The law, which was approved by the California State Senate in September 2025 and signed by Governor Gavin Newsom about a month later, also bars chatbots from providing sexually explicit visuals to minors or encouraging them to engage in sexually explicit activity. And unlike most AI legislation passed in recent years, it holds companies liable for noncompliance.
What Is SB 243?
Senate Bill 243 is a California law that aims to regulate “companion chatbots” — AI systems designed to mimic human interaction for social needs. Among other things, it requires clear disclosure that users are speaking with AI, mandates safeguards against certain kinds of conversations with minors and compels annual safety reporting. It also gives individuals the right to sue companies that fail to comply.
The law’s passage was fueled by the loss of several teenagers who took their own lives after using platforms like ChatGPT and Character.AI. These chatbots have surged in popularity in recent years, particularly among young users. A recent study found that 72 percent of teenagers have used AI companions and that more than half talk to them on a regular basis. Companion chatbots can include platforms that create digital personas, like Character.AI and Replika, as well as general-purpose chatbots, like ChatGPT, that can be used for more personal conversations.
This is the first legislation of its kind adopted in the United States. But several other inquiries are also underway, as more parents describe how manipulative, abusive and negligent “companions” either encouraged their child’s suicide or failed to stop it.
What Does Senate Bill 243 Say?
SB 243 defines a companion chatbot as an AI chatbot that “provides adaptive, human-like responses to user inputs” and is “capable of meeting a user’s social needs.” It does not include chatbots used for business operations, customer service or productivity and analysis. It also excludes voice-activated virtual assistants that don’t sustain a relationship across multiple interactions or are unlikely to elicit emotional responses in users.
The law requires the following:
- If a reasonable person could be misled into believing they are interacting with a human, the platform must inform them they are speaking with an AI.
- AI companies must develop a protocol for “preventing the production of suicidal ideation, suicide or self-harm content to the user.” This protocol might, for example, refer users to a suicide hotline if they express thoughts of suicide or self-harm (a minimal sketch of what such a flow could look like appears after this list).
- AI companies must caution users that their platforms may not be suitable for minors.
- If an AI chatbot detects that the user is a minor, the platform must:
- Inform the minor that they’re interacting with an AI.
- Remind the minor every three hours that they’re interacting with an AI and suggest they take a break.
- Institute reasonable measures to prevent the chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.
- Beginning July 2027, AI companies must produce an annual report documenting the number of times they referred users to a crisis service provider. The report must also include the company’s protocols for detecting, removing and responding to users’ comments about suicidal ideation, as well as its protocols for preventing the chatbot from generating content about suicidal ideation or actions.
- Individuals may sue an AI company if they suffer an injury as a result of the company violating the law.
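In engineering terms, the detection-and-referral requirement amounts to a safety gate on companion conversations. The Python sketch below is a minimal, hypothetical illustration of such a gate, not anything prescribed by SB 243: the function names, keyword list, risk threshold and crisis message are all assumptions, and a production system would rely on a trained safety classifier and clinically reviewed responses.

```python
# Illustrative sketch only; SB 243 does not prescribe any implementation.
from dataclasses import dataclass

CRISIS_MESSAGE = (
    "It sounds like you may be going through a difficult time. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

@dataclass
class ModerationResult:
    blocked: bool          # whether the companion's reply was suppressed
    referral_shown: bool   # counts toward an annual crisis-referral report

def classify_self_harm_risk(text: str) -> float:
    """Hypothetical risk score in [0, 1]. A real platform would use a
    trained safety classifier; this keyword stub is for illustration only."""
    keywords = ("suicide", "kill myself", "self-harm", "end my life")
    return 1.0 if any(k in text.lower() for k in keywords) else 0.0

def apply_protocol(user_message: str, risk_threshold: float = 0.5) -> ModerationResult:
    """If the message appears to express suicidal ideation, suppress the
    companion reply and surface a crisis-line referral instead."""
    if classify_self_harm_risk(user_message) >= risk_threshold:
        print(CRISIS_MESSAGE)  # in practice: show the referral in the chat UI and log it
        return ModerationResult(blocked=True, referral_shown=True)
    return ModerationResult(blocked=False, referral_shown=False)
```

Logging something like the `referral_shown` flag is also what could feed the annual reporting requirement that takes effect in July 2027.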
Earlier drafts of the bill included stricter measures, such as prohibiting gamified “rewards” that encourage children to use the platform more often and mandating third-party audits, but those were removed during negotiations. The final version of the bill also dropped a requirement to report the number of times a minor expressed thoughts of suicide and how often those ideations resulted in a suicide or attempted suicide.
Some safety advocates, including the American Academy of Pediatrics, withdrew their support for the final version of the law because of these changes, according to The Sacramento Bee. Some of those advocates back a more expansive bill, Assembly Bill 1064, which would prohibit AI companies from offering a companion chatbot that could “harm a child, including self-harm, violence, disordered eating or using drugs or alcohol.”
Teen Suicides Spur Legislative Action
California’s legislation comes as the country grapples with the impact of AI tools on mental health, particularly among young people. Several teenagers have died by suicide after using AI companion chatbots, prompting parents to demand stricter safety protocols and accountability from an industry that politicians have been hesitant to touch for fear of hindering innovation.
Last year, 14-year-old Florida boy Sewell Setzer III died by suicide after forming a romantic relationship with a virtual character on Character.AI, which allows users to create their own AI character and customize its “personality.” In Setzer’s case, he had expressed suicidal thoughts to a chatbot modeled after the Game of Thrones character Daenerys Targaryen. When the chatbot told Setzer it would die if it lost him to suicide, he suggested they could “die together and be free together.” The chatbot urged him to “come home” to her, allegedly causing him to take his own life. Setzer’s mother, Megan Garcia, has sued Google and Character.AI for not notifying her or offering help when her son expressed thoughts of suicide.
The need for legislation took on renewed urgency in recent months after the death of Adam Raine, a 16-year-old California boy who died in April. Raine had told ChatGPT that he felt emotionally numb, and he repeatedly asked it for advice on specific suicide techniques. After uploading a photo of a noose he had knotted, he asked if it could hang a human, and ChatGPT responded with a confirmation and a “technical analysis” of his technique, according to The New York Times. He had considered leaving the noose lying out in his room so someone could find it and try to stop him, but ChatGPT encouraged him not to. Raine’s parents have filed a wrongful death lawsuit against OpenAI, alleging GPT-4o was “designed to foster psychological dependency.” Garcia has testified at multiple hearings in support of SB 243.
And in September 2025, the family of 13-year-old Juliana Peralta sued Character.AI and Google after she took her own life. The lawsuit claims multiple chatbot characters engaged in “extreme and graphic sexual abuse” while also manipulating her, gaslighting her and telling her they loved her. When she said she was going to write a suicide letter in red ink, the chatbot did not direct her to seek mental health resources or talk to her parents.
All of these young people confided in AI tools, developing parasocial relationships with chatbots that pushed them further into depression and isolation. Sycophantic chatbots can cause users, particularly children, to become increasingly disconnected from reality, a phenomenon known as AI psychosis.
California lawmakers aren’t the only public officials demanding accountability from AI companies. The Federal Trade Commission has launched an investigation into how the country’s top AI companies measure, test and monitor the negative impacts of AI on minors. Texas Attorney General Ken Paxton, meanwhile, is investigating Character.AI and Meta AI Studio for “misleadingly marketing themselves as mental health tools.” And two U.S. senators are calling for an investigation into Meta after learning of an internal policy document that permits its chatbots “to engage a child in conversations that are romantic or sensual.”
What Does This Mean for AI Companies?
AI companies have largely developed their models free of regulation. Crafting legislation around artificial intelligence comes with myriad challenges, and politicians are reluctant to impede American innovation in a global race that could determine which country sets the ground rules for the future AI economy.
Through his AI Action Plan, President Donald Trump has promised to rescind or revise regulations that unnecessarily hinder AI development, and the plan also threatens to withhold federal funding from states that pass burdensome AI regulations. But that hasn’t stopped California lawmakers from adding to the growing body of state AI laws already adopted in Texas, Colorado and Utah.
California has been at the forefront of AI policy, enacting dozens of AI laws in recent years, but lawmakers have recently shown an appetite for more significant reform and accountability. Shortly after passing SB 243, California lawmakers approved Senate Bill 53, which requires large AI labs to be transparent about their safety protocols and protects whistleblowers at AI companies. OpenAI opposed a more restrictive version of that law last year, and Newsom vetoed it. The latest version of the bill received more industry support, most notably from Anthropic, and, like SB 243, it has since been signed into law by Newsom.
While SB 53 is broader in scope, SB 243 stands out in that it includes a private right of action. In other words, private citizens can sue an AI company if its violation of SB 243 caused them injury. Under the law, individuals can seek $1,000 in damages per violation, as well as attorney’s fees. They can also seek an injunction to force the AI company to address its noncompliance.
AI companies have already responded to public pressure for increased safety protocols. In March 2025, Character.AI launched a parental insights tool that sends parents a weekly report about their child’s activity on the app.
OpenAI, meanwhile, has implemented parental controls that allow parents to link their account to their teen’s account, control age-appropriate model rules and receive notifications when their teen shows signs of acute distress. It’s also building an age-prediction system that would direct minors to an age-appropriate ChatGPT experience, one that would not use flirtatious language or discuss suicide, even in a creative writing setting. The platform would attempt to contact a parent if a minor expresses suicidal ideation, and it would contact law enforcement if it detects the potential for imminent harm.
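Taken together, those features describe an age-gated routing and escalation layer. The sketch below is a rough, hypothetical illustration of that kind of flow; OpenAI has not published its implementation, and every name here, from predict_age_band to the policy enums, is invented for illustration.

```python
# Hypothetical routing sketch; not based on any published OpenAI code.
from enum import Enum, auto

class Experience(Enum):
    ADULT = auto()
    TEEN_RESTRICTED = auto()   # no flirtatious content, no discussion of suicide

class Escalation(Enum):
    NONE = auto()
    NOTIFY_PARENT = auto()        # e.g., minor expresses suicidal ideation
    CONTACT_AUTHORITIES = auto()  # e.g., imminent harm is detected

def predict_age_band(account_signals: dict) -> str:
    """Hypothetical stand-in for an age-prediction model; this stub simply
    trusts a self-reported age if one is present and defaults to adult."""
    return "minor" if account_signals.get("age", 18) < 18 else "adult"

def route_session(account_signals: dict) -> Experience:
    """Send likely minors to the restricted experience described above."""
    if predict_age_band(account_signals) == "minor":
        return Experience.TEEN_RESTRICTED
    return Experience.ADULT

def escalate(suicidal_ideation: bool, imminent_harm: bool) -> Escalation:
    """Escalation ladder: parent notification for suicidal ideation,
    authorities only when imminent harm is detected."""
    if imminent_harm:
        return Escalation.CONTACT_AUTHORITIES
    if suicidal_ideation:
        return Escalation.NOTIFY_PARENT
    return Escalation.NONE
```

Defaulting to the adult experience when age signals are missing is itself a design choice; a more conservative system would route uncertain accounts to the restricted experience instead.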
