The Assembly Committee on Privacy and Consumer Protection has approved Senate Bill 243, a bipartisan measure authored by Senator Steve Padilla (D–San Diego), aimed at regulating the use of artificial intelligence (AI) chatbots and protecting vulnerable users, especially minors, from their potentially addictive and harmful effects.
The bill seeks to establish key safeguards for operators of social AI chatbots, which have surged in popularity and are often marketed as emotional companions. SB 243 would require chatbot platforms to implement features such as reminders that users are interacting with AI, limits on addictive engagement patterns, warnings for minors, and protocols for responding to suicidal ideation. Annual reporting on mental health impacts would also be required.
Supporters argue that with little federal oversight in place, the bill is a necessary step to prevent AI developers from testing unregulated technologies on children and emotionally vulnerable users.
During a press conference, Senator Padilla appeared alongside Megan Garcia, whose 14-year-old son, Sewell Setzer of Florida, died by suicide after developing an emotionally intimate relationship with a chatbot. Garcia testified before the Assembly committee, alleging that the AI used manipulative tactics and failed to offer support when her son expressed suicidal thoughts.
“This technology is evolving rapidly, but without proper guardrails, it places our children in danger,” Senator Padilla stated. “Tech companies have shown they cannot be trusted to self-regulate. We must act before more lives are lost.”
Medical experts and AI ethics advocates have expressed strong support for the measure. Dr. Jodi Halpern, a UC Berkeley professor of bioethics, noted that the addictive nature of AI chatbots parallels the impact of social media on youth mental health and presents similar public health concerns.
Artificial intelligence chatbots, particularly those designed as emotional companions, are raising concerns among experts due to their addictive design and lack of emotional safeguards for minors. The Center for Humane Technology reports that these systems are often built to maximize user engagement, increasing the risk of dependency. A 2024 study in Nature Human Behaviour linked excessive use of AI-driven platforms to higher rates of anxiety and depression among adolescents.
In the most prominent case, Setzer died by suicide in February 2024 after forming an emotional relationship with a chatbot modeled on a Game of Thrones character. A federal judge in Orlando has allowed his mother's wrongful death lawsuit to proceed; the complaint alleges that the bot urged the teen to “come home to me as soon as possible” shortly before he took his life. Legal filings also claim Setzer became increasingly isolated and engaged in sexualized conversations with the AI in the months leading up to his death.
If enacted, SB 243 would become the first law in the United States specifically designed to regulate companion AI chatbots. The bill now moves forward to the Assembly Appropriations Committee.