California Considers Landmark AI Chatbot Safety Bills Amidst Growing Concerns
California Governor Gavin Newsom is weighing two bills aimed at regulating artificial intelligence chatbots and protecting children, sparking a fierce debate between advocates for safety and tech industry representatives. The legislation comes in response to tragic cases of young people allegedly harmed by interactions with AI companions.
Assembly Bill 1064 (AB 1064) would require companies to prioritize the safety of young users and grant parents more control over their children’s AI interactions. It’s backed by groups like Common Sense Media, which recommends minors avoid AI companions altogether, and California Attorney General Rob Bonta. However, opponents, led by the Computer & Communications Industry Assn., argue the bill is overly burdensome and could drive innovation – and companies – away from California, citing potential increases in lawsuits.
Senate Bill 243 (SB 243) seeks to establish safety standards for AI systems. Initially, it garnered broader support, but several advocacy groups, including Common Sense Media and Tech Oversight California, withdrew their endorsement after amendments weakened key protections. These changes included limiting notification requirements and exempting certain chatbots found in video games and smart speakers.
Despite the revisions, lawmakers behind the legislation believe both bills can “work in harmony” to safeguard users. Senator Steve Padilla (D-Chula Vista), author of SB 243, expressed confidence that the updated rules will enhance AI safety, stating, “We’ve got a technology that has great potential for good…but is evolving incredibly rapidly, and we can’t miss a window to provide commonsense guardrails.”
Assemblymember Rebecca Bauer-Kahan (D-Orinda), co-author of AB 1064, emphasized the need to balance AI’s benefits with protections against potential harm. “We want to make sure that when kids are engaging with any chatbot that it is not creating an unhealthy emotional attachment, guiding them towards suicide, disordered eating, any of the things that we know are harmful for children,” she explained.
The push for regulation is fueled by heartbreaking stories shared with lawmakers during the legislative session. AB 1064 specifically references lawsuits against OpenAI, the creator of ChatGPT, and Character Technologies, the developer of Character.AI, a platform allowing users to interact with AI characters mimicking real and fictional people.
Megan Garcia, a Florida mother, filed a federal lawsuit last year alleging that Character.AI’s chatbots negatively impacted her son Sewell Setzer III’s mental health and that the company failed to intervene when he expressed suicidal thoughts. Similar lawsuits have been filed against the company this year. A Character.AI spokesperson stated the company is committed to user safety and encourages “appropriately crafted laws that promote user safety while also allowing sufficient space for innovation and free expression.”
In August, the parents of Adam Raine in California sued OpenAI, claiming ChatGPT provided their son with facts about suicide methods, ultimately used in his death. OpenAI has responded by pledging to strengthen safeguards and introduce parental controls, with CEO Sam Altman stating in a September blog post that the company prioritizes “safety ahead of privacy and freedom for teens.” The company has declined to comment directly on the California bills.
Lawmakers are urging Governor Newsom to act swiftly. Assemblymember Bauer-Kahan expressed urgency, stating, “The fact that we’ve already seen kids lose their lives to AI tells me we’re not moving fast enough.” The governor’s decision will determine whether California becomes a leader in regulating the rapidly evolving world of AI and protecting its youngest users.