Oregon lawmakers are advancing a bill that would regulate artificial intelligence chatbots, spurred by concerns over the platforms’ potential impact on youth mental health and a sense that the state “missed the boat” on regulating social media. Senate Bill 1546, which passed the Senate Early Childhood and Behavioral Health Committee with a 4-1 vote on Thursday, February 12, 2026, aims to establish guardrails for AI chatbot interactions, particularly for minors.
The legislation, championed by Senator Lisa Reynolds, a Democrat representing Portland and a practicing pediatrician, would require AI programs like ChatGPT to more frequently remind users they are interacting with an artificial intelligence, not a human being. This follows similar measures recently enacted in California and proposed in New York and Washington states, signaling a growing national effort to address the emerging technology’s risks.
Reynolds described the anxieties voiced by parents during medical appointments, noting a feeling of helplessness as children spend increasing amounts of time online, on social media, and now, engaging with AI. “What is coming up for me all the time in my exam room is parents feel like they’re fighting a losing battle,” she said.
The push for regulation comes as AI chatbot use among teenagers is rapidly increasing. According to data from the nonprofit Common Sense Media, 72% of teens have used AI companions, with over half being regular users. The organization’s research also indicates that nearly one-third of teens report finding conversations with AI chatbots as satisfying as, or even more satisfying than, real-life interactions.
Robbie Torney, head of AI and digital assessments at Common Sense Media, highlighted the potential for AI chatbots to miss critical warning signs during conversations with young people. “Our testing shows that they consistently miss subtle warning signs — and even not so subtle warning signs — that another human being, a parent, a friend or an adult would catch,” Torney stated.
Concerns have been amplified by reports of potential links between AI chatbot interactions and teen suicides. Parents testified before a U.S. Senate committee last year about instances where they believe AI chatbots contributed to their children’s self-harm.
Beyond disclosure requirements, Senate Bill 1546 proposes additional protections for young users. The bill would require developers to flag their platforms as potentially unsuitable for minors, prohibit the display or promotion of sexually explicit content, and discourage prolonged engagement with the platforms.
Linda Charmaraman, senior research scientist at the Wellesley Centers for Women and founder of the Youth, Media and Wellbeing Research Lab, supports educating youth about responsible AI use rather than outright bans. “Whether it’s adults or for minors, just to remind people that there are limits to the technology and that there’s inaccuracies,” Charmaraman said. “If I could wave a wand, I would love for them to really focus on AI literacy from early ages.”
A key component of the bill focuses on suicide prevention. It would mandate that AI chatbot developers implement protocols to detect signs of suicidal ideation or self-harm within user conversations. Upon detection, the platforms would be required to immediately interrupt the conversation, refer users to crisis resources like suicide hotlines, and make these protocols publicly available.
Reynolds has been in contact with Lines for Life, an Oregon-based suicide and mental health hotline, and its youth-focused sister hotline, YouthLine, to explore potential integration of their services into AI chatbot platforms. Dwight Holton, executive director of Lines for Life, noted that hotline volunteers are already encountering users who need reassurance that they are speaking with a human, not an AI. “We understand that intervention works,” Holton said. “So, if we can convince our partners in the industry and legislatively, establish guardrails that require that kind of connection to intervention, we will get folks from that path of despair to a path of hope.”
TechNet, a technology industry association representing companies including Google, OpenAI, and Meta, has engaged with lawmakers regarding the bill. While generally supportive, TechNet officials initially raised concerns about Oregon’s proposed notification frequency being more stringent than requirements in other states. The bill has since been amended to align with notification provisions in California and other states, according to Rose Feliciano, TechNet’s executive director for Washington and the Northwest. “I am working with a coalition of companies to try and make sure that we have clear definitions and clear requirements on notifications and guardrails and looking forward to working with the senator and the committee,” Feliciano said.
The bill’s future remains uncertain, as it could face legal challenges stemming from a December 2025 executive order signed by President Donald Trump, which aimed to limit state regulation of AI services. Reynolds acknowledged the potential conflict but stated her commitment to addressing unregulated AI use remains unchanged. “Social media companies have had the opportunity to make some choices that would have kept kids safe from social media but instead they really double down on doing everything they can to keep their eyeballs on social media content for as long as they can,” Reynolds said. “I see it time and again in my exam room, so I don’t want to wait till it’s too late to put some sideboards on AI tools.”