
As teens in crisis turn to AI chatbots, simulated chats highlight risks

by Dr. Michael Lee – Health Editor

## Risks Emerge as Teens Turn to AI Chatbots for Mental Health Support

Recent studies are raising concerns about the use of large language models (LLMs) – like GPT-4 and Claude 3 Haiku – as sources of mental health support, particularly among teenagers. The worrisome findings echo those presented at the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery Conference on Artificial Intelligence, Ethics and Society, held in Madrid on October 22nd.

A study led by Harini Suresh, a computer scientist at Brown University, identified instances of ethical breaches committed by LLMs when simulating therapeutic conversations. Researchers re-engaged LLMs with transcripts of past human-chatbot interactions, prompting them to use common therapy techniques. Licensed clinical psychologists reviewing these simulated chats identified five types of unethical behavior, including dismissing individuals experiencing loneliness and affirming harmful beliefs. The analysis also revealed cultural, religious, and gender biases within the chatbots' responses.

These behaviors could violate the professional standards and licensing requirements governing human mental health practitioners, who undergo extensive training and are legally licensed to provide care – a qualification chatbots currently lack.

The appeal of these AI companions lies in their accessibility and perceived privacy, particularly for adolescents who may be hesitant to confide in family or traditional therapists, according to researcher Serena Giovanelli. "This type of thing is more appealing than going to mom and dad…or going to a therapist," she explains.

However, experts emphasize the need for significant refinement. Julian De Freitas of Harvard Business School, who studies human-AI interaction, notes that the success of these applications is not guaranteed and stresses the importance of implementing safeguards. De Freitas, who was not involved in the studies but advises mental health app developers, highlights the lack of data regarding the specific risks faced by teenagers using these chatbots. He suggests further research is needed to determine whether concerning examples represent isolated incidents or a broader pattern.

In June, the American Psychological Association issued a health advisory on AI and adolescents, calling for increased research and the development of AI-literacy programs to educate users about the limitations of these chatbots. Giovanelli emphasizes the importance of caregiver awareness, noting that many parents may be unaware of their children's interactions with AI companions.

Regulatory efforts are underway in response to reported harms. California has enacted a new law aimed at regulating AI companions, and the U.S. Food and Drug Administration's Digital Health Advisory Committee will hold a public meeting on November 6th to discuss generative AI-based mental health tools.

Despite the risks, the demand for accessible mental health care is significant, as noted by researcher Rachel Brewster, who conducted her study while at Boston Children's Hospital and is now at Stanford University School of Medicine. "Ultimately…people are reaching for chatbots," she states. Still, she underscores the "huge amount of responsibility" involved in navigating the limitations of this technology and recognizing what it can and cannot provide.
