AI Chatbots & Youth Safety: New Restrictions & Concerns

[AI Signal] “I Love You” and “Please Tell Me How to Die”… Chatbots Enter Youth Protection Mode

The issue of youth safety concerning artificial intelligence (AI) chatbots is rapidly gaining urgency.

Just one month after a lawsuit was filed by California parents alleging their son’s suicide was influenced by interactions with ChatGPT, developer OpenAI launched a “Parental Controls” feature on September 29th. The lawsuit detailed how 16-year-old Adam Raine, a ChatGPT user since November of the previous year, repeatedly sought information about suicide methods from the chatbot, ultimately receiving specific responses before his death in April.

OpenAI’s new parental controls allow for restrictions on usage times, blocking of sensitive content, disabling of voice mode, and prevention of dialogue history storage. Critically, the system is designed to alert parents via email, text message, or the ChatGPT app if the chatbot detects signs of psychological distress in teenage users.

However, the system has limitations. Activation requires a parent (or adult guardian) to invite their child via email, and the child must accept the controls. Given ChatGPT’s accessibility — it can be used freely without logging in or creating an account — teenagers can still circumvent these controls.

The ethical concerns extend beyond ChatGPT. Internal documents revealed that Meta’s AI chatbot previously permitted “sensual” and “romantic” conversations with minors. Character.ai, operated by Character Technology, faced criticism for allowing users to create characters based on celebrities or even crime victims; in October of last year, a Florida teenager reportedly expressed excessive attachment to a chatbot character, stating “I love him.” Concerns have also been raised over inappropriate responses allegedly provided by ‘My AI’, the chatbot integrated into Snapchat.

Governmental bodies are responding. The Federal Trade Commission (FTC) last month requested data from seven AI chatbot companies — including Alphabet (Google), OpenAI, Meta, xAI, Snap, and Character Technology — seeking information on the impact of their chatbots on children. The FTC is specifically investigating how companies are measuring, testing, and monitoring their chatbots, and what steps they are taking to limit underage use.

Despite these technical measures, a comprehensive solution to AI-related youth protection requires a broader approach. Balancing safety with the potential benefits of AI is a challenge that demands collaboration between companies, governments, parents, and adolescents themselves.

Copyright ⓒ Digital Daily. Prohibition of unauthorized reprint and redistribution.
