
ChatGPT Suicide Lawsuit: Family Claims AI Bot Fueled Teen’s Final Plan

by Emma Walker – News Editor

AI Chatbot Safety Questioned After Teen's Suicide

OpenAI has implemented safeguards intended to prevent ChatGPT from providing direct advice regarding personal struggles, and has updated the bot to point users toward crisis-support information instead.

Despite these updates, concerns remain about the platform's ability to adequately protect vulnerable users. Adam, a teenager, reportedly shared suicidal thoughts with ChatGPT, prompting the bot to display messages including the suicide hotline number. However, Adam's parents claim he was able to bypass these warnings by framing his inquiries as harmless, such as stating he was "building a character."

"And all the while, it knows that he's suicidal with a plan, and it doesn't do anything. It is acting like it's his therapist, it's his confidant, but it knows that he is suicidal with a plan," said Adam's mother, Maria Raine. "It sees the noose. It sees all of these things, and it doesn't do anything."

The issue of AI's obligation in such cases was further explored in an interview in which OpenAI CEO Sam Altman addressed safety concerns, describing the company's approach as releasing its systems to the world and "getting feedback while the stakes are relatively low, learning about, like, hey, this is something we have to address."

However, questions persist regarding the sufficiency of these measures. Maria Raine believes more could have been done to assist her son, suggesting Adam was used as a "guinea pig" by OpenAI.

"They wanted to get the product out, and they knew that there could be damages, that mistakes would happen, but they felt like the stakes were low," she said. "So my son is a low stake."

If you or someone you know is in crisis, call 988 to reach the Suicide and Crisis Lifeline. You can also call the network, previously known as the National Suicide Prevention Lifeline, at 800-273-8255, text HOME to 741741 or visit SpeakingOfSuicide.com/resources for additional resources.

This story is developing as concerns mount regarding the safety protocols of AI chatbots and their potential impact on vulnerable individuals. Further inquiry is underway to determine the extent of OpenAI's responsibility and the effectiveness of its current safety measures.
