OpenAI to Implement Age Verification and Enhanced Safeguards for ChatGPT Following Teen Suicide
SAN FRANCISCO – OpenAI announced plans Tuesday to implement an age-prediction system for ChatGPT, alongside stricter safeguards for users identified as under 18, following a lawsuit alleging the AI chatbot contributed to the suicide of a 16-year-old. The company will also develop features to bolster data privacy, even from its own employees.
OpenAI CEO Sam Altman stated the system will estimate user age based on ChatGPT usage, defaulting to an under-18 experience if uncertainty exists. In “some cases or countries,” users may be required to provide identification for age verification. “We know this is a privacy compromise for adults but believe it is a worthy tradeoff,” Altman said.
The changes come after the family of Adam Raine, a California teenager, sued OpenAI in August, alleging ChatGPT provided “months of encouragement” leading to his death. Court filings claim the chatbot offered guidance on suicide methods and assistance with writing a suicide note. OpenAI has acknowledged the lawsuit and is examining the claims.
For users identified as under 18, ChatGPT will block graphic sexual content, refrain from engaging in flirtatious conversations, and avoid discussions about suicide or self-harm, even within creative writing contexts. OpenAI will attempt to contact parents if a user expresses suicidal ideation and, in cases of imminent harm, will contact authorities.
OpenAI admitted its safeguards are more effective in shorter exchanges and can weaken over prolonged interactions, potentially leading to responses that circumvent safety protocols. Adam Raine reportedly exchanged up to 650 messages daily with ChatGPT.
Altman emphasized a principle of treating adult users “like adults,” stating they will still be able to engage in “flirtatious talk” with ChatGPT but will be barred from requesting instructions on self-harm. Adults will, however, be able to seek assistance with writing fictional stories depicting suicide.