
OpenAI’s New Teen Safety Features: Age Verification & Restrictions

by Emma Walker – News Editor

OpenAI Announces Teen Safety Measures for ChatGPT, Including Age Verification

WASHINGTON, D.C. - OpenAI, the developer of ChatGPT, unveiled new safety features Tuesday aimed at protecting teenage users, including an age-prediction system and ID verification in select countries. The announcement comes hours before a Senate Judiciary Committee hearing examining the potential harms of AI chatbots and follows a recent lawsuit alleging ChatGPT contributed to a teenager's suicide.

OpenAI CEO Sam Altman detailed the company's approach in a blog post, acknowledging the challenge of balancing freedom and safety. "We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection," Altman wrote.

The company is developing a system to automatically direct users to one of two versions of ChatGPT: one for adolescents aged 13-17 and another for adults 18 and older. "If there is doubt, we'll play it safe and default to the under-18 experience," Altman stated. In some regions, OpenAI may require ID verification, which the company acknowledged is "a privacy compromise for adults" but called "a worthy tradeoff."

Parental controls are also planned for release at the end of the month, allowing parents to customize ChatGPT's responses to their children, including adjusting memory settings and establishing "blackout hours."

Altman clarified that ChatGPT is not intended for users under 12, despite currently lacking safeguards to prevent their access. OpenAI did not immediately respond to inquiries regarding children using its services.

Regarding sensitive topics, Altman indicated that while ChatGPT will not provide instructions on how to commit suicide, it will assist adult users with fictional depictions of suicide. If the age-prediction system flags a user expressing suicidal ideation, OpenAI will attempt to contact their parents and, if unsuccessful, will contact authorities in cases of imminent harm.

Altman acknowledged potential disagreement with these measures, stating on X, "I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking."
