ChatGPT Users Beware: Messages Reporting Threats to Others May Be Shared with Police, OpenAI Reveals
SAN FRANCISCO – OpenAI, the creator of the popular ChatGPT chatbot, has revealed that conversations flagged as potential threats to others are reviewed by human moderators and, in serious cases, can be reported to law enforcement. The disclosure, detailed in a recent company blog post, underscores the limitations of AI safety measures and highlights that user privacy isn't absolute when potential real-world harm is detected.
While ChatGPT is designed to offer supportive responses to users expressing suicidal thoughts, directing them to resources like the 988 hotline in the US and Samaritans in the UK, OpenAI maintains a distinct approach to threats against others. According to the company, such instances trigger a specialized review process. Trained moderators examine the chat, and if an "imminent threat" is identified, OpenAI may contact authorities. Accounts involved in making threats can also be permanently banned.
OpenAI acknowledged vulnerabilities in its safety systems, noting that safeguards are more effective in shorter exchanges and can weaken over extended conversations. The company stated it is working to improve consistency and prevent gaps in protection across multiple interactions.
Beyond threats, OpenAI is also developing interventions for other risky behaviors, including extreme sleep deprivation and hazardous stunts, aiming to ground users in reality and connect them with professional help. Future plans include parental controls for teen users and options to link users with trusted contacts or licensed therapists.
The company emphasized that users should be aware that conversations are not entirely private, and messages indicating potential danger to others will be subject to review by trained moderators and could lead to police intervention.