OpenAI Confirms ChatGPT Data Sharing with Law Enforcement in High-Risk Cases
San Francisco, CA - September 2, 2025 - OpenAI has confirmed that it monitors user conversations within ChatGPT and, in specific instances, shares data directly with law enforcement agencies. The disclosure, published on the company’s official blog, centers on cases where user interactions suggest potential for serious harm.

The company stated that when human moderators identify conversations indicating imminent plans for self-harm or harm to others, they may escalate the case to authorities. “If the evaluators determine that there is an imminent threat, we can send the case to law enforcement,” OpenAI explained. Accounts flagged in such cases are also subject to suspension or permanent closure.
The revelation has sparked debate over user privacy and OpenAI’s commitment to confidentiality, especially given CEO Sam Altman’s previous comparisons of ChatGPT to confidential professional services such as therapy or legal counsel. Blogger Charles McGuinness drew parallels to earlier disclosures by Edward Snowden regarding tech companies’ collaboration with government entities, stating, “It is not paranoid to believe that ChatGPT also sends sensitive content to the authorities.”
Critics argue that OpenAI faces a fundamental conflict between protecting user privacy and its responsibility to intervene in potentially dangerous situations. The company continues to navigate a complex set of ethical obligations as it balances innovation with public safety.