OpenAI revealed this week that Jesse Van Rootselaar, the 18-year-old who killed eight people and herself in Tumbler Ridge, British Columbia, created a second ChatGPT account after her initial access was revoked in June 2025 over concerning posts about gun violence. The disclosure, made in a letter from OpenAI’s Vice-President of Global Policy, Ann O’Leary, to Canadian Artificial Intelligence Minister Evan Solomon, has prompted renewed scrutiny of the company’s safety protocols and a commitment from the Canadian government to investigate the tragedy.
According to the letter, which was shared with media outlets, the second account was flagged to police after Van Rootselaar’s identity became public. O’Leary stated that OpenAI would have flagged the shooter’s original account to law enforcement under updated safety policies implemented “several months ago.” The new policies incorporate input from mental health and behavioural experts and broaden the criteria for risk assessment, recognizing that users who do not explicitly detail plans for violence may still present a threat.
The revelation comes as Canadian officials seek answers regarding OpenAI’s handling of Van Rootselaar’s online activity. Minister Solomon has summoned company officials to Ottawa to discuss their safety measures, following criticism that the platform may have missed opportunities to prevent the mass shooting. Van Rootselaar began her attack by killing her mother and sibling at home before proceeding to shoot an educator and five students, with two others sustaining serious injuries.
The Royal Canadian Mounted Police (RCMP) investigation remains active, and some details are subject to legal and court processes. However, authorities have confirmed that guns were previously removed from Van Rootselaar’s home, only to be returned at a later date. Police were also aware of her history of mental health issues, adding complexity to the assessment of preventative measures.
Criminologists suggest that while increased scrutiny of AI platforms and social media is warranted, the tragedy may also highlight failures in existing systems. Patrick Watson, a criminology professor at the University of Toronto, observed that this “was clearly a household where there were many problems,” but also emphasized the demand for greater accountability from companies developing these new platforms.
British Columbia Premier David Eby has committed to a public inquiry into the mass killing, seeking to understand the factors that contributed to the tragedy and identify potential improvements in safety and prevention. OpenAI’s updated protocols, as described by O’Leary, include more flexible referral criteria that account for potential risks even when explicit details of planned violence are absent from user conversations.
The incident has drawn comparisons to other cases where interactions with chatbots have been scrutinized for potentially foreshadowing or encouraging violence. The Canadian government has not yet indicated whether it will pursue regulatory changes for AI companies operating within its borders, but the ongoing investigation and the scheduled meeting with OpenAI officials suggest a heightened focus on the role of artificial intelligence in public safety.