
Months before a school shooting in Tumbler Ridge, British Columbia, OpenAI employees debated whether to alert Canadian authorities to alarming activity on ChatGPT. According to a report by the Wall Street Journal, roughly a dozen staff members deliberated over whether to notify police about a user who had repeatedly described scenarios of gun violence. The interactions were flagged by OpenAI's automated systems and reviewed by staff, but the company ultimately decided not to report them to law enforcement. A spokesperson said the user's messages did not meet the threshold of a "credible and imminent risk of serious physical harm," so OpenAI blocked the account instead.
The user was Jesse Van Rootselaar, who would later be named the prime suspect in the shooting; her troubling conversations with ChatGPT date to June 2025. OpenAI's models are designed to discourage discussion of real-world violence, and when users express harmful intent, the exchanges are flagged for human review; law enforcement can be contacted if reviewers judge the threat significant. Despite internal concern that the user might act violently, OpenAI chose not to involve Canadian police at the time. The company contacted the Royal Canadian Mounted Police (RCMP) after the attack and is cooperating with the ongoing investigation.
Van Rootselaar's digital footprint extended beyond ChatGPT. On the gaming platform Roblox, she reportedly took part in simulations of mass shootings and discussed gun-related YouTube videos. On February 10, the 18-year-old killed eight people and injured at least 25 more before being found dead at the scene, apparently from a self-inflicted wound. The RCMP identified her as the shooter.
The case highlights the difficult balance AI companies must strike between user privacy and public safety. OpenAI's decision-making here underscores how hard it is to assess digital warning signs of potential violence and to judge when they warrant action.