Report: OpenAI Staff Flagged Canadian Mass Shooter’s Violent ChatGPT Use but Firm Didn’t Alert Police
A Wall Street Journal report, summarized by Fox News, says about a dozen OpenAI employees knew months in advance that Canadian teenager Jesse Van Rootselaar was repeatedly using ChatGPT to role‑play gun violence, after the company’s automated review system flagged his prompts, but the firm chose not to notify law enforcement. OpenAI policy requires contacting authorities only when it believes there is an “imminent” threat. A spokesperson told Fox the company banned Van Rootselaar’s account in June 2025 for policy violations yet concluded his activity did not meet that threshold, citing privacy and the risks of over‑referral.

On Feb. 10, 2026, the 18‑year‑old killed his mother, step‑brother, five students and a teacher at Tumbler Ridge Secondary School in British Columbia, and injured roughly 25 others, before killing himself. Police had previously visited his home over mental‑health concerns, and reports describe his obsession with death, graphic violence sites, guns and hallucinogenic drugs. After the massacre, OpenAI says it proactively reached out to the Royal Canadian Mounted Police and is now assisting their investigation with records of his chatbot use.

The revelations are already fueling an online backlash and calls for regulation in the U.S. over whether AI companies must adopt stricter mandatory‑reporting rules when their systems detect sustained violent ideation tied to weapons.
AI Safety and Regulation
Mass Shootings and Public Safety
OpenAI and ChatGPT