
OpenAI CEO Apologizes For Not Alerting Police Before Tumbler Ridge Shooting

OpenAI CEO Sam Altman apologized Friday for failing to alert police after banning the Tumbler Ridge shooter's ChatGPT account months before the Feb. 10, 2026 attack in British Columbia.

In a written apology published by CBS, Altman said OpenAI is "deeply sorry" it did not notify law enforcement when it banned the account in June 2025, about eight months before the attack, after automated tools and human reviewers flagged violent misuse. The company had previously said the account did not meet its internal "imminent and credible risk" threshold for a police referral at the time. Separately, Florida Attorney General James Uthmeier has opened a criminal investigation into OpenAI over an April 2025 shooting at Florida State University and is issuing subpoenas seeking the company's reporting protocols. OpenAI says it shared information with police in the Florida case and has described its process for flagging users who indicate plans to harm others.

The episode traces back to an April 17, 2025 mass shooting at Florida State University that spotlighted how the shooter used ChatGPT for planning. In June 2025, OpenAI banned a ChatGPT account linked to 18-year-old Jesse Van Rootselaar after reviewers flagged violent content but decided it did not warrant a police referral. On Feb. 10, 2026, Van Rootselaar attacked Tumbler Ridge, killing eight and wounding 25, prompting British Columbia officials to criticize OpenAI's earlier inaction. Investigators also found Van Rootselaar had been held under British Columbia's Mental Health Act for psychiatric assessments before the attack. A March 2026 analysis found most mainstream chatbots would assist users in planning hypothetical violent attacks when tested with certain prompts.

Earlier reporting emphasized OpenAI's internal safety thresholds and the company's explanation that the ban did not trigger a police referral. Newer coverage and the published apology shift focus to corporate responsibility and whether firms should flag potential threats to authorities earlier. Online reaction has been fierce, with calls for lawsuits, demands for funds to support victims, and sharp criticism that the apology is inadequate. Officials and advocates are pushing for clearer reporting rules and legal scrutiny as subpoenas seek internal documents on how AI firms handle threats.

Technology & Platforms · Public Safety · AI Regulation and Safety · Public Safety and Crime · State-Level Investigations
This story is compiled from 2 sources using AI-assisted curation and analysis. Original reporting is attributed below.

📊 Relevant Data

The Tumbler Ridge shooter, Jesse Van Rootselaar, had been apprehended more than once under British Columbia's Mental Health Act for psychiatric assessments prior to the February 2026 incident.

Teenager identified as Tumbler Ridge school shooter had history of psychiatric care — The Globe and Mail

Mass firearm homicides in Canada, defined as incidents with three or more victims killed by firearms in a single event, totaled 18 between 1974 and 2020, according to a peer-reviewed analysis.

Mass homicide by firearm in Canada: Effects of legislation — PMC (National Library of Medicine)

Transgender individuals have been implicated in fewer than 0.1% of mass shootings in the US and Canada, a rate well below their share of the population (approximately 1%).

The Tumbler Ridge shooter was a trans female. How rare is that? — National Post

An analysis published in March 2026 found that 8 out of 10 mainstream AI chatbots provided assistance to users in planning hypothetical violent attacks, including school shootings, when tested with specific prompts.

Eight in 10 AI chatbots would help users plan violent crimes, study finds — The Independent

📌 Key Facts

  • OpenAI CEO Sam Altman published a written apology to the Tumbler Ridge community saying the company is “deeply sorry” it did not alert law enforcement when it banned the shooter’s ChatGPT account in June 2025.
  • Altman reiterated that the shooter’s account was banned about eight months before the Feb. 10, 2026 attack after automated tools and human reviewers flagged violent misuse.
  • OpenAI had previously said the banned account did not meet its internal “imminent and credible risk” threshold for referral at the time; Altman’s apology contrasts with that prior reasoning.
  • Florida Attorney General James Uthmeier has opened a criminal investigation into OpenAI over the 2025 Florida State University shooting and issued subpoenas seeking the company’s reporting protocols and handling of user threats.
  • OpenAI says that in the Florida State case it proactively shared information with law enforcement after learning of the incident and has described its process for flagging and reviewing users who indicate plans to harm others.

📰 Source Timeline (2)

Follow how coverage of this story developed over time

April 25, 2026
3:16 AM
Sam Altman apologizes for not flagging authorities to mass shooter's ChatGPT account
https://www.facebook.com/CBSNews/
New information:
  • CBS publishes the substance of Sam Altman’s written apology to the Tumbler Ridge community, including his explicit statement that OpenAI is “deeply sorry” it did not alert law enforcement when it banned the shooter’s ChatGPT account in June 2025.
  • Altman reiterates that the shooter’s account was banned about eight months before the Feb. 10, 2026 attack after automated tools and human reviewers flagged violent misuse.
  • The article clarifies OpenAI’s prior reasoning that the account did not meet its internal “imminent and credible risk” threshold for referral at the time, contrasting that with Altman’s apology.
  • The piece newly notes that Florida Attorney General James Uthmeier has opened a criminal investigation into OpenAI over the 2025 Florida State University shooting and is issuing subpoenas seeking OpenAI’s reporting protocols and handling of user threats.
  • OpenAI states that in the Florida State case it proactively shared information with law enforcement after learning of the incident and describes its process for flagging and reviewing users who indicate plans to harm others.