OpenAI flagged, banned Canada suspect’s account 8 months before mass shooting


Police revised the death toll for the Tumbler Ridge high school shooting down to nine, from the initially reported 10.

People gathering outside a building after a suspect opened fire at a high school in Tumbler Ridge, Canada, on Feb 10.

PHOTO: REUTERS


MONTREAL – OpenAI flagged and banned the suspect in one of Canada’s worst-ever mass shootings for violating ChatGPT’s usage policy in June 2025, without referring her to the police.

The artificial intelligence company said that the suspected killer – Jesse Van Rootselaar – had an account that was detected about eight months ago by systems that scan for misuse, including the possible furthering of violent activities.

The Canadian police alleged that the 18-year-old killed eight people and injured about 25, before taking her own life in the remote western Canadian town of Tumbler Ridge earlier in February.


The Wall Street Journal first reported OpenAI’s identification of Van Rootselaar, citing anonymous sources as saying that the alleged killer “described scenarios involving gun violence over the course of several days”. The exchanges triggered an internal debate among roughly a dozen staffers, some of whom urged leaders to alert the police.

OpenAI said it considered referring the account to law enforcement at the time, but did not identify credible or imminent planning and determined it did not meet the threshold. After the shooting, the company contacted the Canadian authorities.

“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” an OpenAI spokesperson said by e-mail. “We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.” 

The company said it trains ChatGPT to discourage imminent real-world harm. BLOOMBERG
