OpenAI outlines steps to boost safety measures in response to Canada school shooting


OpenAI had earlier said it banned the account of alleged shooter Jesse Van Rootselaar due to misuse, but did not provide further details.

A makeshift memorial on Feb 12 for the victims of a deadly mass shooting that took place at a school in the town of Tumbler Ridge, British Columbia, Canada.

PHOTO: REUTERS


TORONTO – OpenAI said on Feb 26 it will set up a direct point of contact with Canadian law enforcement and improve detection of repeat violators of its “violent activities” policy to boost safety protocols in the wake of a recent school shooting.

The ChatGPT maker detailed the steps in a letter to Canadian Minister of Artificial Intelligence and Digital Innovation Evan Solomon.

OpenAI vice-president of global policy Ann O’Leary wrote the letter after Canadian ministers this week urged the company to quickly boost its safety protocols and warned that Ottawa would effect change through legislation if it did not.

“We remain committed to cooperating with law enforcement authorities on the investigation into the Tumbler Ridge tragedy, and we are committed to an ongoing partnership with federal and provincial governments,” Ms O’Leary said, referring to the town in British Columbia where the shooting occurred.

Ottawa is reviewing OpenAI’s letter and will comment in the coming days, a spokesperson for Mr Solomon said.

Canadian ministers summoned OpenAI’s safety team for talks this week after the company said it had not contacted police about an account belonging to the alleged shooter, Jesse Van Rootselaar, that it had banned.

Van Rootselaar, 18, is suspected of killing eight people on Feb 10 before taking her own life in Tumbler Ridge. OpenAI said it had banned her ChatGPT account in 2025 for policy violations.

The company said the account was flagged by systems that identify “misuses of our models in furtherance of violent activities” but did not provide further details.

OpenAI said the issues did not meet its internal criteria for reporting to law enforcement.

Ms O’Leary said on Feb 26 that under the company’s “enhanced law enforcement referral protocol”, it would have referred the initial account ban in June to police if it were discovered now.

She also said the company had discovered that Van Rootselaar had used a second account, which it shared with law enforcement.

“We commit to strengthening our detection systems to better prevent attempts to evade our safeguards and prioritise identifying the highest-risk offenders,” Ms O’Leary said.

The company also committed to periodically assessing the thresholds used by its automated systems for identifying potential violent activities by users.

Crime experts have noted that while greater scrutiny of AI platforms and social media is necessary, the police and other authorities may also have missed chances to avert one of Canada’s worst mass killings.

The police said Van Rootselaar had a history of mental health problems, and that they had removed guns from her home and later returned them. REUTERS
