Families of Canadian mass shooting victims sue OpenAI, CEO Altman in US court


A person visits a growing makeshift memorial on the steps of the town hall in the town of Tumbler Ridge, British Columbia, Canada, on Feb 14, four days after one of the worst mass shootings in recent Canadian history. PHOTO: REUTERS/Jennifer Gauthier


Family members of victims of one of Canada’s deadliest mass shootings sued OpenAI and its chief executive officer Sam Altman in US court on April 29, alleging the company identified the shooter as a credible threat eight months before the attack but did not warn police.

The lawsuits, filed in federal court in San Francisco, accuse OpenAI leaders of not alerting police because it would have exposed the volume of violence-related conversations on ChatGPT and potentially jeopardised the company’s path to a nearly US$1 trillion (S$1.3 trillion) initial public offering.

The February shooting in Tumbler Ridge, British Columbia, left nine people dead, many of them children.

An OpenAI spokesperson called the shooting “a tragedy” and said the company has a zero-tolerance policy for using its tools to assist in committing violence.

The spokesperson said the company has strengthened ChatGPT safeguards through better responses to signs of distress, improved connections to mental health support, stronger threat assessment and escalation, and enhanced detection of repeat offenders.

The cases are part of a growing wave of lawsuits accusing artificial intelligence companies of failing to prevent chatbot interactions that plaintiffs say contribute to self-harm, mental illness and violence. They appear to be the first in the US to allege that ChatGPT played a role in facilitating a mass shooting.

Mr Jay Edelson, who is representing the plaintiffs, said he plans to file another two dozen lawsuits against the company in the coming weeks on behalf of other people affected by the shooting.

Lawsuits claim OpenAI safety team overruled

Jesse Van Rootselaar, whose interactions with ChatGPT are at the centre of the lawsuits, shot her mother and stepbrother at home before killing an educational assistant and five students aged 12 to 13 at her former school on Feb 10, according to police. Van Rootselaar, who was 18, then died by suicide.

The plaintiffs include relatives of those killed at the school and a 12-year-old girl who survived after being shot three times but remains in intensive care.

According to one of the complaints, OpenAI’s automated systems in June 2025 flagged ChatGPT conversations in which the shooter described gun violence scenarios.

Safety team members recommended contacting the police after concluding she posed a credible and imminent threat of harm, said the complaint, which cites a Wall Street Journal article from February about the company’s internal discussions.

But Mr Altman and other OpenAI leadership overruled the safety team and police were never called, the lawsuit alleges. The shooter’s account was deactivated, but she was able to get a new account and continue using the platform to plan her attack, the lawsuit claims.

Following the publication of the Wall Street Journal article, the company said the account was flagged by systems that identify “misuses of our models in furtherance of violent activities” but the issues did not meet its internal criteria for reporting to law enforcement.

Last week, a local Tumbler Ridge newspaper published an open letter in which Mr Altman said he was “deeply sorry” the account was not flagged to law enforcement.

In a blog published on April 28, OpenAI said it trains its models to refuse requests that could “meaningfully enable violence”, and notifies law enforcement when conversations suggest “an imminent and credible risk of harm to others”, with mental health experts helping assess borderline cases.

The lawsuits seek an unspecified amount of damages and a court order requiring OpenAI to overhaul its safety practices, including mandatory law enforcement referral protocols.

Vancouver-based law firm Rice Parsons Leoni & Elliott, which represents the plaintiffs in Canada, said it chose to pursue the cases in California in part because of limits on damages for pain and suffering in Canada.

OpenAI faces multiple suits

The lawsuits over the Tumbler Ridge shooting follow multiple suits filed against OpenAI in US state and federal courts in recent months over claims that ChatGPT facilitated harmful behaviour, suicide and, in at least one case, a murder-suicide.

While still in early phases, the lawsuits will force courts to grapple with what role an AI platform can play in promoting violence and whether the company can be held liable for its actions or the actions of its users.

OpenAI has denied the claims in the lawsuits, arguing in the murder-suicide case that the perpetrator had a long history of mental illness.

Florida Attorney-General James Uthmeier announced earlier in April a criminal investigation into ChatGPT’s role in a 2025 shooting at Florida State University.

Mr Evan Solomon, the Canadian minister in charge of AI, said after the lawsuits were filed that he is examining options for regulating AI chatbots and has been working with OpenAI to examine their safety protocols. REUTERS
