OpenAI CEO Sam Altman has apologized to members of a Canadian community where a mass shooting occurred earlier this year for not flagging the shooter's ChatGPT account to law enforcement.
"The pain your community has endured is unimaginable," Altman wrote in a letter shared Friday on social media by British Columbia Premier David Eby. "I have been thinking of you often over the past few months."
Eight people were killed in the Feb. 10 massacre in the small community of Tumbler Ridge in northeastern British Columbia. Six people were fatally shot when 18-year-old Jesse Van Rootselaar opened fire at Tumbler Ridge Secondary School, authorities said, and the shooter's mother and 11-year-old brother were killed at a nearby residence. Van Rootselaar died of a self-inflicted gunshot wound, officials said.
Altman wrote in the letter, dated Thursday, that Van Rootselaar's ChatGPT account had been banned in June 2025, about eight months before the shooting.
"I am deeply sorry that we did not alert law enforcement to the account that was banned in June," Altman said.
In February, OpenAI told CBS News that Van Rootselaar's account had been flagged last year by automated abuse-detection tools and human investigators that identify potential misuse of ChatGPT for violent activity. OpenAI said the account was then banned for violating its usage policies.
OpenAI said the company had weighed whether to flag the account to law enforcement, but had determined at the time that it did not pose an imminent and credible risk of serious physical harm to others, failing to meet the threshold for referral.
"Our thoughts are with everyone affected by the Tumbler Ridge tragedy," OpenAI said in a statement to CBS News in February following the shooting. "We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we will continue to support their investigation."
OpenAI says that ChatGPT is trained to discourage real-world harm and is instructed to refuse to help when it detects illicit intent. Users who mention plans to harm others are flagged to human reviewers, who determine whether a case poses an imminent threat of physical harm and should be referred to law enforcement, according to the company.
Altman wrote in his letter that OpenAI will remain focused on preventative efforts "to help ensure something like this never happens again."
"I want to express my deepest condolences to the entire community," Altman said. "No one should ever have to endure a tragedy like this."
Earlier this week, Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI after reviewing messages between ChatGPT and a Florida State University student accused in an April 2025 campus shooting that killed two people and wounded several others.
Uthmeier said his team determined that ChatGPT provided "significant advice" to the alleged shooter. His office is issuing subpoenas to OpenAI requesting records of the company's protocols for reporting possible crimes to law enforcement and its handling of user threats.
Regarding the Florida shooting, an OpenAI spokesperson said in a statement to CBS News Tuesday that "after learning of the incident," the company "identified a ChatGPT account believed to be associated with the suspect and proactively shared this information with law enforcement."