Questions & Help Mar 15, 2026 at 2:15 AM

Family of surviving victim of Tumbler Ridge shooting brings lawsuit against OpenAI

Posted by Spare_Prize_5510


https://halifax.citynews.ca/video/2026/03/09/family-of-surviving-victim-of-tumbler-ridge-shooting-brings-lawsuit-against-openai/


90 Comments

ddiiibb Mar 15, 2026 +607
"The family of a surviving victim of the Tumbler Ridge shooting is bringing a lawsuit forward against the creators of ChatGPT. The lawsuit describes OpenAI's conduct as reprehensible and morally repugnant. Kurt Black reports." Title says it all pretty much. Smallest article ever.
607
Tomgobanga Mar 15, 2026 +38
It was a video
38
Excellent_Set_232 Mar 16, 2026 +5
For some reason I read this comment in the same tone that I read “and they were roommates!”
5
Choice_Pomelo_1291 Mar 16, 2026 +7
This AI generated statement about AI being bad is surprisingly brief.
7
Totheendofsin Mar 15, 2026 +686
AI, and OpenAI specifically, already has a double-digit body count. In a sane world we'd have regulated it out of existence by now.
686
hitemplo Mar 15, 2026 +281
There’s also a lawsuit against Google’s [Gemini](https://www.courthousenews.com/wp-content/uploads/2026/03/gavalas-google-chatbot-lawsuit.pdf) after a guy killed himself last October. He had also almost killed other people because he believed in some huge conspiracy that Gemini had talked him into
281
deskbeetle Mar 15, 2026 +83
Claude is Anthropic. Google owns Gemini.
83
hitemplo Mar 15, 2026 +27
Yeah my bad, it’s in the documents I linked to
27
lolofaf Mar 15, 2026 +46
For the record, Google owns and maintains Gemini. Claude is built and run by Anthropic. Other LLMs: ChatGPT is OpenAI, Grok is xAI/Twitter/SpaceX/Musk, Llama is Meta/Facebook, Nemotron is Nvidia, and DeepSeek, Qwen, Kimi K2, and a few others are from various Chinese startups and universities.
46
hitemplo Mar 15, 2026 +13
Sorry yeah I meant Gemini, I have the flu and my brain is slow lol. It’s in the documents I linked to
13
Mrhiddenlotus Mar 15, 2026 +35
People can sue over anything at all. Gemini referred that guy to seek help, gave him suicide hotlines, and clarified multiple times that it was engaging in roleplay the user demanded. There should be more guardrails, but you have to recognize that people are ultimately responsible for their own decisions.
35
hitemplo Mar 15, 2026 -10
Did you read the documents? It did none of those things; it's clearly stated in the opening paragraphs, and it goes on to detail exactly how it happened. I don't know if you're getting this one confused with another one, but the entire reason they're suing is because no guardrails were put up, it didn't refer him to suicide help even when he expressed doubt about killing himself, and it denied being a roleplay when he asked directly.
-10
Mrhiddenlotus Mar 15, 2026 +15
You mean the legal documents from the suing party that have an interest in portraying the situation in the worst possible light? The ones that are alleging something, not proving anything?
15
hitemplo Mar 15, 2026 -5
Where are you getting your source for this? I'm not looking to have a brawl about it, it isn't personal to me or anything. Could you link to where you've found out that Gemini did do these things? The documents reference the chat itself, so I'd like to have a look at where you've found different information so I can update my own understanding of the case.
-5
Mrhiddenlotus Mar 15, 2026 +11
It was Google's response to the lawsuit. Wouldn't make a lot of sense to lie about the chats when they know it's going to come out in evidence. https://www.insurancebusinessmag.com/us/news/risk-management/google-sued-over-killer-ai-claims-567410.aspx
11
hitemplo Mar 15, 2026 +1
Interesting, thanks. The section of this article about what Google has said is quite short and they did say that ‘it isn’t perfect and does make mistakes’. I’ll be interested to see their formal response and what new information about the chat they bring with it
1
stuntobor Mar 15, 2026 +3
The site gets paid when people click on the article. Fear gets more clicks than answers. More clicks = more ad revenue.
3
hitemplo Mar 15, 2026 +3
Yep. That’s why I linked to the source documents in my original comment. I just have an interest in these cases and how they pan out - gotta get news somehow haha.
3
spork_master_funk Mar 16, 2026 +1
You are dismissing the validity of an entire lawsuit based on this:

>Google strongly disputes the characterization of its system, saying it neither promotes self‑harm nor condones real‑world violence.

>“Gemini is designed not to encourage real-world violence or suggest self-harm,” Google spokesperson Jose Castaneda said in a statement. He added that the company dedicates “significant resources” to handling difficult conversations and has built safeguards that are supposed to guide distressed users toward professional support.

>Castaneda said that in Gavalas’ case, the chatbot “clarified that it was AI and referred the individual to a crisis hotline many times,” and emphasized that “unfortunately, AI models are not perfect.”

>The company did not address specific allegations about the Miami incident or the claimed failure of escalation protocols, but said it takes the issues raised in the lawsuit “very seriously” and continues to improve its systems.

You can understand that a lawsuit is obviously a very slanted telling of events, but this minimal and equally slanted statement from the entity being sued is enough for you to accept?!? I think some critical thinking adjustments might be in order!
1
Mrhiddenlotus Mar 16, 2026 +3
Not a lawyer, so I couldn't speak to the validity of a lawsuit. I'm simply stating the facts.
3
spork_master_funk Mar 16, 2026 -2
Nah, you're shilling. You have no reason to pick one story over the other but somehow you still do.
-2
lkl34 Mar 15, 2026 +3
Wow that is fucked up
3
CheckMateFluff Mar 15, 2026 +91
We don't even really regulate guns, and they kill children by the truckload. It's not surprising.
91
Thespaceman007 Mar 15, 2026 +13
Where the shooting occurred (Canada), yes the f*** we do
13
IMOBY_Edmonton Mar 15, 2026 +55
As much as the US needs to better regulate firearms, this shooting occurred in Canada, not the US.
55
[deleted] Mar 15, 2026 -66
[deleted]
-66
Malforus Mar 15, 2026 +58
My brother in Christ, guns have been the leading cause of death for children in the US since 2020. Yup, COVID wasn't it; it was bullets.
58
WazWaz Mar 15, 2026 +8
That would be 5 million deaths (given 50,000 gun deaths) due to those?
8
Worldly_Anybody_9219 Mar 15, 2026 +7
It seems like it wouldn't be hard for ChatGPT to ban users who start talking about personally using guns, violence, and/or suicide? People who are feeling suicidal need to talk to a real therapist or any real person at all, not a chatbot.
7
Brilliant_Quit4307 Mar 16, 2026 -3
Therapists are expensive and often completely out of reach, especially for people in poverty or people with addictions who desperately need therapy but can't afford it. I don't think chatbots should replace therapists. Obviously if you can access a therapist, you should, but that's not an option for a lot of people and it's a resource that isn't always available. I've personally found chatbots to be quite useful for when I'm spiralling at 3am and my therapist is obviously and understandably unavailable. So for many people a therapist isn't always an option, and a chatbot is absolutely 100% better than nothing.

Banning someone for talking about suicide is just taking away a resource from someone who really needs it, and it is far more likely to push them over the edge than towards help.

And before anyone comments about those shitty 24-hour free helplines - have you ever tried calling one yourself? If you had, you'd probably realise how useless they are and how cold and empty that advice feels for someone in a crisis. People often need several sessions over weeks before they are comfortable talking to someone about their trauma, and can't comfortably do that over the phone with a stranger. In my experience, a chatbot is infinitely better than those shitty helplines.
-3
Notten Mar 15, 2026 +4
And charge those in charge as accomplices to murder. If someone on the internet can get jail time for pushing someone to commit suicide, corporations should be held to the same criminal standard. Leaders need to be prosecuted when they dodge safety barriers.
4
GirlNumber20 Mar 15, 2026 -4
Wait until you find out how many people get killed by cars despite regulation!
-4
lolofaf Mar 15, 2026 -20
I'll probably be downvoted for saying this because it's nuanced, but there's a real discussion to be had about how much AI is to blame vs the users. If I take a car and plow it into a crowd of people, you can't really blame the carmaker even though they sold you the car and told you how to drive it. Gun makers don't get sued after mass shootings. Google doesn't get sued as a search engine when someone searches (or watches YouTube how-to videos) for how to commit arson, or dispose of a body, or whatever. At what point do we blame AI or the AI companies, and at what point should the blame and responsibility be solely on the user? I'm not presenting an answer here, simply posing the question.

It gets trickier when access to open source LLMs without guardrails is pretty easy right now anyways. There's also a whole other discussion about data privacy, and whether your AI chats should even be able to be logged, tracked, and sent to police, that somewhat overlaps with this discussion as well. Again, not providing any answers, simply posing questions that should be discussed in connection with tragedies like this.
-20
SpaceDounut Mar 15, 2026 +25
Of the things you listed, only one was repeatedly and actively verbally encouraging its users to bring harm to themselves and others. Coincidentally, the same thing's creators have admitted to having only limited control over its outputs, on account of it functionally being a black box at this point.
25
Mrhiddenlotus Mar 15, 2026 +5
Yes, after being requested repeatedly to engage in that, with the user ignoring all the warnings it gave
5
SpaceDounut Mar 15, 2026 -2
This doesn't disprove my point. Just because you need to make an effort to goad an LLM into this rhetoric doesn't mean it should be acceptable.
-2
Mrhiddenlotus Mar 15, 2026 +5
Effort to goad, ignore all warnings, ignore all referrals to help - it's a machine. If you ignore the warnings on heavy machinery and get hurt, that's on you.
5
SpaceDounut Mar 15, 2026 -1
You should read the root comment that I was replying to before typing
-1
scrapper Mar 15, 2026 -3
It’s “its”.
-3
SpaceDounut Mar 15, 2026 +2
It's obviously a typo on a phone keyboard. Any constructive commentary?
2
Ok-Secretary455 Mar 15, 2026 +19
YouTube actively removes videos on how to commit crimes, specifically because they don't want to get sued for having them up. Gun manufacturers don't get sued after mass shootings because there's a federal law saying you can't, and if you do, you have to pay their legal fees. And there's a difference between me going "hey, I want to burn this down" and this humanoid thing telling me "dude, you should totally go burn that thing down".
19
honor_and_turtles Mar 15, 2026 +9
I think you have a point, but I'd say AI/AI companies carry a larger share of the blame because the LLMs respond. I'm obviously putting out an extreme hypothetical here as an example, but the gist is what I'm trying to get at. If you ask a car "Hey, should I run over that pedestrian for yelling at me?" it won't respond, and thus the onus is on you. But if the LLM says "Of course you're absolutely correct! Here is why: they shouldn't be yelling at you in the first place," that is an affirmation and support that objects normally wouldn't give. Hell, even a gun, a leading cause of death by action against others - this applies. A person whispering "Should I do it?" to a gun gets no response. But from the AI, "You're absolutely right. Here is why:..." is a level of culpability that can be definitively seen.

Now, should it be seen that way, and what should be done? That's a different discussion entirely. But in this case there's not a lot of room for nuance, at least when it comes to why the AI/company should receive blame alongside the end user, whereas car and gun companies aren't affirming the idea that doing something heinous is okay.

\*Edited to clarify that guns cause death to others instead of just generically causing death, because that's clearly the very most important thing to clarify in an AI debate. Since AI can also affirm someone planning to do death by cancer and stroke on... themselves? I mean, if they have a barrel of radioactive material that they wanted to scatter in a public area, to bring about public cancer, I suppose. Fair enough.
9
non_hero Mar 15, 2026 +3
Guns are not the leading cause of death. United States (2024-2026 estimates):

1. Heart disease
2. Cancer
3. Unintentional injuries (accidents)
4. Stroke
5. Chronic lower respiratory diseases
6. Alzheimer's disease
7. Diabetes
8. Kidney disease
9. Chronic liver disease
10. Suicide
3
hexagonbest4gon Mar 15, 2026 +1
If there is a great deal of harm in letting an unchecked population have free access to something, there should be restrictions on the technology. Gun makers don't get sued after mass shootings, but you don't see guns being offered with five free bullets and more locked behind an account. Meanwhile, everybody and their mums are looking to infuse AI into everything: AI art, AI music, AI accounting, AI healthcare, AI lawyers...

We've seen services like Grok be used to create [CSAM and non-consensual images of naked women by the millions](https://www.theguardian.com/technology/2026/jan/22/grok-ai-generated-millions-sexualised-images-in-month-research-says), but the only restriction put on it was a paywall. We know it's still being used by many users for the exact same stuff, but how many users will we slap charges on before we consider turning off the faucet? How can Canadians force a US private company to shut off the faucet when we won't even have these conversations?

We've seen hundreds of AI images breaking the copyrights of many an artist, company, and copyright holder, and yet there have only been two lawsuits against a few companies for that. We know for a fact that [Meta's AI training destroyed millions of books and stole millions more](https://www.vanityfair.com/news/story/meta-ai-lawsuit), but that hasn't stopped other companies from mass collecting and destroying books. How can we tell private companies to stop burning books when we can't even know what they've stolen?

We've also seen AI chatbots ruin lives and marriages, including the current Gemini case, but [Google was ALSO involved in the Character.AI lawsuits](https://www.yahoo.com/news/articles/google-chatbot-start-settle-teen-163747263.html) that led CHILDREN and teenagers to self-terminate. Five families lost their children in that article, and they WERE sued for it; they just refused to comment or admit any liability. Which is what we've seen across the board EVERY time a company faces a lawsuit, because that's the legally smart move. It's also not very helpful for the rest of us.

How many times should we look at individual cases, as rampant use and adoption across the board continues, before we say enough is enough? Everyone seems to think AI is the next best thing to add to their business, and the consequences are catching up fast. The question is whether we'd actually have functional legislation by that point.
1
madam_thundercat Mar 15, 2026 -4
Charles Manson didn't kill anyone himself. *edit grammar
-4
[deleted] Mar 15, 2026 -5
[removed]
-5
Mrhiddenlotus Mar 15, 2026 +2
Why are you acting like if you ask any LLM "tell me how to kill the most people" it's going to be like "that sounds great! Here's how you do it"?
2
Pardot42 Mar 15, 2026 -7
Okay, Rogan. I'm Just Asking Questions!
-7
Ryuujiend Mar 15, 2026 +89
anyone mind explaining this one?
89
Nintenuendo_ Mar 15, 2026 +417
OpenAI had flagged the shooter's account a week or so before the mass shooting. The shooter was entering prompts related to the murders and the location, and was planning how best to go about the shooting. This was caught and flagged by OpenAI as an imminent threat. The company picked up on it and should have forwarded it to law enforcement as per their own policy, but law enforcement was never notified because it went across a human's desk and they sat on the information.

It came out only a week and a half after the shooting that the shooter's OpenAI prompts were flagged at the company, and only because they were asked directly.

*edit* - I've been told it was back in June when OpenAI flagged the initial prompts
417
KimberlyWexlersFoot Mar 15, 2026 +109
That was in June of last year, not a week prior.
109
Nintenuendo_ Mar 15, 2026 +39
Thank you for saying that, I will edit my post at the bottom
39
mrhorus42 Mar 15, 2026 -83
Why sue the company and not law enforcement then?
-83
Nintenuendo_ Mar 15, 2026 +103
I don't mean to come off as rude, but I explained that above, if you read it before commenting - law enforcement was never told, because humans at OpenAI didn't follow their own policy and forward the information when their system flagged the threat for them. So basically negligence.
103
mrhorus42 Mar 16, 2026 -14
Your third paragraph is bad written and can be understood for both, company or law enforcement. 75 dislikes for asking a question 👍👍👍 What a pile of idiots
-14
Nintenuendo_ Mar 16, 2026 +12
Or..... maybe it's you, but nah, that couldn't possibly be it. By the way, just as a side note, it made me chuckle when I read "your third paragraph is bad written". That honestly made my day. And no, my paragraph could not be misinterpreted like that; there was context left, right, and center. One would have to purposely misunderstand or just want to argue. Have a good one, best of luck to you.
12
mrhorus42 Mar 16, 2026 -4
It actually says it all that you assume I didn't read your comment and conclude I'm commenting negatively, instead of assuming comprehension problems. It's in the speaker's interest to be understood correctly.
-4
mrhorus42 Mar 16, 2026 -7
Again more rudeness? For asking a question…
-7
Nintenuendo_ Mar 16, 2026 +7
Re-read what I said, then what you said. In no way, shape, or form could I have been misunderstood in my third paragraph, so I just assume you're arguing for the sake of arguing because you got downvoted. Your point makes no sense; that's why you got downvoted. But you're hanging around blaming everyone else for your own thought process.

If everyone disagrees with what you're saying, you need to be able to admit you may be wrong, but for some reason you aren't doing that, so yes, you get sarcasm at this point.
7
mrhorus42 Mar 16, 2026
Bro what? Can you please write my original comment here?
0
mrhorus42 Mar 16, 2026
Disagree with a question for clarification? You sniff too much keyboard my warrior
0
mrhorus42 Mar 16, 2026 -1
I misunderstood it, ok. How can you dare to say the opposite? I asked for clarification; your rudeness and superiority are not ok.
-1
Nintenuendo_ Mar 16, 2026 +9
You should have typed "frustration" to describe it, but I'll admit you could use the word "rude" or "petty" to describe my reaction. There is literally nothing to argue about, but you kept posting and then called the thread idiots when it was you who didn't understand; now you fall back to "well how dare you, rude!!" C'mon man, get real. Goodnight, I'm done; you're obviously a victim :/
9
devon_devoff Mar 15, 2026 -149
my brother in christ go read about it yourself and rub those braincells together, I know you can do it
-149
CaffinatedManatee Mar 15, 2026 +82
You didn't read it, did you? If you had, you'd realize the linked article has absolutely no details to offer.
82
chemical_outcome213 Mar 15, 2026 -69
It has a video.
-69
CaffinatedManatee Mar 15, 2026 +54
They literally said "read about it yourself"
54
thisisamessy Mar 15, 2026 +12
I mean. He did also say rub those braincells together. So...
12
lkl34 Mar 15, 2026 +48
Good. Shut down all the AI platforms; mentally ill people should not be using AI at all.
48
Rogaar Mar 16, 2026 +11
This really is a race to the bottom. Soon America will be calling it the War on AI, because everything to them is a war.
11
Professional_Bat9174 Mar 16, 2026 +11
Except our actual wars. We tend to call those things like "Police Actions," "Interventions," etc.
11
Rogaar Mar 16, 2026 +3
Or "Special Military Operations"...
3
durntaur Mar 16, 2026 +1
Getting to the Butlerian Jihad real quick.
1
DamGoodAnimation Mar 16, 2026 +1
Mmm, spice.
1
_Litcube Mar 15, 2026 -9
Blaming AI for this is f****** stunned. There are scores of other root causes that need to be addressed, and we're going to sue the latest trendy scapegoat. What about solving the mental health crisis? How did she get access to those firearms? Why didn't our society put the picture together sooner? RCMP visits prior. But, no. US software company did this. Edit: I'm an idiot and didn't watch the video.
-9
pyrotechnicmonkey Mar 16, 2026 +19
They're not saying OpenAI shot those people, but the company had the legal responsibility to report to law enforcement that the shooter was a clear danger after being red-flagged by the software and by the humans who look over those red flags. They were making specific threats and plans to commit a mass shooting, and higher-ups at the company refused to pass along the information to law enforcement.
19
_Litcube Mar 16, 2026 +12
Holy shit. Human beings knew about the planned shooting beforehand and decided NOT to forward the content? I admit I didn't watch the video, because video and not article. If so, I'm an idiot and will edit my comment to say so. But I can't watch the video right now.
12
filteredshot Mar 16, 2026 +13
I gotta say, it's refreshing to see someone admit they were wrong and change their opinion after being presented with new information. Good for you.
13
[deleted] Mar 15, 2026 -10
[deleted]
-10
MadRoboticist Mar 15, 2026 +8
Why do you think individual states can't regulate AI? In fact, I think almost every state has introduced some form of AI-related legislation in the last couple of years. Some active states, like California, are adding quite a bit of legislation regulating AI companies.
8
[deleted] Mar 15, 2026 -1
[deleted]
-1
MadRoboticist Mar 15, 2026 +7
No idea where you're getting that language, as it's not in the BBB. Trump signed an executive order in December that says something along those lines, but an executive order has absolutely no power to put restrictions on states, so responsible states are continuing to work on regulations.
7
WarOtter Mar 15, 2026 +4
I forget who, but someone tried to slide that rule into the BBB before it left the Senate. They were forced to take it out before it could be passed back down to the House.
4
MadRoboticist Mar 15, 2026 +2
And if I remember correctly it was removed with a 99-1 vote, so pretty much no one wanted it.
2
censuur12 Mar 17, 2026
So, to all the anti-AI nuts here: sit down. This is about how the AI properly identified the imminent risk and notified the people at OpenAI, and those people lapsed in their duty to report it. Nothing here is reflective of some AI issue; get your damn heads on straight.
0
acecombine Mar 15, 2026 -121
every decade has its scapegoat for gun violence...
-121
[deleted] Mar 15, 2026 -209
[removed]
-209
Greencreamery Mar 15, 2026 +48
Yes, thank goodness they gave us MAID. Not sure what that has to do with this case, but I’m very thankful they legalized it.
48
Craico13 Mar 15, 2026 +39
It amazes me that allowing other people the choice to die with dignity upsets some people. Don’t like MAID? *Cool…* Don’t use it and then mind your own business while living your own life.
39
PrairiePopsicle Mar 15, 2026 +17
It's really common that those against it are, in general, right wing. They don't want to discuss improving conditions, society, rules, regulations, the things that drive people into situations where they would want MAID but shouldn't really have to access such a thing. The existence of MAID cuts the legs out from under the conservative worldview (hierarchy as natural and "good") in extremely fundamental ways.
17