News & Current Events Apr 29, 2026 at 2:48 PM

Families of Canadian mass shooting victims sue OpenAI, CEO Altman in U.S. court

Posted by Madshibs


Tumbler Ridge families seek US$1 billion in OpenAI lawsuit
CP24
The American lawyer representing some victims of the Tumbler Ridge, B.C., mass shooting says they will likely be seeking more than US$1 billion in their lawsuit against OpenAI and its founder Sam Altman.


76 Comments

MentalDisintegrat1on Apr 29, 2026 +679
Altman is trying to get a bill passed to give OpenAI immunity from damages it causes. They know how bad it can be and don't care.
679
[deleted] Apr 29, 2026 +144
[removed]
144
MentalDisintegrat1on Apr 29, 2026 +100
I believe he wants immunity as well. Dude's a sociopath; he doesn't care how much pain and harm his plagiarism machine causes.
100
[deleted] Apr 29, 2026 +26
[removed]
26
MentalDisintegrat1on Apr 29, 2026 +27
I mean, he's already had two attempts on his life, and Iran has kill contracts on every tech company helping the military. I'm not wishing for his death, but it's a very real possibility someone successfully 187s him.
27
The100th_Idiot Apr 29, 2026 +26
Something something, consequences of your own actions, something, personal responsibility, blah blah blah. Maybe he should ask his chatgpt how to get people to like him more better 🤔 🤷
26
Takkarro Apr 29, 2026 +2
Thought the term was 86 him? Never heard 187 before
2
MentalDisintegrat1on Apr 29, 2026 +18
People get confused and think 86 means kill. It doesn't; it means trash it. 187 is the code for murder/homicide.
18
Takkarro Apr 29, 2026
I see, ya learn something new every day 🙂
0
thederevolutions Apr 29, 2026 -5
67 (six seeeven) started off as police code for a dead body and is now every kids favorite phrase lol
-5
Thats_my_face_sir Apr 30, 2026 +2
Stahp it right now. That's not what it means. This comment is dubiously coy. No one thought that about 86 until the recent Trump controversy. If you've never heard 187, you don't listen to hip hop, don't watch any true crime, or are younger than 18.
2
Takkarro Apr 30, 2026 +1
Umm, no clue what you're talking about. But before this comment I'd never heard the term 187; I had only ever heard the term 86. My dad would use it from time to time, hence why I know it.
1
KDR_11k Apr 30, 2026 +1
Especially since the entire Rationalist cult thinks he's going to destroy humanity.
1
Cynical_Classicist Apr 29, 2026 +1
Well, the US elects such people to power!
1
im_just_thinking Apr 29, 2026 +5
But companies are people in the US, don't forget that
5
[deleted] Apr 29, 2026 +3
[removed]
3
im_just_thinking Apr 29, 2026 +3
Well no, they make the company immune so the people are immune. And companies are allowed political donations just like people, while having much bigger capital. Stop being objective and reasonable, we are talking about late stage capitalism here
3
hyper_espace Apr 29, 2026 +83
Altman is a complete sociopath. Aaron, one of listnook's founders, knew.
83
Cynical_Classicist Apr 29, 2026 +20
Seems that the most powerful people are.
20
unholyswordsman Apr 29, 2026 +12
That's because no actual human with a moral compass would be so indifferent to all the destruction they cause. How ironic they have all that wealth but are completely and utterly morally bankrupt.
12
Kalslice Apr 29, 2026 +12
The only way to become so powerful is to have a total disregard, or active contempt, towards the people harmed by your "life-changing products".
12
hyper_espace Apr 30, 2026 +1
Because these people are unscrupulous, they don't care who they hurt as long as they can fulfill their desires or ambitions. They only care about themselves. Most people are scrupulous, or have morals, and the people at the top know that; that's why they can guilt-trip and manipulate others. If everybody were like Altman, mankind would have vanished long ago. The question is, how do we prevent people like that from rising to power?
1
Loud-Commercial9756 May 1, 2026 +1
> The question is, how do we prevent people like that from rising to power?

Given that they can spend billions of dollars figuring out how to prevent us from preventing people like them from rising to power, while all we can do is sit here and talk, that's a pretty challenging question to answer.
1
[deleted] May 2, 2026 +1
[deleted]
1
Loud-Commercial9756 May 2, 2026 +1
Consumerism is ingrained in our culture to the point of being the dominant religion and ethical system, so it will be interesting to see how/if that changes.
1
thederevolutions Apr 29, 2026 +3
Can you say more about that? What do you mean?
3
Thief_of_Sanity Apr 30, 2026 +3
He sexually abused his sister for one.
3
thederevolutions Apr 30, 2026 +3
I looked it up, and right before he died he warned all of his friends never to trust Altman, that he would do anything. They were in the same Y Combinator class.
3
Thief_of_Sanity Apr 30, 2026
Who died?
0
Cynical_Classicist Apr 29, 2026 +13
The rich always wanting to be above the law.
13
Salamok Apr 29, 2026 +21
I have a new take on this... If your company is too big to fail, then when the shit hits the fan, your CEO should go to jail.
21
MentalDisintegrat1on Apr 29, 2026 +8
I agree. If companies are considered people, they should face the legal troubles people face, including killing the company.
8
KDR_11k Apr 30, 2026 +6
IMO companies should be subject to trade bans if anything they do would normally carry a prison sentence. Yeah, it's ruinous, but that's what prison is for humans.
6
the_last_0ne May 1, 2026 +1
Killing the company doesn't do shit. Purdue just got approved to dissolve. The Sacklers got to keep billions off addicting people and lying about it.
1
homesickpluto Apr 30, 2026 +3
It's true because it rhymes
3
Morat20 Apr 30, 2026 +7
The entire corporate world would have a collective o***** over AI being granted immunity, and would then promptly run every decision through an AI designed to generate *exactly* the answer they want, so they can basically ignore every law by pointing to "the AI did it, and it's immune". I mean that's the *goal*. Absolute freedom from liability, from consequences, from law.
7
TwoPoundzaSausage Apr 30, 2026 +1
That works both ways. If an individual were to incorporate, which anyone can do for a rather small fee, they could create an AI agent that actively hacked into bank accounts held by these companies, then rob them blind. They would have immunity because "it was the AI doing it."
1
Thief_of_Sanity Apr 30, 2026 +3
You think it would work for individuals and not just corporations? Oh boy.
3
Seastep Apr 29, 2026 +1
Oh they clearly care.
1
GreyBeardEng Apr 30, 2026 +1
Criminal, but not civil. Then you sue and have your lawyer request a jury; then they will try to settle out of court.
1
The_Shryk Apr 30, 2026 +1
Cyberpunk future here we come!
1
MentalDisintegrat1on Apr 30, 2026 +1
Where's Johnny Silverhand when you need him.
1
ianc1215 May 4, 2026 +1
I'm curious, what would the reasoning be? Are there any other industries or technologies that are immune from accountability for the damage they cause?
1
Warcraft_Fan Apr 29, 2026 +1
Then sue Altman for suppressing the right to sue AI for crimes?
1
alvinofdiaspar Apr 29, 2026 +138
The damning thing isn’t that OpenAI didn’t act and contact the authorities alone - but that they were warned by their own safety team and chose to not only override it, but actively facilitate the users’ continued participation in the system by circumventing the ban. Would I trust the US justice system? Nope. Much better article in Ars Technica: https://arstechnica.com/tech-policy/2026/04/school-shooting-lawsuits-accuse-openai-of-hiding-violent-chatgpt-users/
138
luluhouse7 Apr 29, 2026 +69
> Leaders rejected the safety team’s urgings and declined to report the user to law enforcement. Instead, **OpenAI simply deactivated the account, then quickly followed up to tell the shooter how to get back on ChatGPT to continue planning** by signing up with another email address, the lawsuits alleged.

This is the really egregious part and is exactly why the law needs to hold the people who made the decisions accountable instead of just slapping companies with fines they can ignore.
69
Unfair_Web_8275 Apr 29, 2026 +165
I have no idea how such a lawsuit would work, but if these companies are going to replace human interactions and also sell themselves to law enforcement agencies, there are questions that need to be asked.
165
MikuEmpowered Apr 29, 2026 +46
The system is designed to engage the user. The lack of safety features when they know the possibility, because they have data on it, and choose not to act can be argued as gross negligence. We KNOW they can place limitations on generated content because they have been placing safety rails. The real challenge here is to prove that they knew about the risk and chose to ignore it.
46
Unfair_Web_8275 Apr 29, 2026 +9
I 100% agree, here's hoping that OpenAI (and other companies) have to answer some questions.
9
[deleted] Apr 29, 2026 +3
[deleted]
3
rockmasterflex Apr 29, 2026 +3
OTOH it can definitely be argued that OpenAI (and any other AI) have a very strong responsibility to have their conversational AI discourage violence in all forms, and not doing so consistently is negligent. So even if ChatGPT didn't egg the shooter on, if it failed to intervene in any meaningful way, including red flagging the person's chats for human review.... That could be construed as negligence.
3
Daren_I Apr 29, 2026 +62
> The lawsuits, filed in federal court in San Francisco, accuse OpenAI leaders of not alerting police because it would have exposed the volume of violence-related conversations on ChatGPT and potentially jeopardized the company’s path to a nearly $1 trillion initial public offering. It's staggering to think that even if each life lost was awarded $1 billion each, that's just a bad fiscal quarter for the company.
62
Holmes419 Apr 29, 2026 +30
They produce about $2 billion in revenue monthly and are still expected to have net losses around $14 billion this year, I believe. They are targeting 2029/30 for profitability. They exist solely on investment funding rounds, and major law changes can jeopardize that overnight, so expect them to throw everything and the kitchen sink at these and any similar lawsuits that pop up. Some lawyers (and probably politicians) are going to be getting some really nice new boats out of this.
30
TwoPoundzaSausage Apr 30, 2026 +3
> They are targeting 2029/30 for profitability.

That's a long time to wait knowing that the public is souring on AI with each passing day.
3
Thief_of_Sanity Apr 30, 2026 +2
Yeah I hope they fail
2
alvinofdiaspar Apr 29, 2026 +3
Oh they will just have a chat with the current POTUS and a deal will be made.
3
maydock Apr 29, 2026 +10
let’s see the chat logs
10
CP_Chronicler Apr 29, 2026 +24
They literally charged the firebomber of Sam Altman’s house with attempted murder even though he only hit a gate. If that is the result, then Altman should absolutely be charged with murder here and sent to prison.
24
MrBahhum Apr 29, 2026 +10
AI psychosis is immeasurable. All data centers are resource sinks.
10
Thisfuggenguy Apr 29, 2026 +5
The same sam who tried to r*** his own sister?
5
Cynical_Classicist Apr 29, 2026
That man really is a monster.
0
cyclemonster Apr 29, 2026 -31
This is bad and tragic, of course, but why would OpenAI have an affirmative obligation to do those things in the absence of a law that compels them to? If some stranger on the bus confesses to me that they want to do a mass shooting and I keep that to myself and then that person actually follows through, I don't owe any victims any damages for that.
-31
Lurkerbot47 Apr 29, 2026 +22
Your comparison is a bad one. The issue here is that chatbots play an active role in aiding the perpetrators, either by giving direct guidance or by validating their current mental state. Using your own example, it would be like if that stranger repeatedly told you they planned to shoot people and asked for your support and advice.
22
whatyousay69 Apr 29, 2026 -5
> The issue here is that chatbots play an active role in aiding the perpetrators either by giving direct guidance or validating their current mental state.

Isn't the lawsuit, per the article, specifically for failure to notify the police? I don't see any claim of OpenAI aiding the perpetrator being brought up.
-5
Lurkerbot47 Apr 29, 2026 +3
That is likely the logic behind why failure to notify is the cause for the lawsuit. If not, what would they be suing for?
3
cyclemonster Apr 29, 2026 -16
They kicked her off the platform more than six months before the shooting!
-16
-ohhhman- Apr 29, 2026 +23
And after being warned by the security team, OpenAI decided to reinstate the user.
23
ebulient Apr 29, 2026 +6
Corporations vs an individual - you don’t have the same civil responsibilities
6
cyclemonster Apr 29, 2026 -7
Where can I find a list of what these "civil responsibilities" are by jurisdiction? If I form a one-person corporation, do I as an individual suddenly gain these responsibilities as a side-effect?
-7
SavathunTechQuestion Apr 29, 2026 +11
> If I form a one-person corporation

Assuming you're not trolling: in the US, most people form LLCs to protect themselves from being sued personally for mistakes the company makes. If you were starting a business, you would have to look at corporate negligence and see what kind of stuff would be applicable. For example, the McDonald's hot coffee case, which gave someone third-degree burns and was downplayed and twisted in the media as someone being silly, is an example of corporate negligence. They had multiple reports that it was too hot and giving people severe burns.
11
cyclemonster Apr 29, 2026
Nobody who saw her burns and the skin grafts she required thought of that case as being frivolous.
0
SavathunTechQuestion Apr 29, 2026 +6
But if you were in charge of running the bus, and your bus security team flagged someone who had been making remarks that they were going to do a mass shooting, but you overruled the security team and let the person get a new bus account after being banned, then that falls short of what a reasonable professional would do, which is negligence at the least. The Ars Technica article goes into more detail on how the company mishandled this: https://arstechnica.com/tech-policy/2026/04/school-shooting-lawsuits-accuse-openai-of-hiding-violent-chatgpt-users/
6
cyclemonster Apr 29, 2026 -13
Negligence requires a duty of care; a doctor has a duty of care towards their patients, while random internet apps do not have a duty of care towards their users. A difference of opinions with regards to a policy tradeoff is almost certainly not negligence.
-13
alvinofdiaspar Apr 29, 2026 +8
You try that argument with trafficking, CSAM, etc.
8
cyclemonster Apr 29, 2026 -1
[There is a law that creates an obligation on random internet apps to report CSAM](https://laws-lois.justice.gc.ca/eng/acts/i-20.7/page-1.html). There is no obligation in the absence of such a law.
-1
Smooth_Storm_9698 Apr 29, 2026
Huh... wrong CEO, huh
0