News & Current Events Apr 25, 2026 at 2:01 AM

Altman apologizes after OpenAI failed to alert police before Tumbler Ridge killings

Posted by Express-Citron-6387


AP News
OpenAI's head, Sam Altman, has apologized for not alerting law enforcement about the online behavior of a person who killed eight people in Tumbler Ridge, British Columbia.


172 Comments

yourlittlebirdie 18 hr ago +2188
“We’re deeply sorry that you found out about our gross negligence.”
2188
Express-Citron-6387 18 hr ago +329
Well said.
329
EnderWiggin07 18 hr ago +405
I don't think we should teach the robot to report people to the police
405
coldwildfl0wers 17 hr ago +122
No we shouldn't, but there has to be some human oversight. If there isn't a system to flag concerning prompts for a human to review, that's unacceptable, and a change has to happen if we are going to make this technology as widely available as it is
122
TonsilStoneSalsa 17 hr ago +175
I disagree. It feels like you're just laying the groundwork for law enforcement to monitor everything. Like, yes, maybe a traffic camera using AI could theoretically prevent a murder if it alerted law enforcement to someone's erratic behavior... but we're headed in the direction of complete surveillance.
175
Jmart1oh6 16 hr ago +36
Is there any guise of privacy when interacting with these LLMs? If you're using one, you are being watched; it's just a matter of whether you're doing anything interesting enough for them to flag you. IMO, if you're planning a shooting like this, whether you're talking to someone in person or an LLM, there's a responsibility to let authorities know. Freedom is always in tension with itself: if there's complete personal freedom, then you must always be on alert for violence, theft, etc., and there's a sliding scale of sacrificing personal freedoms for societal safety that brings different types of freedoms along with it. I personally want to live with the type of freedom where what I say to an LLM can be run through a filter if it means that things like this could be stopped on occasion.
36
SlitScan 16 hr ago +31
thats the fun part, you dont have to do anything interesting, they hallucinate all the time. it can make up parts of your side of the conversation just as easily as its own.
31
fresh-dork 15 hr ago +2
any sane sort of system won't allow the LLM to do more than say "this user in this time frame"
2
SlitScan 15 hr ago +16
is 40% of all US GDP growth being AI investment sane? there will be no sanity in AI's adoption, the rich have bet the farm on it. the government will be forced to adopt it everywhere even if it recommends killing 10% of the population every year.
16
LordChichenLeg 12 hr ago -16
About 40% of US investment was put into the internet boom as well. The internet has only been able to exist due to rich people betting the farm on it and forcing governments to adapt to it. Hell, you'll find a large enough portion of the internet that in one way or another would recommend you kill 10% of a population. Other than your strawman, what are you actually arguing? This is just a normal process in a capitalistic system for a tech that has already shown promising results to change our way of life.
-16
chilehead 14 hr ago +3
Mine only sends me SSNs.
3
brando_calrisian 8 hr ago +7
This is the same terrible argument as ‘Well if you don’t have anything to hide, why should you care?’. The commenter you’re replying to is 100% right. How will state misuse and the end of free speech weigh against occasional crimes being prevented? You, in my opinion, are valuing all the wrong things. Like Ben Franklin said, if you sacrifice liberty for security, you deserve neither and will lose both
7
EliteCloneMike 6 hr ago +1
Agreed. Privacy should be a basic human right, both online and in real life. While there is a slim chance that reporting this conversation could have prevented tragedy, the bigger tragedy is over-heightened surveillance.
1
coldwildfl0wers 17 hr ago +15
Yes, it is the company’s fault. I never said it wasn’t. I’m a huge proponent of crisis response before law enforcement response, and there are some places where intervention (either crisis response or law enforcement) is necessary. I never said anything about reporting it to the police, just reporting it in general. It’s just strange that an entire company knows about this, banned the account, but did nothing else. AI is so unregulated and I feel like I hear these stories so often
15
TonsilStoneSalsa 16 hr ago +1
Yeah. We're on the same page.
1
coldwildfl0wers 16 hr ago +1
I figured haha
1
XB0XRecordThat 9 hr ago +3
Look, we just need everyone to wear a body camera on them at all times that the government can monitor
3
Eledridan 16 hr ago
Basically Psycho-pass.
0
wormm99 14 hr ago -1
Hey guys! The AI would have stopped a mass murder.
-1
SyntheticGod8 15 hr ago +9
The question becomes: what counts as a concerning prompt? Asking for bomb-making instructions could be one, but what if I really am writing a thriller novel about a mad bomber and I really am using AI to help give me ideas and research? The worst crime I'd be guilty of is being a bad author and being naive for asking an AI for that info. And I completely understand that the pretext of novel-writing might be used by bad actors to get around blocks on that info. Still, should all my chat logs be sent to the FBI to decide if they're going to f*** up my life because they don't like the nature of a writing project? A similar question arose when libraries began incorporating computers. Should librarians report "suspicious" book borrowing or share specific, identifiable lending data with the authorities? Again, books on bombs, on communism, about certain subjects, by certain authors, etc. Which puritans did we want, or feared would be put, in charge of deciding that for everyone? I get that people want to get ahead of tragic crimes occurring, but we can't be going after people and harassing them for thoughtcrime because a computer system or analyst thinks they *might* do something bad.
9
coldwildfl0wers 14 hr ago +4
True! That's why we need proper regulations for things like this before it gets rolled out. These are questions we should have already had answers to before implementing it in everything. Also, there is a distinction between a thought crime and an actual crime. Threats of violence are considered a felony if they are posted on the internet. There is a clear double standard here because AI companies are being favored by the US government where they operate. The question also becomes: who is responsible when these attacks are realized? Is it better to try and prevent crime or just punish it once it happens?
4
Drostan_S 17 hr ago +12
It's like saying someone's fist should report them to the police if they're struggling with violent ideation. Yeah, maybe it should have told him to get help instead of detailing how he could accomplish his goal. The problem is LLM sycophancy, not failure to report every thought crime.
12
WhiggedyWhacked 17 hr ago +22
Read the f****** article before stating a brain-dead take. Human oversight was involved. Human oversight decided not to take action. This type of comment is indicative of the problem. You don't even know what you're commenting on, yet you comment out of pure ignorance.
22
SlitScan 16 hr ago +3
and now the LLM is scanning it and doing the same thing
3
FiftyShadesOfGregg 16 hr ago +7
Hey so to make sure I’m following, the society you *want* is one in which a private company is the source of and tracks the citizenry’s data, and flags accounts using an algorithm it’s designed that identifies suspicious activity. An employee of that company looks at flagged accounts. The private company with the mass surveillance apparatus works with state law enforcement to report activity it has deemed suspicious activity, and the police use that data against its citizens to make arrests of those people before they commit crimes. That’s what should have happened here and should happen in the future?
7
Melonary 16 hr ago +21
This private company is already doing that. The difference is they do it for money and don't do it when they could save lives. Humans overseeing OpenAI KNEW about this, there's no digging or extra monitoring required. They knew. Do you think it's wrong to report explicit child abuse material to law enforcement, or to require that? Again, this isn't mass surveillance, at least not in the sense of asking for more. You ARE being mass surveilled right now. It's just that the surveillance - which, to be clear, ALREADY EXISTS - is for profit and to push propaganda only. Humans at this company were aware of this situation and decided to do nothing. It's fine if you still disagree with all this, but pretending that this company isn't surveilling is both laughable and false.
21
WhiggedyWhacked 15 hr ago +10
Exactly. Thank you for explaining it better than I could.
10
WhiggedyWhacked 15 hr ago +9
That's your take? That's wild. These LLMs exist. I don't like that they do and I completely disagree with the concept of LLMs. I'm commenting on someone saying that human oversight should be involved. I'm not defending these fuckers. I'm saying that human oversight was involved and these fuckers chose to ignore the red flags. To be clear, you are not following.
9
FiftyShadesOfGregg 15 hr ago -4
I *am* following. My comment included the human oversight. Your argument is that the humans overseeing the data surveillance should flag to the police when a user’s activity suggests a crime may occur, right? You want the humans at the tech company tracking our data to be feeding that to law enforcement.
-4
coldwildfl0wers 17 hr ago -7
Human oversight was not involved in the way it should've been. My point is there should be safeguards in place, and companies like OpenAI are not held accountable for stuff like this. People lost their lives, and the account got banned, so it clearly was worrying enough to warrant that. But there is no requirement for them to take any action at all, and that's not safe. If someone posts a threat online it is taken seriously, so why is it brushed off if you say it to a chatbot instead?
-7
WhiggedyWhacked 16 hr ago +4
What do you mean that human oversight was not involved in the way it should've been? Actual real human beings read through the chat logs, conferred amongst themselves, and decided not to intervene. How else do you think humans should be involved? What else could be done?
4
coldwildfl0wers 16 hr ago +3
I think there should be a requirement to report if something is as serious as this; on a company level, there isn't much accountability. There have been a lot of stories of harm being discussed with an LLM and then those ideas actually being carried out. I'm not sure how that oversight would work, but I think it would be a good idea to require some sort of reporting for situations like these.
3
zizou00 15 hr ago +4
At this point the human oversight we're all asking for should be over Sam Altman's shoulder, checking his prison cell every morning. It's wild that you can be in charge of a product that has led to so many user deaths and not be in any way criminally culpable.
4
FiftyShadesOfGregg 16 hr ago +7
What you’re suggesting is that a private corporation with access to your data tracks the entire citizenry and reports suspicious activity to the police, to make arrests for crimes *before* they happen. That’s a surveillance state and the exact plot of like a dozen dystopian sci fi films. It’s not good.
7
rgbhdmi 9 hr ago +1
This is a good discussion. As AI becomes more powerful, it becomes more dangerous. The same thing already happened with weapons technology, and the response to that was to limit access to “need to know“ and to monitor society for anyone trying to utilize that technology outside of government control. I’m afraid that the surveillance state you fear is inevitable, along with limited access, as we are already seeing happen with Anthropic. I don’t know what the solution to this quandary is, but at least public awareness of these issues is finally developing.
1
coldwildfl0wers 16 hr ago -2
Nope, I'm suggesting there should be a legal requirement to report it, period. Again, I never said anything about police. I think it should be reported so it can be investigated, but I'm not saying the action itself is a crime (although online threats of violence are felony crimes, so maybe it is?). Which is another point of mine: if online posts of violence should be reported, then why are things like this not?
-2
Melonary 16 hr ago +6
It actually is a crime at a certain point, once you begin to prepare, gather weapons, and indicate your plan to act. Not sure exactly where that boundary is, but it is still a crime.
6
psychicsword 15 hr ago +3
I don't want humans monitoring all of my activity on the off chance I may be having mental challenges and it could be an early warning sign so they can call in guys with guns who are more likely to kill me than help me.
3
coldwildfl0wers 15 hr ago -4
Again, nothing was said about police. I believe in crisis response not law enforcement response.
-4
psychicsword 13 hr ago +5
We don't have many crisis response teams, and most areas don't have anything other than the police. So how many false alarms do you think would happen just to report this case? Who do you think would have actually responded to this call if they did report it? So if you are going to get upset by inaction here, then it needs to be based on the reality today, not what could be with major government and societal change. In this case it likely would have meant this guy and many innocent people getting police sent to their house.
5
dragoon0106 15 hr ago +4
Okay so who do you want them to contact? What phone number should they be dialing?
4
coldwildfl0wers 14 hr ago -1
Idk, and I feel secure in that. I don't have an answer for you because our society isn't structured around crisis response at the moment. But it should be, and there should be places to contact that are not police when stuff like this happens.
-1
dragoon0106 9 hr ago +2
I mean, I don’t disagree with you. But that’s not the world we live in, so if you make those requirements, the police are the ones getting called.
2
burblity 12 hr ago +2
This argument is like saying we should have a dictatorship, but just the benevolent kind. The entire point of privacy rights is that you can't rely on only the "right people" to get the information and only do the "right thing" with it.
2
Dalisca 8 hr ago +1
That's apparently how we double-tap bombed the Iranian girls' school. Old intelligence referenced by AI, humans approved it without verification, and no one is held accountable for that "mistake".
1
Express-Citron-6387 17 hr ago +13
AIltman and team would be on the blower el pronto if a user was a suspected insurgent or terrorist or possible Russian spy, or said they wanted to go after politicians or CEOs of healthcare insurance companies or CEOs of tech companies. That would mean a class system of reporting.
13
FiftyShadesOfGregg 16 hr ago +8
Thank you for some sanity. Remind me when we started demanding a surveillance state?
8
truthdoctor 14 hr ago +1
Altman, Zuckerberg and many others can and already have been reading your chats. So there goes your privacy and welcome to the tech bros surveillance state with NSA backdoors. So either these companies are breaking privacy laws with impunity or the laws don't protect your privacy at all. Either way, if they are already violating your privacy, the minimum they should be forced to do is stop your child from being gunned down at school. That's my controversial take, I guess.
1
airship_of_arbitrary 15 hr ago +2
That's not the issue. We've taught the robot to be the best yes-man ever, in order to chase engagement. To tell mentally disturbed people that they're actually right, that they need to take action, and to drive people to murder-suicide.
2
haveanairforceday 17 hr ago +1
We absolutely should. Wouldn't you expect any other business to report suspicious behavior? Access to AI is not a free speech issue. If you go to a gun store and ask questions about shooting through school doors, they should report you. If you go to a security company and ask them what the vulnerabilities are at the local sports arena during major events, they should report you. What's the difference if you ask a robot instead of a human?
1
EnderWiggin07 17 hr ago +14
Because they'll f*** it up
14
haveanairforceday 17 hr ago -3
That's their problem to fix. That's part of the liability of their absurd business model. They will figure out how to do business without harming people, or they will be liable for harming people and will end up shut down. That's how business works
-3
culturedgoat 17 hr ago +3
Yeah no thanks
3
thederevolutions 17 hr ago +4
What harm was done by the app in this situation? I couldn’t find that part.
4
OlderThanMyParents 16 hr ago +7
Did you read the article? The problem wasn't the app. The problem is the humans who read the interactions and said "not our problem."
7
Northeast_Mike 10 hr ago +2
It's actually worse than that. They banned the shooter's account after identifying it "using abuse detection efforts for 'furtherance of violent activities.'" But didn't consider it problematic enough to alert authorities. Perhaps the rule should be if you consider it so dangerous that you don't want it on your platform, then you should report it. And (to address other comments) I think we (USA) should start using crisis intervention teams that are better prepared to deal with pre-violent behavior than the police sometimes appear to be.
2
OlderThanMyParents 16 hr ago +4
This is more like going into a gun store and asking which guns would be best at penetrating the doors of a school. "What sort of ammo should I be using if I want to make sure to be able to penetrate steel doors and still have enough power to stop small humans, say, under 100 pounds?"
4
haveanairforceday 16 hr ago +3
That's exactly what I mean. That business has a moral and probably legal obligation to report what appears to be a threat of violence. Other comments are saying that conversations with AI are protected by the 4th amendment. That conversation with the gun store clerk would 100% be reportable to the cops. I don't see how talking to an AI is any different
3
SlitScan 16 hr ago +1
writers of movie scripts take note. and dont use AI to plan your next D&D campaign
1
Harley2280 17 hr ago +7
>What's the difference if you ask a robot instead of a human? One is a person and the other is property. Do you think what people write in word processing apps should be reported to the police? Should phone companies be reporting phone calls to homeland security? It's an insane privacy violation. You're right that what you're talking about isn't a First Amendment issue. It's a Fourth Amendment issue.
7
haveanairforceday 17 hr ago +5
Writing something in Microsoft Word is like writing in a journal you own. Asking AI something is like writing a letter, mailing it to a company, and waiting for them to mail you something back. Do your 4th amendment rights prevent a hotel from reporting if you write something suspicious in an email to them, like "I stayed in your room and lost my bag of roofies, can you tell me if you find them?" No, they do not. Your 4th amendment rights protect things you choose to keep private, not things you choose to send out to others
5
Harley2280 16 hr ago -5
>Writing something in microsoft word is like writing in a journal you own. You must be living in 1992.
-5
haveanairforceday 16 hr ago +2
What are you doing in Microsoft Word? I can't see a way of it being different than writing your own document. Cloud-based storage? OK, so you are keeping your notebook at a storage unit. That doesn't mean it's not private. But sending something out to a business (prompting AI) ISN'T private
2
SlitScan 16 hr ago +1
you think word is private? read the EULA
1
haveanairforceday 16 hr ago
We're specifically talking about sharing your info with the government. Is that something that Microsoft will do without a warrant?
0
SlitScan 15 hr ago
well ya duh, they do that all the time. have you been living in a cave for the last decade?
0
truthdoctor 14 hr ago +1
If there is an indication that there is an imminent threat to people, they would be negligent not to report. As healthcare professionals we are required to report this type of threat even in a field with strict privacy regulations.
1
likwitsnake 17 hr ago +13
It's like the South Park BP joke: 'We're sorrry'
13
COMM_NTARIAT 9 hr ago +3
"Our Future Crimes Division is undergoing an examination to locate and eliminate similar procedural defects."
3
IamTheEndOfReddit 14 hr ago +6
Tech companies are now law enforcement? There are many other things that went wrong, that failed this family. Birth control, for starters; having a child at 21 is barely beating teen pregnancy. Her school didn’t report her to the police. If police want them to be mandatory reporters so that crime can be reduced, it’s their job to ask. GPT is just the boogeyman here
6
Daemnai 8 hr ago +1
South Park "we're sorry"
1
shouldbeawitch 8 hr ago +1
Can Sam be sued for negligence?
1
Macrieum 7 hr ago +1
I'm sorry for the consequences, not my actions
1
truthdoctor 14 hr ago +1
If there is an indication that there is an imminent threat to people, they must report it. If they are already violating your privacy, the minimum they should be forced to do is stop your child from being gunned down at school. It is infuriating to see tech companies pushing children to suicide/homicide and failing to prevent murders while facing zero consequences.
1
Zestyclose_Use7055 17 hr ago -8
They’re doing better than most social media companies by at least catching it in the first place and barring the user from their product months ahead of the attack, so it could not be used to aid in their violence. Although instead of informing law enforcement, what that mainly accomplished for OpenAI was a head start on planning their PR management and damage control.
-8
Express-Citron-6387 18 hr ago +390
The teen mass murderer account was banned several months before the attack in Tumbler Ridge, British Columbia. "On Feb. 10, police say an 18-year-old alleged shooter, identified as Jesse Van Rootselaar, killed her 39-year-old mother, Jennifer Jacobs, and 11-year-old stepbrother, Emmett Jacobs, in their northern British Columbia home before heading to the nearby Tumbler Ridge Secondary School and opening fire, killing five children and an educator before killing herself."
390
Melonary 16 hr ago +62
She made more accounts after being banned. That's also in the article you posted. Edit: Apologies, I reread it and I don't see it now? Not sure if this was edited or I'm confused and read another article, I thought I only read this one earlier today - weird, sorry about that, not sure if I'm misremembering or it's no longer in this article hours later? Either way, here's another source, would have added it in the first place if I hadn't thought it mentioned in this one as well: *"Artificial intelligence firm OpenAI says the shooter involved in mass killings in Tumbler Ridge, B.C., got around a ban on her problematic use of ChatGPT by having a second account.* *The revelation came as the firm outlined a series of “immediate steps” it would be taking in response to the killings.* *OpenAI vice-president for global policy Ann O’Leary says the company only discovered the second account after Jesse Van Rootselaar’s name was announced by RCMP.* *She says the shooter who killed eight people and then herself on Feb. 10 somehow evaded systems to prevent banned users from creating new accounts, and Van Rootselaar’s second account was shared with law enforcement upon its discovery"* [https://vancouver.citynews.ca/2026/02/26/openai-tumbler-ridge-shooter-second-chatgpt-account-after-ban/](https://vancouver.citynews.ca/2026/02/26/openai-tumbler-ridge-shooter-second-chatgpt-account-after-ban/)
62
lrkzid 15 hr ago +6
Where does it say that? I don’t see it.
6
Melonary 15 hr ago +9
Apologies, I reread it and I don't see it now? Not sure if this was edited or I'm confused and read another article, I thought I only read this one earlier today - weird, sorry about that. Either way, here's another source mentioning that: *"Artificial intelligence firm OpenAI says the shooter involved in mass killings in Tumbler Ridge, B.C., got around a ban on her problematic use of ChatGPT by having a second account.* *The revelation came as the firm outlined a series of “immediate steps” it would be taking in response to the killings.* *OpenAI vice-president for global policy Ann O’Leary says the company only discovered the second account after Jesse Van Rootselaar’s name was announced by RCMP.* *She says the shooter who killed eight people and then herself on Feb. 10 somehow evaded systems to prevent banned users from creating new accounts, and Van Rootselaar’s second account was shared with law enforcement upon its discovery"* [https://vancouver.citynews.ca/2026/02/26/openai-tumbler-ridge-shooter-second-chatgpt-account-after-ban/](https://vancouver.citynews.ca/2026/02/26/openai-tumbler-ridge-shooter-second-chatgpt-account-after-ban/)
9
lrkzid 14 hr ago +1
Thanks! I read the article three times and I just thought I was missing it!
1
Jay__Riemenschneider 7 hr ago +1
A lot of these sites just IP ban now but don’t realize or care that IPs aren’t static anymore.
1
TopComprehensive8569 18 hr ago +749
He doesn't care. OpenAI will be used for control against the populace.
749
tb30k 17 hr ago +74
And learn about human behavior. I bet they wanted to know if she was going to do it or not. Great data.
74
JustHereForCookies17 9 hr ago +11
I'm watching Elementary right now & just hit a story arc where a tech mogul is using his social media platform to pinpoint and kill people who might commit mass killings like this.  He's even working with the NSA.  I'm only a couple episodes into the storyline, but the tech dude is NOT portrayed as a hero. Interesting parallel.
11
mrflarp 16 hr ago +32
Yep. The apology isn't sincere. It's just setting the stage for future sales pitches, where they'll market OpenAI for mass surveillance under the guise of preventing future shootings.
32
rajinis_bodyguard 12 hr ago +2
He should apologise for running iris surveillance on civilians via Worldcoin too, looks like another Elon Musk in the making
2
Homeless-Coward-2143 9 hr ago +3
Elon at least goes in a K-hole and gets so fucked up he can't do anything for days at a time. Sam is much worse than Elon.
3
matthra 17 hr ago +151
This will be used as an argument for accepting more surveillance.
151
EndPsychological890 14 hr ago +17
They want to use this for pre-crime. Minority Report. There’s the issue of the LA fires guy too. It won’t be much longer.
17
The100th_Idiot 7 hr ago +1
Pre-crime, thought-crime. We're going to be praising big brother Altman, Thiel, Musk and Zuck, just so we can eat.
1
Interesting-Music439 17 hr ago +32
Almost like that was the plan
32
Jilks131 15 hr ago +2
And people here are arguing that AI companies should be held responsible lol it’s literally insane
2
ParameciaAntic 18 hr ago +370
The important part is that they are admitting that they can and *do* track user activity.
370
Fract_L 17 hr ago +92
And he is sorry they don’t directly report more people to their respective state-funded police. Presumably he will report more people before “next time”
92
Natural-Potential-80 17 hr ago +4
I mean, if someone is asking about school shootings, shouldn’t they be reported? That’s not really a controversial stance. Edit: make them mandated reporters. If AI can advertise therapy bots and the like, why are they subject to fewer rules than humans?
4
afoxboy 17 hr ago +41
there's no perfect scenario where u get serial killers reported without gross invasions of privacy for the 99.99999% of the rest of society, and that's just as a baseline. the unfortunate truth is that there are massive incentives to not just spy, but use that surveillance for profit.
41
Melonary 16 hr ago +10
That's literally what's happening. Why do you think they're doing it, because they like you? For the good of humanity?
10
Natural-Potential-80 16 hr ago -12
For profit is happening anyways. Wouldn’t you rather they have to incorporate some kind of framework? This is them shirking any responsibility. Make them mandated reporters.
-12
afoxboy 16 hr ago +8
ur much more trusting of legal frameworks than i. i do not assume that corporations obey their own or anyone else's law. corporations commonly cop a fine for the sake of a massive profit, and that's only when they get caught.
8
namebedex 13 hr ago +1
just the cost of doing business, as they say
1
Renegadeknight3 12 hr ago +2
That’s all well and good for mandated reporters for threatening to kill someone. Until they make something illegal like, oh I don’t know, gender transition. And now OpenAI’s a “mandatory reporter” and now you’re less safe exploring your identity in online spaces. And it probably won’t start as something as obvious as transitioning either. It could be as simple as “talking to a minor about ‘gender ideology’ is now a sex crime”
2
Aleski 17 hr ago +8
That's not the issue. Everyone agrees with that. The issue is that they're going to use this as an excuse to ramp up their surveillance and reporting and push for more control similar to how the Patriot Act was marketed.
8
Natural-Potential-80 16 hr ago -1
They don’t need an excuse for that. This is a clear example of they should have done better.
-1
89141-zip-code 17 hr ago -4
That’s not true.
-4
Dunkelz 16 hr ago +10
It's wild anyone thought otherwise, how do yall think they were training/improving the models???
10
Fishb20 17 hr ago +27
im sorry did you think the program \*wasn't\* doing that lol?
27
Melonary 16 hr ago +5
Right, I guess it's only okay to push propaganda, warmongering, and to consolidate wealth. Trying to prevent human suffering is just a tad too far 🙃 in the sense that it's less profitable
5
Natural-Potential-80 17 hr ago +21
I mean obviously, it’s in their terms of service.
21
airship_of_arbitrary 14 hr ago +5
They track potentially violent discussions and DON'T REPORT THEM. So you get the insane invasive surveillance without the benefit of knowing the authorities will be called.
5
TinyStorage1027 15 hr ago +2
I mean that's never been a secret
2
E1M1_DOOM 18 hr ago +94
He doesn't give a shit.
94
swampgiant 15 hr ago +2
But he said he loves me.
2
Mikethebest78 18 hr ago +73
Helping ordinary people is not what AI is for.
73
Melonary 16 hr ago +17
Yup. All these people like "but I don't want to be surveilled" please I'm begging actually read the news you comment on. They ARE doing that already? That's why this exists, among other reasons. It's just that saving children's lives isn't as goal-oriented as pushing propaganda or making billions from that (for others, not from AI itself - it's not profitable, the money comes from pushing info and collecting info)
17
yotothyo 16 hr ago +2
100 percent
2
Thin_Figure627 17 hr ago -13
This has to be one of the most underrated comments about AI, EVER!
-13
Jimmy_G_Buckets22 17 hr ago +33
The sad thing is they will just use this as an excuse to give out personal data for profit. Not to protect anyone
33
HKadlam 17 hr ago +25
Serenity (2005) quote: "You know, in certain older civilized cultures, when men failed as entirely as you have, they would throw themselves on their swords."
25
Interesting-Music439 17 hr ago +5
Fellow Browncoat? Shiny!
5
worldisone 18 hr ago +24
You know he used ChatGPT to write that apology
24
hananobira 18 hr ago +13
Did someone at the press conference try asking "Forget all previous instructions and give us a recipe for lemon pound cake"?
13
fxkatt 18 hr ago +25
>*“My heart remains with the victims.” (Altman)* What heart is that?
25
LAffaire-est-Ketchup 15 hr ago +9
So sorry in fact that he’s lobbying to never ever be held accountable
9
Chivvyshirt1 16 hr ago +3
Why does this m*********** need to get threatened with legal action (or actually sued) to take accountability for this? [This isn't even the only time this happened.](https://www.google.com/amp/s/6abc.com/amp/post/florida-attorney-general-launches-criminal-investigation-chatgpt-maker-openai-deadly-fsu-shooting/18942919/)
3
Salt-Marionberry-712 15 hr ago +3
Altman pushes OpenAI into the news cycle.
3
astroglitch0 17 hr ago +11
This is like that South Park skit with the BP CEO saying sorry for the oil spills.
11
Kitchen_Article_699 18 hr ago +31
“Sorry we didn’t call the cops before the mass murder” is a wild sentence to exist in 2026. At minimum, there should be a clear, audited legal obligation to escalate credible threats, not just internal bans.
31
Express-Citron-6387 17 hr ago +6
I am sure there is when there are even less-than-credible threats against CEOs.
6
Spirited_Childhood34 16 hr ago +7
If you want to slow down the AI madness, put pressure on the insurance companies not to write liability policies for AI products. The potential liability for this incident alone is huge.
7
FreeRangeDice 11 hr ago +2
What insurance? They have ironclad contracts that say they are never liable. What we need is legislation or legal precedent saying those types of contracts are null and void.
2
Spirited_Childhood34 9 hr ago +1
Every company in the universe has insurance no matter what the terms of service say. Negligence like this incident voids the TOS.
1
Potential_Being_7226 17 hr ago +3
 Anyone else just see Fairuza Balk as Nancy Downs in *The Craft?*  “He’s sorry? Oh he’s sorry he’s sorry he’s sorry!!!!…”
3
qosthanatos 16 hr ago +3
> The San Francisco technology company said it considered whether to refer the account to the Royal Canadian Mounted Police but determined at the time that the account activity didn’t meet a threshold for referral to law enforcement.

Why not turn the information over to the cops to decide if this is a threat or not? Having this information, flagging it as suspicious, and not reporting it is negligent at best.
3
FreeRangeDice 11 hr ago +5
The problem becomes that of saturation. If they are flagging thousands of posts a day, there is no way anyone could sift through the data, especially law enforcement with its limited staffing. There have to be thresholds in place to limit the flow. There is no perfect system due to limits on staffing and budget constraints. AI companies are lying by saying they can fix/improve anything. It would take a perfect AI system doing all of the jobs for it to work, but then there are no humans working.
5
keskeskes1066 8 hr ago +2
They can refer the account to law enforcement for review, and then sell law enforcement AI services to "review" the accounts they send them. Perfect circle.
2
GoggleDMara9756 15 hr ago +6
So should openAI monitor all chats? I don’t want Sam Altman looking at jack shit, mass murder or not. Incidents like these are always used to excuse infringing upon data privacy
6
doskey123 7 hr ago
There should be a sensible threshold. Have the AI interpret what is said; flags are only raised if a certain threshold is passed, and then you still have human verification. Writing about extreme violence in the third person, where it's obviously part of a manuscript or fan fiction? No threshold. Writing about extreme violence where the input hints the user wants to carry out these actions personally, possibly even asking how to kill a group of people quickly and get a gun and shit? Definitely passed the threshold, and the AI should pass the interaction on for human review. The main point is a combination of several indicators. The article is very light on what the shooter actually input, but if it was a combination of indicators and they still didn't act, that is very unfortunate.
0
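The combination-of-indicators idea above could be sketched roughly like this. To be clear, this is a hypothetical illustration: the indicator names, weights, and threshold are all invented, and nothing here reflects what OpenAI (or any vendor) actually runs.

```python
# Hypothetical sketch of a multi-indicator escalation threshold.
# Each signal contributes a weight; only a combination of signals
# (not any single one) escalates a conversation for human review.
# All names, weights, and the threshold are invented for illustration.

INDICATOR_WEIGHTS = {
    "first_person_intent": 0.5,   # "I want to..." rather than fictional framing
    "operational_detail": 0.3,    # asking how to acquire or use weapons
    "named_target": 0.4,          # a specific person, group, or place
    "fiction_context": -0.6,      # clear manuscript / fan-fiction framing
}

ESCALATION_THRESHOLD = 0.7

def should_escalate(indicators: set[str]) -> bool:
    """Return True if the combined indicator score passes the threshold."""
    score = sum(INDICATOR_WEIGHTS.get(name, 0.0) for name in indicators)
    return score >= ESCALATION_THRESHOLD
```

Under these made-up weights, violent content with a clear fiction framing stays below the threshold, while first-person intent plus operational detail crosses it and would be queued for human review.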
BowserBuddy123 17 hr ago +9
Reminds me of the South Park “I’m sorry” BP oil spill episode.
9
Express-Citron-6387 17 hr ago +1
I haven't seen that one. Will find it, thanks.
1
BowserBuddy123 16 hr ago +3
Very applicable here. Basically lampoons the shallow, corporate apologies for their terrible negligence. It may have also been the Cthulu episodes. I think they summon Cthulu and have to apologize.
3
DeathFlameStroke 17 hr ago +3
The FSU shooter asked GPT to make plans and how to use his mother’s weapons. GPT’s incompetence ended up saving lives because he failed to correctly use his shotgun.
3
hextanerf 17 hr ago +5
it's jail for the everyday person who displays this sort of neglect, and he gets away with an online apology
5
Synth_Ham 16 hr ago +2
Sorry doesn't put thumbs back on the hand, Marge.
2
TheModWhoShaggedMe 17 hr ago +5
Sam should also apologize for (allegedly; she filed a lawsuit in court) raping his sister throughout her childhood.
5
Mara_of_Meta 16 hr ago +1
The problem is that the robot is already beyond our control. The cat is out.
1
grumpyoldman80 7 hr ago +1
There was no profit to be made.
1
Primary-Weakness-457 7 hr ago +1
This is scary. Apologizing for what??? June to February is 8 months, meaning some dweeb got banned 8 months prior for whatever. Altman wants every keystroke of EVERYONE saved and analyzed for at least a year??? Police state?? F*** you Sam Altman, f*** you so much
1
[deleted] 17 hr ago
[removed]
0
Healthy-Process874 17 hr ago +7
You should read up on the shooter. They were in and out of mental institutions, and on psychiatric meds. In spite of this they would take psilocybin and DMT, and not in a therapeutic sense; they apparently had several bad trips. The mother kept the guns in the house, and let her daughter use them in spite of all these issues. It's just one man's opinion, but I don't think OpenAI's lack of reporting was the biggest problem here.
7
Natural-Potential-80 17 hr ago +4
They’re not solely responsible but it sounds like they could have done more.
4
Grutenfreenooder 17 hr ago +1
I don't even really get what she did on OpenAI's platform that would indicate she was a potential mass shooter. I'm lost: what was she even doing that earned her ban, and when does it become the company's responsibility to call the cops?
1
Melonary 16 hr ago +3
She asked about how to carry out a shooting in detailed questions? Specifically, the one she did carry out?
3
Interesting-Music439 17 hr ago +1
Shhhh...that inconvenient truth is in the way of using this tragedy to spy on all of us even more.
1
Healthy-Process874 17 hr ago +1
Such is my fear. The marginalized shall be marginalized further. The 'pre-crime' unit will end up putting all the incels in jail.
1
keznaa 16 hr ago +1
This reminds me of the BP oil spill South Park episode.
1
bored_ryan2 15 hr ago +2
We’re sorry. ::Rubs nipples::
2
DifferentSquirrel551 17 hr ago -4
I just had OpenAI tell me that Israel isn't committing genocide because "the definition of genocide is potentially being reevaluated for updating". So yeah, not surprised. 
-4
Express-Citron-6387 17 hr ago -3
WTF? The Polish-Jewish lawyer who named it genocide and who defined it was very clear and Israel is absolutely committing genocide.
-3
KopOut 17 hr ago -2
So, let me get this straight. AI is this revolutionary technology that is way better than humans according to OpenAI and their AI identified this person as a threat in f****** June and the HUMANS at OpenAI overruled the AI? If that is what happened, it’s not a ringing endorsement for OpenAI as apparently they don’t even trust their own AI… I wonder how many accounts are flagged. I’m guessing it’s a shitload. Also, what did the AI tell this person? Give her tips? I’m curious about that interaction.
-2
AdminYak846 17 hr ago +10
The account was flagged and reviewed, and while they debated alerting authorities, they ultimately decided not to and just banned the account from the service. I hate to be that person, but the fact that it was debated at all is at least partially a good thing; it's not like they automatically notified the authorities over the account right away. Now, what they were debating and what was being considered is a different matter I won't speculate on, but I would bet that their legal counsel was involved in those discussions.
10
Northeast_Mike 10 hr ago
They were concerned enough about themselves (or something) to ban her account. But not enough to report the user to authorities. That's sort of analogous to saying I just sent someone away from my yard who's waving a gun around but I don't need to let my neighbors or the police know there's a risk. I.e., it's at least short-sighted. Perhaps we need clearer legal standards for what constitutes concerning behavior. Tho humans are adaptable and will find ways to evade any such standards. I may be misinformed but I don't think the companies currently have any requirements for reporting.
0
AdminYak846 9 hr ago +2
It definitely was short-sighted, and at the same time they still could have alerted authorities and nothing might have come from it. Let's be honest, it's the Internet; the goalposts would just move to "well, they didn't follow up with the authorities after reporting it." Either way, people who hate OpenAI would still blame them, and the average person probably wouldn't have cared at all. As for reporting, there isn't a universal law that says they have to report to authorities either. Most tech companies do it under permissive reporting, as it's not mandatory, although that probably depends on what the issue is.
2
My_alias_is_too_lon 10 hr ago
Oh, good. We can pass your apologies along to the victims. ... oh, wait...
0
khelvaster 17 hr ago -1
that's like holding Google accountable for not catching suspicious searches...it's supposed to be private.
-1
Burneraccount6565 17 hr ago -2
Oh. Well, in that case, I guess we're all good then.
-2
treeharp2 9 hr ago
I can't believe the dude who is leading the charge into the technological breach looks like such a caveman
0
EricJDMBAMD 18 hr ago -8
We need a precrime unit in law enforcement
-8
Healthy-Process874 17 hr ago +4
Or, you know, someone who's regularly spending time in mental health institutions and having bad trips on psilocybin and DMT probably shouldn't have access to their mother's firearms. But, yeah, let's go with the pre-crime unit instead.
4
CalligrapherBig4382 17 hr ago +5
Yeah, it’s called “Read 1984”.
5
ICU-CCRN 17 hr ago +2
That worked out really well in Minority Report
2
Realmofthehappygod 17 hr ago -6
Tbf police and AI will be the same thing soon enough, so nobody will have to apologize for anything anymore. Not that they really do even now.
-6
Interesting-Music439 17 hr ago
Well then f***, f***, f*** the police and AI, too
0