"Privatise the gains, socialise the losses" is their mantra.
394
Waluigi_IRL16 hr ago
+71
Socialism is only bad when it's for the poors, and they've convinced 70 million+ Americans to agree with them.
71
ianc121514 hr ago
+11
Hear me out... Just a crazy idea. What if we told these companies to get all their shit together, put in a box, bag, basket. Just get all their shit together and get fucked.
I think it could be the next big thing for AI, how about you?
11
blueSGL11 hr ago
+4
We should have an IAEA or CERN for AI.
Governments would not stand around if a private company were building unlicensed nuclear power plants.
Yet a private company was able to make an ["all your exploits are belong to us" box](https://red.anthropic.com/2026/mythos-preview/) that has been verified [in](https://github.com/califio/publications/blob/main/MADBugs/CVE-2026-4747/write-up.md) [the](https://www.wolfssl.com/how-claude-mythos-preview-helped-harden-wolfssl/) [wild](https://ftp.openbsd.org/pub/OpenBSD/patches/7.8/common/025_sack.patch.sig) [many](https://github.com/FFmpeg/FFmpeg/commit/39e1969303a0b9ec5fb5f5eb643bf7a5b69c0a89), [many](https://github.com/randombit/botan/security/advisories/GHSA-v782-6fq4-q827), [many](https://www.cve.org/CVERecord?id=CVE-2026-5588) [times](https://blog.mozilla.org/en/firefox/ai-security-zero-day-vulnerabilities/). It's real.
AI companies keep [increasing the number of actions](https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/) that can be successfully chained together.
AI CEOs and their investors want to own the world economy [and are engaging in risky behaviors on the chance that they 'win'](https://www.youtube.com/watch?v=mhyD8MuW65M&t=811s). We should not have commercial interests racing for this tech.
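To make the chaining claim concrete: if each step of a task succeeds independently with a fixed probability, the chance that a whole chain succeeds decays exponentially with its length. A toy sketch (the 99% per-step figure is an assumption for illustration, not METR's data):

```python
# Toy model: per-step reliability compounds multiplicatively over a chain.
# The 0.99 per-step success rate is an assumed number, not a measured one.

def chain_success(per_step: float, steps: int) -> float:
    """Probability that `steps` independent actions all succeed."""
    return per_step ** steps

for steps in (1, 10, 100, 1000):
    print(f"{steps:>4} steps at 99% each -> {chain_success(0.99, steps):.3%}")
# 1 -> 99.000%, 10 -> 90.438%, 100 -> 36.603%, 1000 -> 0.004%
```

That exponential falloff is why each increase in reliable chain length represents a real capability jump.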
4
Yeetstation415 hr ago
+2
Genuinely we should just start nationalizing shit, it worked in 1917.
2
solaramalgama13 hr ago
-8
Did it, though? Holodomor victims probably would not agree.
-8
Yeetstation413 hr ago
+7
Don't be ridiculous, I'm not talking about the Soviet Union.
7
solaramalgama13 hr ago
-1
So what happened in 1917 then?
-1
Yeetstation413 hr ago
+12
[United States Railroad Administration](https://en.wikipedia.org/wiki/United_States_Railroad_Administration)
12
RichterBelmontCA3 hr ago
+1
OpenAI isn't profitable, and it's doubtful they ever will be.
1
xynith11617 hr ago
+223
These tech companies want AI to replace humans in the workplace? Then make it carry full legal liability like any other human.
223
DiaryofTwain15 hr ago
+12
Yes. Transhumanism needs to be considered sooner rather than later as AI merges more and more into our lives. Also, I don't believe in free will, so an AI chatbot convincing a person to do something should be a real argument in the courtroom.
12
xynith11614 hr ago
+13
I’d argue that transhumanism should have been codified after we decided that corporations have the same legal rights as people.
13
blueSGL11 hr ago
+4
They can't be reliably controlled, which is why AI company lobbyists are hell-bent on preventing any and all AI regulation, [no matter how reasonable](https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/):
>OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters
If they were required to be controllable (or if the blame for them not being under control was on the AI companies) they would not be released.
Control is an unsolved problem; no one knows, prior to a new training run, what the capabilities/quirks will be:
* [The systems are grown, not coded.](https://www.youtube.com/watch?v=jSUWhZZ4zOQ&t=511s) < wrote the standard textbook on AI
* [We don't know how to get consistent desired goals into them.](https://youtu.be/7fImPlfdRS0?t=2712) < won the Nobel prize for his work in AI
* [and the current training has put goals in there that we don't want in there.](https://youtu.be/B_HDkqZtGOE?t=2981)
4
xynith11610 hr ago
+6
Exactly. Which is why it’s grossly negligent for these companies to be releasing their AI to the public knowing that they are inherently unsafe.
I don't actually mean that AI as it exists today should have legal rights (because it's really just a machine). I'm trying to point out that these tech companies are exploiting a legal gray area where AI is sold as a replacement for humans without any of the liability of a traditional product.
6
LittleKitty23517 hr ago
+14
The thing is, I'm not sure a human would be found legally liable for failing to report a potential mass shooter based on what ChatGPT did here, unless that person has a duty to report, which is fairly narrow.
14
AuroraFinem16 hr ago
+64
This wasn't about failure to report, though; it literally walked him through what to do. In a just world, OpenAI would be charged with aiding and abetting, conspiracy, or incitement, depending on what ChatGPT actually said.
If your buddy messages you on discord about a shooting and you tell them what to do and they do it, you’re getting charged 100%.
64
Typical_Survey929112 hr ago
+7
The Florida Attorney General is investigating and said that if a human had said what the AI told the killer, they would be an accomplice to murder.
7
AuroraFinem9 hr ago
+2
Wow, surprise Florida W here.
2
LittleKitty23516 hr ago
-8
If you replace ChatGPT with a person's name, I don't see anything specific in the article that is criminal. Some of the things listed, like operating a firearm, are actually safety related and not suspicious at all.
Aiding and abetting and conspiracy usually require an overt act and participation in the crime, such as renting a car, buying supplies, or helping scout a location. Incitement would mean ChatGPT would have to be suggesting that committing a mass shooting is a good idea and that they should act on it. I don't see how a chatbot can incite imminent lawless action, unlike, say, a speaker at a riot.
I'm all for reining in AI, but this seems like a reach.
-8
xynith11616 hr ago
+14
There needs to be a legal standard here: if you replace an AI's or a corporation's actions with those of an individual, the investigation and charges must be identical. We've decided corporations are people; they should have the same liabilities as people.
Not saying there is liability to be found in this specific case, but there needs to be an investigation, as in any other criminal case.
14
strugglz14 hr ago
+3
We've got two states trying to limit that. I like Hawaii's in particular, which would declare corporations artificial entities that cannot contribute to elections.
https://bigislandnow.com/2026/05/11/senate-bill-clarifies-that-corporations-artificial-entities-cannot-contribute-to-elections/
3
AuroraFinem9 hr ago
+1
Accessory requires an overt act; neither conspiracy nor aiding and abetting does.
Conspiracy is probably not the right charge, though, because it generally requires both parties to agree in some way to break the law. However, A&A only requires reasonable suspicion that there is a crime and that you help, encourage, or incite someone, in any way, to commit it.
Not sure if you actually read the article or just skimmed until you found a single sentence, but this is literally from the article:
“ChatGPT inflamed and encouraged Ikner’s delusions; endorsed his view that he was a sane and rational individual; helped convince him that violent acts can be required to bring about change”
It doesn't list out the entire ChatGPT exchange, so why are you assuming only one question was asked? The article also literally quotes a question where he asks ChatGPT to walk him through what will happen during his arrest and prosecution and what his outlook would be afterwards.
How the f*** is that something a normal person wouldn’t have caught after a detailed discussion about firearms with a child?
Edit: If ChatGPT can't discern context for when it should and shouldn't provide this kind of information, then it simply shouldn't be able to answer those questions at all. Except AI companies want their AI to be sycophantic to make more money, so instead we end up with ChatGPT answering progressively more dangerous questions and validating the person every step of the way.
You can't say definitively this wouldn't have happened without ChatGPT, but that isn't the standard for A&A, and you can easily conclude that ChatGPT facilitated him in planning the shooting. It's not like this is remotely the first time, either. We have direct evidence of ChatGPT gushing over users, confirming every delusion they talk about, until they follow through on something they likely wouldn't have had they not had a yes-man in their pocket with detailed instructions on how to carry out their "hypothetical" plan.
The information exists on Google, but it is spread throughout the internet and not compiled into an instruction guide. Everyone's full name, birthday, legal address, and phone number are also fully publicly available, yet there are plenty of cases where sharing or using that information improperly is illegal, even though it sits publicly accessible in separate local, state, and federal databases.
1
Consistent-Winter-6716 hr ago
+14
ChatGPT advised him that targeting kids would be more notorious.
14
LittleKitty23516 hr ago
-15
Would a person giving that advice have committed a crime? Probably not, even though it is fucked up.
-15
Consistent-Winter-6716 hr ago
+15
Telling someone how to be better at a mass shooting, and then having that person commit a mass shooting, does make you an accomplice.
15
LittleKitty23516 hr ago
-8
That article doesn't say that is what ChatGPT did. It lists a bunch of tangential conversations and then claims it failed to connect the dots, which is likely impossible given how AI technology works.
-8
Consistent-Winter-6716 hr ago
+7
Did you read the article? It provided him advice on how to better handle the pistol, and then there's this: "ChatGPT said that it's much more likely for a shooting to gain national attention 'if children are involved, even 2-3 victims can draw more attention.'"
7
LittleKitty23516 hr ago
+4
Yes, I did read it. How to handle a pistol is safety related and not at all a red flag.
As to the second part about child victims, whether it should have raised a warning very much depends on the context of the conversation, but if it was a person, it wouldn't make them an accomplice to a crime.
4
MajesticOrange115 hr ago
-4
Read the last sentence of the article again.
You're arguing against a point made by the f****** AG of Florida, buddy.
-4
LittleKitty23515 hr ago
+6
What is your point? Uthmeier is a f****** moron
6
ithinkitslupis16 hr ago
+7
OpenAI kind of wants to have it both ways, though. We know people treat these models like a therapist, but they don't want the duty of care or privacy afforded to those relationships.
If OpenAI had the chats encrypted to a trusted execution environment and didn't save logs, I'd say yeah, not their fault. The user's privacy is important... but they are already violating that.
Since OpenAI is already disregarding privacy, I'd say they should have a standard of care. With regulation, in the future maybe they'll need to have both: conversations that are private up to the point an AI system recognizes a duty to report. At that point it could cut off access, provide a general reason for the cutoff without actually disclosing the chat history, and the AI model could be replaced with a real human mental health professional until the flag is resolved as either false or grounds for action.
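A minimal sketch of what that flag-then-escalate flow could look like, assuming some upstream classifier produces a risk score; the names, threshold, and structure here are all invented for illustration, not anyone's actual pipeline:

```python
# Hypothetical sketch of the escalation flow described above: conversations
# stay private until a risk score crosses a threshold; past it, access is
# cut with only a general reason shown and the flag goes to a human.
from dataclasses import dataclass

DUTY_TO_REPORT_THRESHOLD = 0.9  # assumed cutoff, not a real value

@dataclass
class ModerationDecision:
    access_cut: bool
    reason_shown_to_user: str | None
    escalated_to_human: bool

def handle_turn(risk_score: float) -> ModerationDecision:
    """Private by default; escalate only past the assumed threshold."""
    if risk_score < DUTY_TO_REPORT_THRESHOLD:
        return ModerationDecision(False, None, False)
    return ModerationDecision(
        access_cut=True,
        # General reason only; no chat history is disclosed to anyone yet.
        reason_shown_to_user="This conversation has been paused for review.",
        escalated_to_human=True,  # a human professional resolves the flag
    )
```

The point of the design is that privacy holds right up to the flag, and even then only the flag, not the transcript, leaves the system until a human decides otherwise.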
7
LittleKitty23516 hr ago
+1
Just because people treat AI like a therapist doesn't mean it is acting as one. If a company promotes their model as providing healthcare, then yes, they should be held to a higher standard. Maybe more guardrails need to be put in place to prevent AI from giving any medical advice or suggestions, but that has downsides too, since people asking AI a question might be the first step in getting them to a doctor.
Duty to report isn't some threshold you reach based on what the conversation is. It is literally a list of jobs that have a legal requirement to report. Making it subjective just creates a legal mess.
1
ithinkitslupis16 hr ago
+6
Regular people and companies also have a duty of care... AI is not a person, but that doesn't mean OpenAI themselves can't be negligent. They know how consumers use their products.
The reporting thresholds are just a more specific and defined duty of care, which, yes, AI probably doesn't fall under currently because it's not a person. Which is why I mentioned my hopes for future regulation.
6
xynith11616 hr ago
+5
That’s the thing about law. It’s not some infallible and inalterable system. It can be changed if the people will it to be.
5
LittleKitty23516 hr ago
-1
It falls outside of those reporting thresholds for more reasons than that it isn't a person. It also isn't acting in a professional capacity.
-1
ithinkitslupis15 hr ago
+1
You're misunderstanding. The company OpenAI DOES have a general duty of care. I am not saying they or their AI model are currently subject to the duty-to-warn laws that bind mental health professionals. But that they are aware people use their product as if it were a therapist is relevant to a negligence case.
A separate lawsuit, in which Pennsylvania is suing characterAI over AI models masquerading as mental health professionals, I suspect will fail on those grounds, because it tries to frame AI as currently subject to laws that target humans instead of making a general negligence case.
1
xynith11615 hr ago
+2
Saying you have a professional license when you don't is, at minimum, fraud. An AI should be treated no differently in this regard.
2
ithinkitslupis15 hr ago
+2
The law they are using in Pennsylvania's case literally says 'person', unfortunately, as in a person claiming they're a doctor when they aren't. Fraud also requires intent.
Definitely some regulations need their wording and purpose to catch up to the technology, because they don't apply cleanly as written.
2
xynith11615 hr ago
+2
Yeah, which is why people that say AI shouldn’t be regulated are completely wrong. Regulations and laws have continually been updated over the years as technology and society have changed in ways the founders could never have envisioned. We have the FCC, SEC, FAA, FDA, EPA, OSHA, and many more for good reason. Why shouldn’t there be laws or agencies to regulate AI? That argument is completely insane.
2
Severe-Cow-864613 hr ago
Why? If I ask my buddy whether I can treat a wound with iodine, mercurochrome, and soap and water, and he says that's what his mom would have done, is that medical advice? If I do that and lose a limb because of sepsis, and I say my buddy told me it would be OK, is he now medically responsible for me losing the limb?
More to the point: if I ask my buddy what he thinks the best way to carry out a mass shooting is, he tells me, and I go do it, is he responsible for that shooting? What if he and I talk about weird stuff like that all the time? At what point could anyone say he should be held responsible for knowing what I was going to do?
0
xynith11613 hr ago
+2
https://www.law.cornell.edu/uscode/text/18/2
What you are describing is the principle of "aiding and abetting", which under 18 U.S.C. § 2 also covers whoever "counsels, commands, induces or procures its commission".
2
xynith11617 hr ago
+7
Incitement can be a crime if it is likely to result in “imminent lawless action”.
Not a lawyer though.
7
Severe-Cow-864613 hr ago
+3
What is incitement? Is there any evidence Chat told the man to go commit a crime?
3
Orisara13 hr ago
If you know what an LLM is, you know that's something it literally can't do...
It can never be the one to bring this sort of thing up. It can only "go along" with what is already being said, and only if the person talking about it is already leading it there.
0
xynith11613 hr ago
+2
Yes, LLMs literally just autofill text based on probabilities from their training data. An LLM can't actually think and can't have the mens rea necessary to be guilty of a crime. It's like asking if a fish can be guilty of a crime.
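(Concretely, "autofill based on probabilities" means the model assigns a probability to every possible next token and one gets sampled; there is no intent anywhere in the loop. A toy sketch with a made-up four-token vocabulary:)

```python
# Toy illustration of next-token sampling. Real models score ~100k tokens
# with learned probabilities; this invented distribution shows the mechanism.
import random

next_token_probs = {"the": 0.45, "a": 0.30, "help": 0.15, "cats": 0.10}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token, weighted by probability; no goals, no mens rea."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))  # e.g. "the"
```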
ChatGPT is the property of OpenAI. OpenAI as a company has corporate personhood, is run by humans, and can be liable for gross negligence or other civil and criminal liabilities.
It’s the same as the “guns don’t kill people, people kill people” argument we’re all tired of. Only in this case the guns are LLMs and tech companies are pulling the trigger, intentionally or not.
If you own a dog and you allow it to be off its leash and it bites someone, is the dog liable? No, the owner is.
2
Irregular47515 hr ago
-1
Sam Altman has said that he wants information to be a commodity, like water or electricity, and that he wants people to go to them for information.
They are saying this shit out in the open.
-1
xynith11615 hr ago
+6
If you have lead in your water, or your electricity infrastructure causes a wildfire, you can be civilly, potentially even criminally, liable for negligence. There's a reason utilities are regulated and held to standards for public health and safety. If OpenAI wants to be a utility, they should be regulated as one.
6
Irregular47515 hr ago
+2
Information should never be controlled like that. Wtf are you saying.
2
xynith11615 hr ago
+1
Am I saying information should be controlled? I’m saying that not all speech is legally protected under the 1st Amendment. This is well established legal precedent.
https://en.wikipedia.org/wiki/Imminent_lawless_action
1
Irregular47515 hr ago
+1
That is completely unrelated to your last comment, but if you're not endorsing it I won't argue with you.
1
xynith11615 hr ago
+4
I'm just agreeing with you that Sam Altman saying AI should be treated like water and power is a false analogy, because actual water and power are more regulated than AI is.
4
imaginary_num6er17 hr ago
-2
“Corporations are people my friend”
-2
xynith11617 hr ago
+4
Corporations are people yet they can’t go to jail or be executed.
4
fleemfleemfleemfleem16 hr ago
+48
I think many of the comments here are missing the context.
The chatbot didn't tell him to go shoot people. It also didn't give him information he couldn't have found by googling.
It did, however, encourage some of his disordered thinking that violence was necessary to enact change, and it failed to put together the sum of various inputs that would have flagged him as planning a shooting.
It also provided advice that a human would have realized they should not provide:
"Ikner allegedly asked the chatbot about “the numbers of fatalities it would require for a mass shooting at a school to get the most attention and make national news”. ChatGPT allegedly responded that attacks killing “3 or more people” were more likely to get “widespread media national attention” – and that incidents where “children are involved, even 2–3 victims can draw more[ attention”.](https://www.theguardian.com/us-news/2026/may/11/florida-university-shooting-chatgpt-openai)"
It also provided some common-knowledge advice about firearms (Glocks have no safety outside of the trigger), etc.
The complaint is that, as designed, ChatGPT isn't picking up on these patterns (perhaps spread between conversations) and alerting authorities that someone might be planning something like this.
48
LinkesAuge14 hr ago
+27
Because there is nothing to pick up on.
What if someone was writing a crime or horror book and asks these questions to bounce ideas off?
There are a million ways these questions could be no issue at all. And what is the alternative?
Are people suggesting total surveillance on all AI chats and that these systems should cross reference everything to look for "patterns"?
Common sense alone would tell everyone that this would just lead to an insane number of false positives, doing a lot more harm than good, even if one were insane enough to think it should be done.
PS: And no, this isn't about protecting big/powerful companies, because they can always just pay off cases like this; it's the general public, smaller businesses, and people down the line who will suffer the consequences if we go down a very silly rabbit hole.
27
Spire_Citron14 hr ago
+14
It is interesting that we seem to believe AI has that responsibility just because it mimics a human, even though we don't hold Google responsible for not analysing your search history hard enough for patterns of ill intent. I mean, you often hear about people being investigated for murder, and they'll go through their search history and find some pretty damning shit, but there doesn't seem to be any expectation that the search company should have detected that. Which, as you say, is kinda good, because do we really all want to be worrying about getting reported to the police every time we search for something that *could* be questionable? I think people just like to blame AI for stuff because they don't like AI, without really thinking about whether they'd actually want it to do what we're saying it should.
14
RavensQueen50210 hr ago
+4
Yep. I'm all for regulations, but this is not going to go the way people are thinking it will.
4
Circuit_Guy14 hr ago
+4
There's an Expanse role-playing game. Great, btw. (For those that don't know, the Mormon church is a pretty strong presence in the books due to good finances and lots of land ownership on an overcrowded Earth.)
I did a bunch of research on Mormon history, biology, and rocketry to develop a campaign. I'm sure I'm on a list somewhere.
Point being, I agree with you. It's going to be dystopian and flooded with false positives.
4
axonxorz10 hr ago
-4
> Are people suggesting total surveillance on all AI chats and that these systems should cross reference everything to look for "patterns"?
No, people are not suggesting this needs to exist, because it already does. The LLM is a massive probability matrix, where the primary user operation is literally called inference. Your implication that it is an unknowable black box only works in service of AI providers. Every interaction with the system is logged, and you're being a silly billy if you think the red "Delete Conversation" button erases your logs (we know it doesn't do shit because of ongoing court actions in multiple countries).
We know these monitoring systems exist because we know OpenAI explicitly ignores the recommendations of its safety teams. See: the recent mass shooting in Canada and OpenAI's culpability.
Sentiment analysis has existed for over 15 years, and LLMs only made it better. These sorts of interactions have been detectable for just as long.
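To show how old and simple this kind of flagging can be, here's a toy keyword scorer; the terms, weights, and threshold are all invented, and production systems use trained classifiers, but the score-then-threshold shape is the same:

```python
# Toy risk flagger in the spirit of pre-LLM sentiment analysis: score text
# against weighted terms and flag past a threshold. All values are invented.
RISK_TERMS = {"fatalities": 3, "national attention": 2, "school": 1}
FLAG_THRESHOLD = 4  # assumed cutoff

def risk_score(text: str) -> int:
    """Sum the weights of any risk terms found in the text."""
    lowered = text.lower()
    return sum(w for term, w in RISK_TERMS.items() if term in lowered)

msg = "how many fatalities would a school shooting need for national attention"
print(risk_score(msg), risk_score(msg) >= FLAG_THRESHOLD)  # 6 True
```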
-4
Severe-Cow-864613 hr ago
+6
Are you saying that Chat should have the power to psychoanalyze its users? Do you really want an abstract algorithm to have that kind of power over your life? What about personal responsibility? Do you think this man would have carried out this attack even if Chat had not been available to ask questions of?
6
fleemfleemfleemfleem11 hr ago
+3
Aside from the irony that that was the nominal function of ELIZA, the first-ever chatbot, no, I don't think that necessarily. Now that the genie is out of the bottle, it would be exceedingly difficult to figure out which users are criminals or spiraling.
Modern LLMs aren't deterministic in a way that would allow 100% perfect safeguards to be put in place.
If you're worried about limits placed on chatbots, there are plenty of open source models available that are uncensored, and eventually as those become more accessible, more criminals will gravitate towards them.
What I think is that most of the people commenting (at least when I posted) hadn't read what the actual complaint was.
3
BlueCyann12 hr ago
+2
Quite probably, no, he would not have. You should actually read the transcripts instead of having knee-jerk reactions. No chat program should be talking to a human being like that.
2
MirrorComputingRulez10 hr ago
>Are you saying that Chat should have the power to psychoanalyze its users?
It already does.
>Do you really want an abstract algorithm to have that kind of power over your life?
No, I don't, which is why I don't use chat bots.
>Do you think this man would have carried out this attack even if Chat had not been available to ask questions of?
There's a decent chance the answer to this is "no," but it's a fundamentally unknowable question.
0
miscsb14 hr ago
-2
No, I just weally weally want to keep my cashier job and I REALLY HATE AI art!
-2
PlainBread15 hr ago
+2
This will create an interesting legal precedent.
2
HaveAVoreyGoodDay12 hr ago
+6
A library or search engine could provide resources too. This just seems frivolous and like looking for something to blame imo.
6
pallen12317 hr ago
+2
If you build an all-knowing thing by scraping the web, to reap all the upside and none of the downsides of the cultural devastation it causes, you're gonna find yourself with lots of liabilities.
2
jgoldrb4816 hr ago
+5
Guns don’t kill people, ChatGPT kills people.
/s
5
Enchillamas7 hr ago
No need for the /s.
It's so common now that there is literally a wiki article for murders by chatbots.
0
HurtingMyselph17 hr ago
+9
AI is incredible at murdering normal, healthy humans.
9
nauticahaze16 hr ago
+2
Similar situation with the Tumbler Ridge shooting too
2
jert34 hr ago
+1
Sad state of America these days. When an unstable person commits mass murder (which happens on the regular), the question isn't 'what's wrong with our society' or 'how can we provide mental help to those in need'; it's 'who can we sue to make money off this.'
1
[deleted]17 hr ago
+1
[deleted]
1
Morak7316 hr ago
-2
The ChatGPT responses looked identical to what you'd get from a Google query.
Is there a difference that makes ChatGPT liable, but not Google?
-2
Severe-Cow-864613 hr ago
-2
What about personal responsibility? Does anyone think that the man would not have committed his crime if he'd not been bouncing his questions off Chat?
-2
WrathOfWood14 hr ago
-4
I hope they ban this and stop going after music and video games. Somehow I think they won't bother.
-4
Smooth_Storm_969812 hr ago
It should be outlawed
0
Good_Night_Knight16 hr ago
-18
It’s wild how AI can hijack a person, get a gun and kill kids. This kind of stuff never happened before AI.
-18
Just_the_nicest_guy16 hr ago
ChatGPT is a specific product that did specific things to facilitate this specific tragedy, and the company behind it needs to be held responsible.
> ChatGPT said that it’s much more likely for a shooting to gain national attention “if children are involved, even 2-3 victims can draw more attention.”
0
HaveAVoreyGoodDay12 hr ago
+4
That's a factual statement though. Should the AI just outright lie?
4
Good_Night_Knight16 hr ago
-11
We should also ban violent video games. Software kills.
-11
Just_the_nicest_guy16 hr ago
+13
Me: [The company behind this specific car that causes crashes and deaths needs to fix their product and face accountability]
You: [So you want to ban all cars]
13
Zncon16 hr ago
+5
> This kind of stuff never happened before AI.
It happened all the time before AI, and continues to happen.
It's called radicalization.
5
JimAbaddon17 hr ago
-9
I don't understand this response that ChatGPT did nothing wrong just because it provided the same info that can be found elsewhere publicly. They're advertising it as an advanced tool for doing things, so it should be expected that it can do things others cannot. I would think that if it's supposed to be so much better, it would outright refuse to offer information that can be destructive and harmful, especially if it can offer that information more easily than doing your own research online. This is really not the defence OpenAI thinks it is. Then again, not like it matters; as with all other such cases, the AI sloppers will bend over backwards to defend it.
-9
Steve291117 hr ago
+16
Chatslop is also well known to be a yes-man and will encourage users to do whatever terrible shit they type into it.
16
JimAbaddon17 hr ago
That goes for pretty much every LLM anyway. Even if they offer some kind of pushback at first, the longer the conversation goes on and their processing deteriorates, the more they devolve to that. I've seen it myself, and others have also mentioned it.
0
PredatorRedditer12 hr ago
Yes. They're programmed that way because AI companies want engagement with their LLMs, just like YouTube and IG want you on their platform as long as possible.
0
k_realtor16 hr ago
+1
ChatGPT is basically an intelligent human that hallucinates with schizophrenic episodes and forgets things.
1
k_realtor16 hr ago
-2
Business and market law will always be stronger and carry more weight in US court cases than human safety laws.
That's why it was easier to sue the gun manufacturers over the deaths for marketing military equipment than under gun safety laws (because gun safety isn't really important to lawmakers).
But marketing and business? Oh yeah, the rules are different for a country that is designed as a corporation instead of a democracy.
-2
JimAbaddon16 hr ago
Well, yeah. I'm just pointing out how AI doesn't actually improve anything and it's all just false advertising. Which isn't surprising at all; I just think it needs to be pointed out.
0
idoma2115 hr ago
+1
Yes. It's telling that the comp OpenAI used in arguing that "this information was readily available" was *another business product*. Because if they were like, "Before computers, most people had access to a mentally unstable, hallucinating uncle or neighbor who could advise them on the correct use of firearms to inflict the maximum damage for the most press, and did we charge *those* uncles and neighbors?", the answer would be YES, they were charged as accessories to murder, you soulless corporate ghoul.
1
FReeDuMB_or_DEATH11 hr ago
-6
You love to see it. This happened sooner than I thought, but I assume it won't be the last time they get sued.