I believe lawsuits are going to be what finally reins in AI. AI creators (Grok, ChatGPT, etc.) will have to make modifications to keep their products from creating libelous/illegal content.
187
Tomcruizeiscrazy, Mar 25, 2026
+65
Agree, but all it takes is for the Trump admin, within the next 2.5 yrs, to push some edge case to the Supreme Court that grants AI immunity or fair use of content.
SEA2COLA, Mar 25, 2026
+14
I also look at AI products the way we used to look at RealPlayer (remember them?). There were few video playing services at the beginning, but it didn't take long before everyone had their own.
Low_Pickle_112, Mar 25, 2026
+4
It certainly won't be social consequences & externalities that get any consideration from these tech companies.
gordonpamsey, Mar 25, 2026
+2
That and the economics: Disney offered OpenAI a deal to work with Sora, and OpenAI chose to shutter it instead. That signals to me that these AI video models are, at this point in time, so expensive it's near impossible to get ROI on them. If a deal with Disney wasn't enough to keep Sora alive, X would be better off shuttering the video aspect of Grok as well, since it's nothing close to Sora and has mostly just brought them legal trouble.
Awesomegcrow, Mar 25, 2026
When will most of those lazy AGs (from Blue States, since Red States' AGs won't do anything) finally decide to do their jobs and join a lawsuit like this...
Fateor42, Mar 25, 2026
+20
Here's the [full complaint](https://dicellolevitt.com/wp-content/uploads/2026/03/Grok-Deepfake-Lawsuit-City-of-Baltimore.pdf) for anybody who's interested.
bobdob123usa, Mar 25, 2026
+15
I know it really isn't his lawsuit even though it says Mayor of Baltimore, but Brandon Scott has been one of the few politicians in MD who has actually impressed me. He seems to actually care instead of following a bunch of talking points. Which also means the rest of the establishment will probably squash him when his term runs out.
JustHereForCookies17, Mar 25, 2026
+5
Seconded. I'm a big fan of Mayor Scott. I live in DC & I wish he were our mayor.
Doom-Sleigher, Mar 25, 2026
+5
Get off X platform. Musk is for pedos
ViperThreat, Mar 25, 2026
+4
I'm doubtful this will manifest into anything. Don't get me wrong, I wish it did, but the simple fact is that precedent is not on our side.
As with other things, our laws typically say that tools cannot be blamed for how people choose to use them. It's "guns don't kill people, people kill people" all over again.
JcbAzPx, Mar 25, 2026
+11
I don't think Grok will have the excuse of only being meant for making salacious n**** of animals. Or that it stripped that picture of a kid in self-defense.
jimsmisc, Mar 25, 2026
+3
It seems like the AI tools are cracking down on deepfakes. As a joke I tried to create an image of myself with the Swedish bikini team (is that even still a thing?) and got my hand thoroughly slapped when the AI refused to edit any picture with "bikini" in the prompt.
More concerning, perhaps, is that I was testing Grok for use in ads for an exercise product. I asked it to show something like "attractive woman in flirty fitness gear". It generates multiple images at once for you to choose from. It did a great job, but I was pretty shocked when one of the pictures was obviously a girl, not a woman. I have to assume that with creative prompting, some incredibly disturbing imagery would come out.
rufio313, Mar 25, 2026
+2
Yeah, Grok has been over-moderated in response to all this drama for a couple of months now. A lot of people are complaining that regular prompts that aren't remotely sexual are getting moderated and blocked now.
Since Grok was trained on p***, it's actually more effective to not say anything overtly sexual and let the training model fill in the gaps with sexual content. It's getting better at catching itself doing this, but it's still fairly easy to slip past moderation this way, at least with fully AI-generated stuff. I think they cracked down harder on uploaded images (which is good).
suvlub, Mar 26, 2026
+1
You know that "salacious n****" are not the only thing they can generate, right? Are you under the impression that someone at some point had to go out of their way to add the capacity to generate them? That's not how it works.
JcbAzPx, Mar 27, 2026
+1
That doesn't matter. It can and there's no legal justification for it being able to.
suvlub, Mar 27, 2026
+1
Then it doesn't matter that a gun can be used for self-defense and hunting; it can be used for murder, and there's no legal justification for it being able to.
CantaloupeMedical951, Mar 25, 2026
-1
Just as a gun isn't responsible if its user uses it to commit crimes, an LLM isn't responsible if its user uses it to commit crimes.
JcbAzPx, Mar 25, 2026
+3
But the creator can be responsible for giving it the ability to do crimes in the first place. There is no legal justification for creating n*** images of real children. There are legal reasons to use a gun to kill.
thevictor390, Mar 25, 2026
+3
The rub here is that Grok is not just a tool. It is a service that you can hire to operate the tool. In fact, you cannot operate the tool directly yourself.
So sure, you cannot blame a tool for being capable of doing illegal things. But what happens when you hire a company to do those illegal things for you? And the company cheerfully complies? And, to top it all off, publishes those illegal things publicly? To make the gun analogy work, you would need a company that runs a service letting you tell them where to point and shoot their gun.
The real debate is not about the tool, but about the service. How much blame can you put on the company that runs the service that is misused? What about after they were notified of the issue?