News & Current Events Apr 3, 2026 at 9:42 PM

‘Disturbing incident’: Police investigating after Lake Zurich High School students distribute AI-generated nude images of classmates

Posted by SpicyPandazor


Lake and McHenry County Scanner
A police investigation is underway after students at Lake Zurich High School were reported to have used AI to generate sexually explicit images of female classmates, with school officials calling it "disturbing."


150 Comments

Chiron17 Apr 3, 2026 +615
Teens and AI is a match made in hell. It's going to end up horribly for everyone.
615
questron64 Apr 3, 2026 +266
The wisdom and restraint of a teenager combined with the wisdom and restraint of AI companies.
266
Prestigious_Ad5314 Apr 3, 2026 +93
Yeah. “Monkey with a machine gun” meme comes to mind. Or would, if Trump didn’t already own it.
93
collectivebarganing 5 days ago +1
Great band name, monkeys with machine guns
1
Much_Guest_7195 5 days ago +1
He's more like a horse in a hospital... https://youtu.be/JhkZMxgPxXU?si=xpPoawvIGAt7-QDg
1
[deleted] Apr 4, 2026 +4
[removed]
4
Due_Examination6139 Apr 4, 2026 +7
I'm honestly surprised this does not happen a whole lot more.
7
JustHereForCookies17 6 days ago +12
I'm sure it does, but they're not getting caught. 
12
goldbloodedinthe404 6 days ago +7
For certain. There are probably a bunch of guys with AI-generated p*** of classmates that they just use for their own personal spank bank.
7
[deleted] Apr 4, 2026 +31
[deleted]
31
VagabondReligion 6 days ago +7
Reminds me of the 'global village' the internet was going to be. No one thought it would be corporate data mining of every possible human pathology.
7
TheColossalX Apr 4, 2026 +12
what generative ai was publicly available to teens in january 2021? deepfakes have existed since 2017 or earlier but that’s not the same thing.
12
[deleted] Apr 4, 2026 +3
[deleted]
3
numbermaniac Apr 4, 2026 +11
ChatGPT was released in November 2022. That's the one that really exploded all of this into public consciousness.
11
RolloTonyBrownTown 4 days ago +1
Just liquidating critical thinking skills for an entire generation.
1
tkMunkman Apr 3, 2026 +872
I hope they don't punish the victim again :/
872
Joebebs Apr 3, 2026 +213
again??? Wait what??
213
tkMunkman Apr 3, 2026 +529
https://www.wsaw.com/2025/11/12/girl-13-expelled-hitting-classmate-who-made-deepfake-p***-image-her-lawyers-say/
529
cloistered_around Apr 3, 2026 +244
She punched someone out and got into some trouble. *Then* the group that was sharing the photos started to get in trouble after that.
244
ClaudeGascoigne Apr 3, 2026 +186
What an insane timeline we're living in. People can make AI CSAM based on a real person, however the **f***** that's even possible, and nobody bats an eye until the victim starts throwing hands. It's some real *Black Mirror* shit.
186
breadandbunny Apr 4, 2026 +14
Oh my f***. I have been calling it the Twilight Zone, but Black Mirror fits so much better! So true.
14
Fair_Blood3176 5 days ago +3
*Black Mirror* is definitely the modern-day Twilight Zone.
3
Crocodilian4 Apr 3, 2026 +18
It’s possible because the models were trained on CSAM for (presumably) this very purpose.
18
ShiningRayde 6 days ago +15
All I know is, Epstein got murdered and suddenly every billionaire on his list started investing hard in the computer that gives them the advice and... vices, that he used to provide.
15
TucuReborn Apr 4, 2026 +22
Technically not even required. The way an image gen model works, it basically "learns" what things look like based on text tags given to an image. So the model knows what a child looks like, because it was told what images have a child. It knows what b**** look like, because it was told what images have b****. It knows what yoga looks like, because it was told what images have yoga. None of these topics, on their own, is inherently problematic content. But if you have a model that has minimal or no filtering, or you just learn how to prompt in ways to get around the filtering, you can combine things in problematic ways. It is possible that some degree of CSAM is present in training data, but likely not intentionally. Intentionally including it is a liability nightmare, no way around it.
22
ObservableObject Apr 4, 2026 +20
Right, this is always brought up, and there are just too many examples of cases where the idea that the model needs to be trained on something in order to make it doesn't make sense. I can go ask for a photo of a cow wearing a space suit on the moon, and there's definitely a model that will spit one out. I can almost guarantee you that no major AI model was trained on pictures of cows wearing space suits. It was, however, trained on a lot of photos of cows. And a lot of photos of space suits, and people, and people in space suits. It can reasonably figure out what the style of a space suit would be, that it covers the major appendages, that it has a helmet where you can see the face. It knows what a cow's head is, so it knows to put the helmet there. It knows what is roughly considered the body, and the appendages, etc. That's *especially* true in cases like the article, where they're doing this to a real person, since i2i and i2v already give it so much to go off. It already has the entire face it needs, potentially the body shape, etc. And the fact that it is a teenager makes it even easier, since the physical differences between a petite 18-year-old (which a lot of AI models were definitely trained heavily on) and a 16- or 17-year-old aren't really that crazy, vs like an 18-year-old and a 10-year-old.
20
OwnBattle8805 6 days ago +4
Bullying has been a problem in society for millennia. I’m not dismissing it but it’s not surprising.
4
nyg1219 Apr 4, 2026 +2
I'm a little not up to date on the lingo... Does csam stand for child sexual assault material? Or?
2
natanaru Apr 4, 2026 +10
Child sexual abuse material, but yes, it's basically the same idea.
10
nyg1219 Apr 4, 2026 +3
Okay. Thank you.
3
androshalforc1 6 days ago +10
Honestly, that behaviour has been happening for decades: group of bullies picks on someone, school does nothing, picked-on person finally snaps, then they get punished.
10
spicychickenandranch 6 days ago +6
Punishing the victim??? Have we lost the plot???
6
androshalforc1 6 days ago +15
It’s always been the plot.
15
inverimus 6 days ago +7
They were punished for violently lashing out. This isn't new, it happens with bullying all the time.
7
breadandbunny Apr 4, 2026 +3
Sigh. Knowing this is probably what's going to happen is utterly maddening. I hate this place.
3
sayn3ver Apr 4, 2026 +80
"Another parent said the number of victims is more than 40 girls and ranges from high school to fifth-grade." That's vile
80
kstargate-425 Apr 3, 2026 +184
This is only going to get worse, and instead of going after the AI like Grok that does this, they will collectively punish everyone by stripping more of our privacy and rights, making everyone enter biometrics or IDs to access anything on the internet 😒
184
darwinevo Apr 4, 2026 +50
How about educating the boys to be decent humans, not future rapists?
50
cosmic-untiming Apr 4, 2026 +42
You can only do so much for those unwilling to learn.
42
darwinevo Apr 4, 2026 +27
Well when a country institutionalises pedophilia by electing a pedo president, what can you really expect. Business as usual.
27
TheBluePriest 6 days ago +9
I think news stories like this do a bad job of making us see the reality of the situation. There are billions of people in the world. Hundreds of millions of those are teenage boys. For every one kid that does this, hundreds of millions aren't. One kid doing this doesn't mean that boys aren't being educated to be decent humans. In fact, it more so says they are, because if they weren't, this would be an epidemic. It's such a horrible thing that we will hear about it every time it happens. Don't get me wrong, one time is too much, but one time out of hundreds of millions isn't the systemic issue you're implying it is.
9
TylerFromMillerTime 6 days ago +46
Woah I think you’ve figured it out! Someone get to the news! We just need to tell everyone to be decent to each other. So simple! You did it! World peace!
46
darwinevo 6 days ago +7
Educate. Don't outsource parenting. Teach them about consent, right and wrong. Do the bare minimum, so society doesn't have to rehabilitate your f*** up.
7
Raider_Scum 6 days ago +4
Sure, this has always been the answer. But we also need to engineer our world with the understanding that many people won't do this, that many humans are either raised incorrectly, or just lost the brain l****** and are psychopaths. We can't place all our stock in trying to raise every human perfectly. Bad people will always exist, so we need to create safeguards that protect the rest of society from them.
4
SlinkySlekker 6 days ago +4
Exactly. A huge part of the problem is how boys are being raised to disregard the value and rights of girls and women.
4
SeaTurtleLionBird Apr 4, 2026 +9
Only problem is the worlds biggest rapist is our President
9
parabostonian 6 days ago +5
Hey, don’t call it that slave name. Call it by the name it chose for itself: MechaHitler. They want to get rid of jobs and strip our rights to empower MechaHitler.
5
SpectralMagic Apr 3, 2026 +91
What, is Grok attending the classes?
91
HyperionSwordfish Apr 3, 2026 +18
Just Elon.
18
darwinevo Apr 4, 2026 +6
No, it's the same old boys being boys, future rapists and all.
6
GuaranteedCougher Apr 4, 2026 +48
AI image/video generation was such a terrible invention. Barely any positive uses for it and now it's way easier for people to create child p***, spread misinformation, undercut everyone in creative jobs and take the human soul out of our culture.
48
Boollish Apr 3, 2026 +456
Is anyone surprised at this point? Contrary to the claim that AI will cure cancer if we just invest another $200 billion into data centers that can summarize an email, the only two actually financially viable cases for AI are illicit p*** and scamming the elderly.
456
reala728 Apr 3, 2026 +93
No surprise now. Wasn't a surprise the last time it happened. Won't be a surprise the next 1000 times it happens. The surprising part about any of this is that they just don't do anything to regulate it and prevent stuff like this.
93
JeanieIsInABottle Apr 3, 2026 +47
Well, all of our politicians are bought by the people who make and invest in this shit, so the lack of common sense regulation is actually par for the course.
47
Far_Kangaroo2550 Apr 3, 2026 +8
I'm surprised an edgy vigilante hasn't started trying to distribute ai p*** of politicians and their families.
8
crunkadocious Apr 3, 2026 +12
That's 100% happened already though
12
Far_Kangaroo2550 Apr 3, 2026 +2
They were spamming letters/emails to the politicians and @-ing them on Twitter and stuff? That's beautiful if so. I don't have the balls to do that tbh.
2
SoftlySpokenPromises Apr 4, 2026 +1
The whole 'feature not a bug' mentality
1
NKD_WA Apr 3, 2026 +9
There are lots of regulations we should put on this AI garbage, but I don't think any of them would prevent this. At least not any more than you could regulate someone Photoshopping their classmate's head onto a p*** image. This type of stuff is extremely low-effort for AI and can be done on a phone or PC without access to an entire datacenter full of GPUs or any models from a major AI slop vendor. In other words, there's not much you can do about some mouthbreather in Russia posting some open source "generate a n*** image" app. I guess all we can really do is appropriately punish people who create and distribute it.
9
[deleted] Apr 3, 2026 +6
[removed]
6
unit187 Apr 4, 2026 +5
To be fair, running a local open-source model is a HUGE friction point. If a teen wants to troll their classmates using local AI, they need 1) a powerful enough PC, 2) the technical knowledge to set up and use things like ComfyUI, and 3) patience they don't have. These friction points would reduce the AI abuse by 99%, especially among teens. I just don't see a troll kid watching hours of boring tutorials learning how to use ComfyUI.
5
Leaflock Apr 3, 2026 +12
Reading an email and then acting on it will be the most financially viable case. Just imagine how many jobs boil down to “protocol converter for natural language to a system”, in other words, read an email, log into a system, enter or retrieve data, and return the email. That must be billions, if not trillions of dollars of work that AI can do easily.
12
IndividualChart4193 Apr 3, 2026 +10
Oh, and approving human drugs in days rather than years. Check out the latest GLP-type drug approved… in 60 days because of AI… yeah, not inspiring a lot of confidence.
10
Major_Muggy Apr 3, 2026 +7
The AI used for medical research and cancer treatment is vastly different from the image and chat AIs being used for p*** and scams.
7
Gorge2012 Apr 3, 2026 +8
Hey, let's not forget political disinformation. They are working hard to make sure AI is unregulated for that purpose.
8
Nytshaed Apr 3, 2026 +12
Coding seems to be financially viable. Also I work in biotech and AI driven personalized medicine really is something that is being worked on.
12
Squire_II Apr 4, 2026 +24
> Coding seems to be financially viable.

Only in the short term, if that. Years of vibe coding is going to create insurmountable technical debt and companies full of "programmers" who don't have the first clue as to how to read their own vibe-coded "work".
24
Sword_Thain Apr 3, 2026 +12
You should look at all the recent vibe coded Windows patches that MS has to roll back within hours of release and get back to us. They introduced AI to Notepad and that allowed viruses to be deployed via Notepad. But processing large amounts of information could be good. If it doesn't hallucinate and tell you to inject bleach.
12
Nytshaed Apr 3, 2026 +10
Ya companies need to be less ambitious with the AI coding. We use it, but it's always reviewed by people and tested. I think it's really powerful for good coders, but it doesn't replace needing a good coder and process. For the medicine thing, it's antibody design, not like personalized advice to a layperson. Taking data about the surface of pathogens or cancer cells and predicting antibody designs that would do well targeting it. Then manufacturing the antibody for that person to treat the disease.
10
clashrendar Apr 3, 2026 +4
Actually had Apple's AI lie to me in the email summary last week, so it can't even do that with any reliability.
4
zzztoken Apr 4, 2026 +2
No one was surprised the first time something like this happened. People who have actually been paying attention have been screaming about this happening since day 1 of this AI image shit.
2
Biobooster_40k Apr 3, 2026 +2
Don't forget the cat videos. I'm surprised at how good they've been able to produce Cat vs Godzilla shorts to the point where they get the suitmation down as well as the VHS quality. $200 billion well spent if you ask me.
2
redyellowblue5031 Apr 3, 2026 +9
I’m also frustrated and see many downsides. Though, this is an overly simplistic view of what “AI” broadly can be used for. To give a tangible example that can help millions of people: Weather forecasting. Boring, unsexy, and mundane to most. But machine learning “AI” models have quickly started to catch up to and exceed conventional models we rely on. They take less compute and run faster. This is already a thing and will continue to grow. This can be instrumental in understanding risk for a hurricane’s path, severe weather like tornadoes, or building more accurate climate prediction models. Again, not saying we should go full tilt and just trust tech CEOs on this (we emphatically shouldn’t), but I do think you’re perhaps being hyperbolic in how little use they have in the right context.
9
crunkadocious Apr 3, 2026 +13
When folks say AI right now, they're mostly referring specifically to LLMs. I don't know which kind of AI you're referencing (machine learning is really broad) but LLMs are what the datacenters are being built for.
13
redyellowblue5031 Apr 3, 2026 +8
I understand that the vernacular use of the term "AI" currently refers largely to LLMs/GenAI kinds of things, largely due to OpenAI's success in co-opting the term for ChatGPT. I'm simply trying to highlight that [really cool stuff](https://www.ecmwf.int/en/newsletter/178/news/aifs-new-ecmwf-forecasting-system) is also in the realm of what this kind of computing can do. If we shrug all "AI" off as "just" LLMs and image/video generators, we're going to miss some pretty awesome opportunities to advocate for truly helpful things.
8
TucuReborn Apr 4, 2026 +5
And even "just" LLMs and image gen are still useful tools. LLMs make excellent rubber duck exercises, as long as you remember it's a rubber duck exercise. Image gen can be useful for placeholders, rapid concept visualization, or even personal use. And we've seen useful advancements in image/video gen being put into other tools, helping speed up and optimize tedious processes.
5
[deleted] Apr 3, 2026 +3
[removed]
3
crunkadocious Apr 3, 2026 +8
How much of total AI spending does that subset account for? Maybe that's why it's the focus.
8
burndata Apr 3, 2026 +3
Here's an idea then: shut down the stupid LLMs popping up everywhere and convert all those data centers to focus on curing cancer. Billy doesn't need to generate a rainbow-farting dinosaur with lasers for eyes and a cannon in its chest. But we sure could use those CPU cycles to actually do something useful.
3
Retireegeorge 6 days ago +1
Narrator: Surprisingly, it turned out the most profitable scams were those that targeted the elderly performers *in* illicit p***.
1
VampireHunterAlex Apr 4, 2026 +14
This is all only going to get worse before it….gets even worse….
14
throwawaytheist 6 days ago +15
Korea made laws banning deepfakes because of how frequently Korean boys were doing this to classmates and Korean men were doing it to co-workers.
15
Blighton Apr 3, 2026 +12
People will keep doing it as long as there is no punishment. A slap on the wrist doesn't stop anybody from doing this
12
lostroadrunner22 Apr 3, 2026 +40
Is that an AI image of the school?
40
Toomanyeastereggs Apr 3, 2026 +92
No. My understanding is that all schools in the US look like someone combined a shopping mall with a prison.
92
dumbasstupidbaby Apr 3, 2026 +28
... Hold up I need to go Google what schools in other countries look like. You telling me your schools didn't have inch thick iron doors? No windows in the classrooms? Security guards stationed and patrolling throughout? Cameras at every entrance?
28
arand0md00d Apr 4, 2026 +12
Mine has 50 cal nests on the roof 
12
Fallouttgrrl Apr 4, 2026 +3
And of course all those p**** inspection days?
3
AnotherBoojum 6 days ago +1
My primary school barely had a fence 
1
LeMatDamonCarbine Apr 3, 2026 +4
yeah, more or less
4
MasterCheef117 Apr 4, 2026 +9
It is not. I actually went to lzhs. The large building further back is the theater addition they built in the early 2000s. The part of the school in the foreground is actually the back of the school where buses came to pick kids up. 0% ai.
9
Rugby562 Apr 3, 2026 +4
Looks like an early concept rendering
4
KaptainKardboard Apr 3, 2026 +2
Well, at least it's not an unclothed image of the building
2
Tater_Mater Apr 4, 2026 +20
That is f****** disgusting
20
Kind-Philosopher5077 Apr 3, 2026 +9
Is there one of these every week!?!? AI should not be able to create underage p**********, or any p**********, but underage is undeniably damaging to humans and if it can do this why wouldn't it do worse...
9
anormalgeek 5 days ago +1
The technology is already out in the wild. There are open source packages you can download, modify as you see fit, and run on your home PC. It is obviously a lot slower than a big data center, but the end result still gets created. Pandora's box has already been opened. There is no way to stop this at the point of creation. The computer has no morals, any more than a hammer does. It just does what it's told. The only way to confront this is to address the humans in the loop. Someone has to tell the software what to do. Educate, and punish, those people.
1
MooseOnMushies Apr 3, 2026 +108
charge the kids doing this with distribution of child p**********
108
s-kennedy Apr 3, 2026 +104
And the AI company for producing it. Unless there are real consequences for them, this will keep happening (and as long as it's not just slap-on-the-wrist levels of punishment, we will see how fast these AI companies build guardrails that actually work).
104
snooze_sensei Apr 4, 2026 +4
The technology is already out there. Local image generation exists that can be done offline on any desktop pc, completely free of guardrails built into the online tools.
4
SolWizard 6 days ago +2
Yeah it probably wasn't done using one of the public models. I don't think they'd allow anything like this
2
CptUnderpants- Apr 3, 2026 +37
Production, possession, and distribution. Not sure if all three are separate crimes in that jurisdiction but it is where I live.
37
darwinevo Apr 4, 2026 +5
They'll get 3 months, if not less.
5
TauCabalander Apr 4, 2026 +4
... or a job in the current administration.
4
ubix Apr 3, 2026 +21
It’s weird how willing people are to criminalize the kids, but don’t even mention the people working at companies that code AI to produce child p***
21
Cubey42 Apr 3, 2026 +34
It's not that it's coded, and it doesn't have to be trained on CSAM. A common misconception is that AI has to be trained on something in order to make it, but the reality is that AI can understand multiple things and combine them into something new. A giraffe mixed with a mouse isn't something the AI was trained on, but it can still create a comparable result. Say I trained an AI with only videos of a man playing a piano, and there were no women playing piano in the dataset, but there were women elsewhere in the dataset: it would still be able to produce a result with a woman playing a piano, because it can take the two concepts "a woman" and "playing piano" and understand the context.
34
Figshitter 6 days ago +1
> its not that its coded,

Surely the policies and internal safeguards that should protect against the creation of child sex abuse material would also be considered part of the 'code'?
1
Cubey42 6 days ago +6
Well, that's just it. When they say "policies and internal safeguards," most of that is just layers of interfacing with the actual model. Just like how before you get to drive a car (let's say AI is the car), you still have to buy it, get insurance, and a license. (There are multiple clever ways to guardrail in the dataset, but they aren't ironclad.) Basically, it's all just frontend that helps keep a model from doing things they don't want it to. The issue is, much like cars, if you have it you can mod it however you please. That's usually why big companies don't share the full car and just let you test drive it around their track (because there are guardrails there), but there are plenty of cheaper cars that cost nothing and can be souped up however you want, because at the end of it all, it's just math.
6
CraneBoxCRP Apr 3, 2026 +17
Because they created it and distributed it.
17
MooseOnMushies 6 days ago +3
A camera allows you to produce child p***, yet the majority of people don't use them for that.
3
sportsworker777 Apr 3, 2026 +7
*Bah God, that's Punch Kid's music!*
7
hausofmiklaus Apr 3, 2026 +7
Sam Altman we will try you for your crimes
7
JeanieIsInABottle Apr 3, 2026 +79
AI has been nothing but a net-negative for society, I don't know how people can see it being used for shit like this and then defend it. Jesus christ.
79
360walkaway Apr 3, 2026 +10
People thought it would be like the engineering computer in Star Trek, but it's just a media machine and plagiarism generator.
10
eastsiderhere Apr 4, 2026 +2
They had the 3D immersive Holodeck. Think about that.
2
VicViolence Apr 3, 2026 +16
The thing is, this was all inevitable. The technology was going to be developed by someone, no matter what, and it was going to be put out there, no matter what. What we didn’t need to do was immediately start building the entire infrastructure of our society on it
16
Zenshinn 4 days ago +1
Drinking alcohol is legal. What's the net positive and net negative of that?
1
No-Midnight-2187 Apr 3, 2026 +25
Reminds me of a get-together with my gf's friends where one woman was saying how her boyfriend put a pic of his gf and another friend through AI. It had them making out together, and he showed a few people. She made comments about how he thought it was hot… but she was pretty uncomfortable about it, and it was kinda clear he was a creep… AI sucks
25
Sladay 6 days ago +5
Oh, this is Illinois. They just updated state law to cover AI-generated images both criminally and by allowing victims to sue in civil court. Plus they updated the CSAM statute to also include AI-generated images.

Edit: In Illinois, non-consensual dissemination of private sexual images, often called "revenge p***", is illegal under 720 ILCS 5/11-23.5. It is typically a Class 4 felony, punishable by 1 to 3 years in prison and up to $25k in fines. Sharing these images is illegal even if the subject originally consented to the photo/video being taken.

Additionally, the CSAM statute was updated so that creating or sharing AI-generated child p*** is a Class 1 felony (4–15 years in prison). Possession with intent to view is a Class 3 felony (2–5 years). Subsequent offenses can become Class X felonies.

Victims of non-consensual image sharing can also pursue civil remedies in parallel with any criminal case, allowing a victim to be awarded the greater of actual proven losses or $10,000, plus profits received by the defendant for selling any images, court-awarded punitive damages, and the plaintiff's attorney fees and court fees. Victims can also seek injunctive relief with court-ordered takedowns and restraining orders against defendants. A lawsuit must be filed within 2 years of discovery that an image had been shared, or even the threat of an image being shared. Court documents can be filed using a pseudonym to protect the victim's identity.

Victims also have rights under the Illinois Crime Victims Compensation Act: they may be eligible for up to $45k in state assistance to cover therapy, lost wages, or relocation costs, provided they file a police report and cooperate with authorities.
5
SlinkySlekker 6 days ago +4
I hope every boy involved suffers the maximum penalty for each incident, and that their families become known for raising predators. Society needs to come down hard on the exploitation of girls, and the boys and men who feel entitled to do it. This country is devolving into pure degeneracy, because boys and men are not held accountable.
4
ry-yo Apr 3, 2026 +31
no mention of where it is, but I'm assuming it's not near Lake Zurich in Switzerland
31
SpicyPandazor Apr 3, 2026 +49
This is a town in Illinois, USA.
49
Proud_Tie Apr 3, 2026 +26
if you heard about the kid who punched a nazi at a school protest against ICE - it's that school.
26
Fried_puri Apr 4, 2026 +38
Anyone else feel like the girl's dad's response is... a little weird?

> “Honestly, I have no ill will towards that young man or his family. Kids are kids, and they do dumb things just like adults do. So, especially at that age, they don’t comprehend the severity of what they do,” Daniels said.

If that had been my teen daughter and someone had been making and spreading n*** images of her to her classmates, teenage boy or not, that's not the reaction I would have.
38
SlinkySlekker 6 days ago +7
He devalued his own daughter, and is making excuses for the boy who exploited her. Gee….I wonder who he voted for in the last presidential election.
7
nyg1219 Apr 4, 2026 +25
I think it's a very reasonable comment. And he's right. He could be seething with rage on the inside, but also knows that nothing will change or be fixed by him screaming or attacking the kid.
25
[deleted] Apr 4, 2026 +7
[removed]
7
[deleted] Apr 4, 2026 +7
[removed]
7
RosieQParker Apr 3, 2026 +16
Prosecute the perpetrators and the AI slop merchants that enabled them.
16
nowitscometothis Apr 3, 2026 +7
Ya, I don’t get why the AI company isn’t having to answer for how it was used to create child p***
7
ubix Apr 3, 2026 +2
Trump would just pardon them eventually
2
ripyourlungsdave Apr 4, 2026 +3
"The police have said not to ask any questions about why they're keeping all the pictures at Dave's house."
3
biscuitarse Apr 3, 2026 +5
But tight regulations are bad, eh?
5
Malaix Apr 3, 2026 +6
I mean, child p*** does seem to be the crowning achievement of generative image AI. So it's not exactly shocking. Wasn't Grok like the biggest engine for CP for a while there? Is it still?
6
twistedstance Apr 3, 2026 +5
I can easily imagine a kid doing this for an edgy laugh with how available this tech is now. It’s a scary time to be a kid.
5
Shiftymennoknight Apr 4, 2026 +4
How about we start imprisoning the AI CEOs? I bet that would put an immediate stop to this shit
4
MasterCheef117 Apr 4, 2026 +2
My high school back in the news again? Damn LZ has gotten wild.
2
thepianoman456 6 days ago +2
Let me guess… the AI company will face no charges.
2
snowshoeBBQ Apr 3, 2026 +3
Was it the same kid who got punched in the face?
3
goomarbitch Apr 4, 2026 +9
Genuinely think we need to make harsh laws around this and consider it CSAM: send any distributors to juvie until 18 at minimum. There needs to be a consistent, hardline message that this *is* illegal. They can do some kind of program for sex offenders and be threatened with the label for five years, something like that.

Maybe this sounds crazy, but we NEVER punish people who do this kind of thing sufficiently, in my opinion. Why is it that sex crimes end up with such low sentences compared to, like, having a gram of weed? Honestly, I think this Epstein shit shows there's a top-down problem in this country with pedos and degens who have an expectation they will never be punished.

F*** that, no more 'boys will be boys'. This poor 13-year-old girl got punished before they did, and the school clearly tried to sweep it all under the rug. My dad would have backed me up 100% if I punched a boy who did such vile shit to me, he'd have to be held back himself lol
9
itcheyness Apr 4, 2026 +5
It actually is already considered CSAM under current US laws. Basically, if you can't tell when looking whether it's real or not, it's supposed to automatically be considered CSAM.
5
bwmat Apr 4, 2026 +1
Is that something relatively new?
1
itcheyness Apr 4, 2026 +2
No, it's been the definition for ages as far as I understand it.
2
n33dwat3r 6 days ago +6
Because police are mostly bullies and abusers themselves. Look at their reported domestic violence rates and understand that's a fraction. Even if it's illegal they selectively decide who to arrest and charge.
6
bwmat Apr 4, 2026 +3
I feel like making this something that labels you a sex offender would just water the label down, like the claims that public urination can already get you labeled one.
3
Toomanyeastereggs Apr 3, 2026 +3
AI is so beneficial for society /s. F*** you Altman. I hope history judges you harshly for the evil that you have foisted onto our planet.
3
zzztoken Apr 4, 2026 +2
Oh no it’s almost like these companies were told this exact thing would happen!
2
WasteBinStuff Apr 3, 2026 +3
Narrator: This is far more than merely disturbing, on several different levels.
3
Epsilon_Centauri 6 days ago +1
So this is going to keep happening every week because our leadership refuses to regulate AI
1
SlinkySlekker 6 days ago +4
It’s also a parenting problem, because boys are becoming more predatory, earlier. This keeps happening. It’s always boys targeting girls, to exploit them and ruin their lives, simply for existing. “The number of victims is more than 40 girls and ranges from high school to fifth-grade.” When you have a country that elects a rapist and pedophile, and a political party who puts men in power regardless of credible r*** and pedophilia accusations, you create a perfect storm of female exploitation and zero accountability.
4
Vegetable-Error-2068 6 days ago +1
You can rest assured that any scandal at an American high school ends with the perpetrators being rewarded and the victim punished.
1