News & Current Events Mar 27, 2026 at 7:56 AM

EU votes to ban AI 'nudifier' apps after explicit deepfake outrage

Posted by clamorous_owle


RFI
EU votes to ban AI 'nudifier' apps after explicit deepfake outrage
The European Parliament on Thursday approved a ban on artificial intelligence tools that generate sexualised deepfakes without consent, as lawmakers also voted to delay key parts of the EU’s landmark…


178 Comments

someocculthand Mar 27, 2026 +542
I wonder how they're going to attempt to monitor this. Doesn't this cover all AI image generators, many of which can be run locally with basically any modern gaming GPU?
542
Filias9 Mar 27, 2026 +393
You can do a lot of illegal stuff locally. You can make a new Disney movie, with Disney logos and everything, on your computer. But you have a problem when you share it or even sell it. I would say it's the same with AI nudification. AI can do it. You can run the models. But if you share it, it's your problem.
393
CoffeeSubstantial851 Mar 27, 2026 +164
AI nudify apps are businesses, and hosting them/processing their payments etc. will be illegal. AI models used in other graphics software packages will need to not be able to create ANYTHING that could be considered a n*** human of any age. For example, how is Adobe PS different from a nudify app if they allow you to prompt a n*** version of someone via Firefly? It's not just sharing... it's selling non-consensual deepfakes as a service.
164
Thog78 Mar 27, 2026 +36
They will put a watchdog AI in place that monitors the requests and refuses the inappropriate ones. It won't be 100%, and people who want to find workarounds will manage, but it will stop most of the trend, and the ones that break the rules are gonna be easier to prosecute, so they'd better keep their "creations" private, which at that point does little harm.
36
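For illustration only, here's a minimal Python sketch of the kind of request-side watchdog being described; `flag_request` and the keyword list are hypothetical stand-ins for a real trained moderation classifier, not any actual service's implementation:

```python
# Rough sketch of a request-side "watchdog" gate (hypothetical, illustrative only).
BANNED_TERMS = {"nudify", "undress", "remove clothes"}

def flag_request(prompt: str, has_uploaded_photo: bool) -> bool:
    """Crude check: flag prompts that pair banned wording with an uploaded photo."""
    text_hit = any(term in prompt.lower() for term in BANNED_TERMS)
    return text_hit and has_uploaded_photo

def handle_request(prompt: str, has_uploaded_photo: bool) -> dict:
    if flag_request(prompt, has_uploaded_photo):
        # Refuse and keep an audit record; the record is what makes
        # prosecuting rule-breakers easier later on.
        return {"status": "refused", "reason": "non-consensual imagery policy"}
    return {"status": "accepted"}  # the actual generator would be invoked here

if __name__ == "__main__":
    print(handle_request("nudify this photo of my classmate", has_uploaded_photo=True))
    print(handle_request("a watercolor landscape at sunset", has_uploaded_photo=False))
```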
bostwickenator Mar 27, 2026 +4
Selling is sharing.
4
Pleasant_Narwhal_350 Mar 28, 2026 +3
My issue with the EU right now is that they might actually attempt to ban everyone's local AI models of any sort because they *might* be used to create deep fakes. Evidence: look at how they've been trying to ban end-to-end encryption for everyone because it *might* be used to hide crimes. https://www.europol.europa.eu/media-press/newsroom/news/european-police-chiefs-call-for-industry-and-governments-to-take-action-against-end-to-end-encryption-roll-out
3
unematti Mar 27, 2026 +1
Aye, basically you'll need to provide your model for free if you want it to do noods for people. I'm not exactly sure... But generally, companies like to make money...? Even if they release a nudifier version of their model for free, they may get into trouble since it's the same model, the news will not put too many details in the headlines, and you might also catch some serious investigation. I imagine they won't develop these models much after the legislation is out, and that the available ones are gonna be less and less up to date.
1
Cr4shK00l Mar 27, 2026 -34
You don't understand how neural networks work if you think someone sat down and went 'alright, let's train this model so it can make p***'. Gen AI is able to do n**** because it's able to do humans. If you don't want it to be able to make n****, then you'll have to remove its human-making capabilities too, making it worthless for art.
-34
MadeyesNL Mar 27, 2026 +24
I assumed those models ingested large amounts of n*** imagery/p*** as training data in order to be able to generate it. Does it work differently?
24
moofunk Mar 27, 2026 +13
You can do that, but you can also mix several disparate trained things into one synthesized image. As such, you can "invent" p***, even if there is no p*** in the model, by the model hallucinating it. That's why diffusion models are interesting/dangerous. The only remedy there is that you can finetune some things on top of a model to explicitly deny drawing certain things before publishing it, like putting blank areas on top of nudity, but if you're crafty, that stuff can be removed again.

But then, the advent of effective edit models within the last year, like Flux Klein, makes it even more trivial to just combine actual p*** images with whatever else you want in the image. That means the model just has to be good at combining or replacing things in an existing image, and then you provide the material yourself. You don't need to train anything at all, and the model doesn't need to be trained on any questionable material.

The short answer is that you can't really stop anyone from making lewd imagery of anyone with a few days of figuring out how it works, and the models for editing are becoming more and more effective. You need a bit of software, some freely available models and a large gaming GPU, and off you go. The cost is your time and electricity bill.
13
Cr4shK00l Mar 27, 2026 +2
If they did, then it was only indirectly; the end goal was for gen AI to be able to generate picture-perfect humans, and that includes naked humans indirectly. Even if you remove all the p*** from the data set, if you feed it enough PG data it will be able to do p*** anyway. There's nothing 'smart' about AI, it's just a probabilistic machine outputting the highest-matching next probability, pixel by pixel.
2
CoffeeSubstantial851 Mar 27, 2026 +1
Nope. If you think they didn't use all the p*** they could find you're naive af.
1
Copatus Mar 27, 2026 +4
> if you think someone sat down and went 'alright, let's train this model so it can make p***'.

There are many online communities dedicated to doing exactly that. You have no idea what you're talking about lol
4
Cr4shK00l Mar 27, 2026 +1
Except we're talking about pre-training models not the models that are already out there.
1
CoffeeSubstantial851 Mar 27, 2026 +1
In fact it's most of what those communities are about. For a long time most of the models on huggingface for image gen were to do with celebrities and p***. I wonder why?
1
Xsiah Mar 27, 2026 +3
I have had Gemini refuse requests before. There is a difference between humans and p*** - we can detect it, so an LLM can be trained to detect it too. Services can refuse prompts that ask them to do something they're not supposed to, and they can analyze the output before sending it back to determine if it's going to be something it's not supposed to show. It won't be perfect right away, but it doesn't seem like it would be impossible.
3
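A matching sketch of the output-side check described here, with a hypothetical `nsfw_score` placeholder standing in for a real trained image classifier (no real API is implied):

```python
# Rough sketch of an output-side safety check (hypothetical, illustrative only).
NSFW_THRESHOLD = 0.5

def nsfw_score(image_bytes: bytes) -> float:
    """Placeholder classifier; a real service would plug a trained model in here."""
    return 0.0

def deliver_or_block(image_bytes: bytes) -> dict:
    score = nsfw_score(image_bytes)
    if score >= NSFW_THRESHOLD:
        # Block and log rather than showing the user what was generated.
        return {"delivered": False, "reason": "output failed safety check"}
    return {"delivered": True, "size_bytes": len(image_bytes)}

if __name__ == "__main__":
    print(deliver_or_block(b"\x89PNG\r\n..."))
```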
Cr4shK00l Mar 27, 2026 +1
Yes, those are the security measures that are being taken by companies that own the models and provide the inference but it will not be possible for local models
1
Xsiah Mar 27, 2026 +3
Sure, but we don't really enforce what people do where regulatory bodies can't see. Your 3D printer can print a gun. You can roll dice and take bets from your friends on the outcome. You can drink the raw milk from your cow. But you can't sell or operate those things publicly - that's basically as far as public policy can ever go.
3
ExoticWeapon Mar 27, 2026 +3
You clearly don’t understand anything about anything lmao.
3
CreepHost Mar 27, 2026 +2
"AI making Human Art" is an oxymoron lmao
2
Cr4shK00l Mar 27, 2026 -8
it's a tool meant for artists, not a replacement.
-8
cxmmxc Mar 27, 2026 +6
You don't understand how people work if you think anyone will hire an artist for a commission after everyone has access to an ai to do it for them. How about you show off all your art that you've made, and tell me how it's improved your process.
6
CoffeeSubstantial851 Mar 27, 2026 +4
Hes a moron. If it was a tool for artists, artists wouldn't be getting laid off f****** everywhere.
4
Cr4shK00l Mar 27, 2026 -4
only shitty artists are getting laid off.
-4
Kakkoister Mar 27, 2026 +1
> making it worthless for art.

It already is, because it's not producing art, it's producing imitations. It has no personal feelings or lived experiences, it doesn't understand how to actually craft art, to draw, etc. It's just mixing around data based on learned patterns from other people's actual art.

> gen ai is able to do n**** because its able to do humans.

Wrong. It's able to do n**** because human genitals and nipples are in the training data, and especially lots of imagery of sexual acts. Without that being included, it can still "do humans", just not n*** ones.
1
Cr4shK00l Mar 27, 2026 -1
If gen AI were capable of only replicating what's in its training data, then it wouldn't be capable of generating completely new art, which it has already proved it can do. It's statistical probability, not pattern matching.
-1
Hour_Baby_3428 Mar 28, 2026 +8
Feel free to change my mind, but if these ai pictures are strictly for yourself and never get shared then I don’t exactly see an issue with them? It’s certainly weird and morally questionable but like, I can draw people n*** too or imagine them in my head. It would only become an issue once they go public and possibly humiliate the actual person. If I write a bunch of insults in my notebook that I don’t show anyone then I haven’t insulted anyone, have I?
8
SqueezerOfFarts Mar 28, 2026
You made Jesus sad
0
cxmmxc Mar 27, 2026 +33
Can't say how happy I am to see the discourse around this has turned into common sense. Not a year ago the topmost comments in all deepfake threads were "waaah you can't ban software, they can't stop me from generating images so they can't enforce this, so why make it illegal, it won't stop all people from making deepfakes." Which, if you think about it for more than two seconds, is pretty much the same as saying "making murder illegal won't stop all murders so why even bother." Guess we should've been fine with a harmful thing because we'll never stop it 100% from happening.
33
qtx Mar 27, 2026 +6
> if you think about it for more than two seconds Most people do not think more than two seconds. They have their initial thought and that's it, they stop thinking about it more. Whatever their initial thought was is their stance on that subject. They never go to the second step, the part where they dissect their initial thought and look at it from different sides or question if that initial thought made any sense. It's depressing to witness that this has become the norm these days.
6
BialyKrytyk Mar 27, 2026 +9
Exactly, that's pretty much how it is. AI is a tool, just like a chef's knife. The latter needs the ability to cut meat by design, and the fact that we are all made of meat should not force all knife manufacturers to only make blunt ones. If someone is making deepfake p*** and posting it online, prosecute them for it, the same as you would someone using a knife to stab someone.
9
sandwichman7896 Mar 27, 2026 +2
In this scenario, the chef would be murdering the equivalent of wax figures made to look like people. It's also interesting that there is zero conversation about how humans refuse to come to terms with pleasure and their own reproductive process.
2
ArmNo7463 Mar 27, 2026 +5
I'm not sure anyone is advocating banning AI n**** of generic/non-existent people. It's when it's used to capture the likeness of actual people that it's harmful.
5
sandwichman7896 Mar 27, 2026 -1
Sounds a lot like the arguments against the Tea app
-1
Splenduit Mar 27, 2026 +7
It would indeed be very difficult to regulate those who generate such content locally. The only option would be to ban websites from sharing dora, lora, checkpoints and other training data for adult content. There are only a limited number of websites that host AIs that generate this content for regular users, a lot of times they are paid too, and the average user is unlikely to waste their time and money setting up and learning how to run an AI locally.
7
ArmNo7463 Mar 27, 2026 +3
If people do it locally and don't share it, is it such a big problem? It's when it's shared/published that you should throw the book at them, in my opinion. Which makes Grok an interesting case, because you could do deepfakes on the Grok website and keep it largely private - I have no idea why the majority of it is using public X posts. I think degenerates flew too close to the sun on that one. If they had used private sessions, it likely wouldn't have blown up this much.
3
dimwalker Mar 27, 2026 +2
You would need to really go after them - those websites would still function as t****** trackers do. They don't store or distribute anything, just mention illegal content and tell you where to get a t******. I imagine that's what would happen to those models - decentralized hosting. Not to mention you can still train your own model. It would be a shitty one, but you can.
2
WingerRules Mar 27, 2026 +5
I think in the long run we're going to end up with AI built into OSes monitoring what you do for violations & threats, including what you do with locally run AI. There are so many people who stand to benefit from it happening:

* Data & OS companies gathering as much data as possible from OS users to sell
* Law enforcement agencies
* Legislatures and local governments, which will be able to ensure their laws are followed even before law enforcement gets involved
* Companies monitoring their employees
* IP holders such as software, movie & game companies
* Monitoring for pirating or IP-infringing AI-generated work
* Anti-cheat in games
* Early threat detection by secret security services
* As a security layer for users who get their computers stolen or compromised
* Governments wanting to track dissidents and fringe/controversial groups
* Schools & universities monitoring for cheaters and people lazily using AI to do their homework
* etc.
5
tb5841 Mar 27, 2026 +5
That doesn't really work with any free, open source OS.
5
Matshelge Mar 27, 2026 +3
As you say, it's a problem when you share or sell them. This is what the law is getting wrong. They want to make the technology itself illegal.
3
BrainOnBlue Mar 27, 2026 +1
I mean, to be clear, making that example Disney movie *is* copyright and/or trademark infringement (depending on the specific content). It's illegal whether or not you share it. But it's hard to go after people that don't get caught.
1
unematti Mar 27, 2026 -1
That "if you share it, it's your problem" is how the companies wash their hands of it. It's a good thing they need to look out or get banned
-1
hegbork Mar 27, 2026 +65
You can make bombs at home too, doesn't mean it's a good idea to make it legal to sell bombs at the supermarket.
65
vctrmldrw Mar 27, 2026 +35
Someone posts a nudified image. They get traced. They get prosecuted for using a nudifier, because they've been banned. Just like how heroin possession has been banned. It's enforced by prosecuting people who possess it.
35
ArmNo7463 Mar 27, 2026 +6
I'd argue if you post the image, you're the drug dealer in that example. If you generate it at home and never share it, that's closer to a drug user. (Such a heroin user is also unlikely to be prosecuted.)
6
vctrmldrw Mar 27, 2026 +3
Whether or not a prosecution is likely is not relevant to my point. A lot of people seem to assume that a ban is an attempt to eliminate the existence of a thing. I'm simply pointing out that it isn't. A ban is instead the criminalisation of the supply, use or possession of a thing. The thing still exists. It's quite possibly still readily available. A person simply runs the risk of being prosecuted if they're caught with it, or using it, or supplying it. That's likely to put a large chunk of the population off using it.
3
IntelArtiGen Mar 27, 2026 +12
> Does this not comprise all AI image generators, many of which can be locally run with basically any modern gaming GPU? Many generators are limited and can't do this, but even when they're not they probably won't care much as long as you don't share what you generate if it's local. Technically you don't even need an "AI" to do this so the problem is more what you do than the tool you use.
12
Valtremors Mar 27, 2026 +10
Sometimes regulation does not mean that the thing is shot behind the sauna and it stops existing. Sometimes regulations are made beforehand to work as/with a framework when legal issues come up. It is a lot easier to start planning from there than plan when it finally becomes a major issue that needs to be dealt with. Stealing is illegal. However not every case is prosecuted or dealt with.
10
tlst9999 Mar 27, 2026 +4
Same with other illegal online stuff I suppose. No one's going to search your PC without a warrant. But don't be dumb enough to post it on twitter.
4
jawstrock Mar 27, 2026 +2
Go after platforms that allow it to be hosted or sent through their messaging apps. Make it the problem of the companies that unleashed this hell.
2
unematti Mar 27, 2026 +2
You can run a Facebook equivalent at home, with full chat applications with e2e encryption. So yeah, home-run stuff won't technically be affected, but isn't it still good that they stop the big companies doing it for the lazy AHs? Yes, you can set it up at home, but most people doing this will not go through the learning or spend the money. You know what, I'll speculate further that those who have the opportunity and the knowledge to do this at home are a group with a much lower percentage wanting to do it. Modern GPU, sure... but not on consoles. That also cuts out a huge fraction of people.
2
glasgowgeg Mar 27, 2026 +2
There are laws against the creation of CSAM, that doesn't make cameras illegal. Same logic applies, if caught doing it, you are prosecuted.
2
YF422 Mar 27, 2026 +1
Likely they aren't going to go after every single thing, as there's too big a grey area: on so many apps abuse is possible, but the software isn't intended to be used that way. It's probably more likely to target Grok and Musk specifically, for not only going out of their way to make it widespread but encouraging such degenerate behaviour. Basically they're trying to make sure they can shut down those deliberately trying to commercialise such a function, in Europe at the very least.
1
Mr_ToDo Mar 27, 2026 +1
I'd guess that it's down the lines of other "could be used for bad stuff, but can also be used for other things" rules. You target services that are being pushed primarily for said uses, and/or require that said services put in some level of protection against those uses. Not the first time we've had to deal with things like this. You set things up so it's harder for people to casually do it (and have something to hit them with if they do). Just like a Masterlock on a shed: it might not be great protection, but it'll stop a good number of thefts just because it makes it a bit harder.
1
TheLuminary Mar 27, 2026 +1
Making it illegal is the first step in charging the really problematic people. Not to stop every person from doing it.
1
Flatus_Diabolic Mar 27, 2026 +1
Westerners tend to think about the individual, not the collective whole: if this is a problem, how can we stop individuals from doing it? I don't think laws really work that way. We have laws for murder, but murder still happens. Europeans have gun control laws on top of their murder laws, and still people get shot (although much less than in America), so the laws work, but at a societal, not an individual, level.

So the point of laws is to reduce undesirable behaviour in your society, not to expect it to stop completely. One incident of law breaking doesn't make the law useless. You were speculating on tech changes to try to make nudefaking impossible (eliminating the problem, rather than just mitigating it) and then rightly concluding such a thing will never work.

Although the EU has a history of really stupid tech legislation, my guess is they won't try anything so impractical. Instead, they'll just regulate the platforms. Preventing mass-market access will drive the problem way down, but not eliminate it. It also scales well, because you're regulating a few hundred companies, and it gives you easy access to takedown orders and that kind of thing.

If you make it about the individuals, then you're going after *production*, and that's virtually impossible, since anyone with a $50 graphics card can generate this stuff. Then suddenly you're running around having to identify individuals and prosecute them, which will quickly prove how toothless you are, because a lot of the perpetrators will be in other jurisdictions or good enough at covering their tracks that all you can do is go back to the distribution platform and ask them to shut it down anyway.
1
CoffeeSubstantial851 Mar 27, 2026 -22
It will end up banned as a technology similar to human cloning. AI is an existential threat to societal cohesion and over time it will see more and more regulations eventually leading to an outright ban+jail time.
-22
cursortoxyz Mar 27, 2026 +30
You can’t do human cloning in your kitchen, but you can run AI models even on your phone. It can’t be banned.
30
lamBerticus Mar 27, 2026 -16
But you can f*** everyone up using and sharing these.
-16
cursortoxyz Mar 27, 2026 +17
I’m not defending this or other malicious uses, those should be regulated. I’m only saying that the underlying technology can’t be banned.
17
tegat Mar 27, 2026 +3
You can ban anything; the efficiency of the ban differs. We banned murder in pretty much every country on Earth, yet it is still happening. The point is that it will be far less prevalent than "@grok nudify this person" (or any other service).
3
CoffeeSubstantial851 Mar 27, 2026 +2
I think you don't understand what the term ban means. Banning something does not mean literally no one is using it and it magically disappears from existence. All it means is that the laws around it are so strict and enforcement is so punishing as to make its practical every day use very very difficult.
2
rimeswithburple Mar 27, 2026 +1
Didn't Sony have some kind of IR algorithm on their cameras that showed n*** people through their clothes? That ended up being banned didn't it? Am I remembering that right? It has been some time ago.
1
Vizth Mar 27, 2026 +5
Banned, no. If I remember right they voluntarily recalled it. It was hardware IR filtering, not a software feature.
5
shokalion Mar 27, 2026 +3
It wasn't anything approaching an algorithm. It was just a camera mode operated by a mechanical switch which did two things:

* Turn on an IR light source on the front of the camera
* Mechanically remove the IR filter from the lens stack inside the camera

The intention was to make the camera able to take pictures in extremely low, even completely dark, light levels and [it did work very well](https://i.ytimg.com/vi/Eklh46UOW50/maxresdefault.jpg). The unintended consequence of this was that some materials (that a dress, for instance, might be made of) are more transparent to IR light than other things (such as skin), and the result was it turned certain clothing ghostly while leaving the person underneath visible.
3
cursortoxyz Mar 27, 2026 +1
Happy cake day! This sounds interesting. I looked it up, it was a hardware feature that "disabled" the internal IR filter for Night Mode. From what I found it could be abused in certain scenarios and Sony only disabled the feature when there was too much light. [https://tech.yahoo.com/cameras/articles/fact-check-posts-claim-sonys-230000700.html](https://tech.yahoo.com/cameras/articles/fact-check-posts-claim-sonys-230000700.html)
1
CoffeeSubstantial851 Mar 27, 2026 -14
Wrong. They will literally monitor your phone and all software on it. Your Internet use will be monitored by ID and logged. If this is not done you will see societal collapse from the collapse of a shared reality and economic collapse caused by white collar worker displacement. Governments have every incentive to control the internet, computation and your access to it. You can thank AI and the AI industry for that.
-14
Marquesas Mar 27, 2026 +13
Jesus christ you're kilometers deep in the doomer mindset. Have you considered getting off the internet
13
JeffSergeant Mar 27, 2026 +9
"Come at me bro"- Linux user.
9
ArawynD Mar 27, 2026 +2
Yeah because they weren't doing that before AI. Were the age verification rules also because of AI?
2
CoffeeSubstantial851 Mar 27, 2026
Yes they are connected.
0
Logitech4873 Mar 28, 2026 +1
Nah, that's an impossible task.
1
someocculthand Mar 27, 2026 +13
Locally run offline applications are already everywhere; that cat isn't going back in the box. I'm not against regulation, mind, but targeting run-of-the-mill AI apps seems really impractical. Criminalizing the sharing of non-consensual AI fakes would seem like the more realistic alternative, combined with a more active approach to having them taken down when they appear, as well as making sharing platforms more responsible for their turf.
13
jonkoeson Mar 27, 2026 +3
Almost certainly already illegal, certainly if it can be shown to have had the intention to harass.
3
EmbarrassedHelp Mar 27, 2026 -1
I would hope that the legislation was written to target businesses that are explicitly offering such services. But this statement seems to put that into question, and raises the risk of general open source AI models being targeted:

> Systems with safeguards will not be affected. Lawmakers said AI tools with "effective safety measures preventing users from creating such images" would remain allowed.
-1
Nervous-Shower4870 Mar 27, 2026 +100
Waiting for the “EU is restricting innovation” comments.
100
oldsecondhand Mar 27, 2026 +12
"We can't let the Chinese win the nudifying race!"
12
Owlstorm Mar 27, 2026 -85
The only way to actually enforce this would be to lock down every home computer to such an extent that it can only run government-approved programs. That's basically insane, and would fall under "EU is restricting innovation", yeah. Some kind of selective enforcement against the people making money selling these things is the best we can hope for.
-85
NTFRMERTH Mar 27, 2026 +9
Do you have the same views when it comes to banning CP?
9
H4llifax Mar 27, 2026 +59
That argument makes no sense. There are also rules for gambling sites/apps that work, and I don't see how your argument is any different for those.
59
Foxkilt Mar 27, 2026 +12
Computer gambling is something that, intrinsically, cannot be done locally 
12
H4llifax Mar 27, 2026 +36
You cannot restrict what people can build themselves, but you can restrict what people can sell.
36
Cartina Mar 27, 2026 -9
Nor can the average person run their own AI either. Some offenders will of course spend the time to look it up, but part of the problem is one-click solutions. Even a small barrier to entry makes a big difference for the vast majority. Installing and maintaining their own version of a nudifier would be "too much" for most.
-9
ChurchOfTheNewEpoch Mar 27, 2026 +15
You can easily run AI models locally. I have a small LLM running right now using ollama. (basically a one click solution) I have also used image gen models locally on my PC. My PC is nothing special. Really not that hard to figure out.
15
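For a sense of how low that barrier is, a minimal sketch using the `ollama` Python package against a locally pulled model (the model name is just an example, and the Ollama server must already be running):

```python
# Minimal local-LLM chat call via the ollama Python package (pip install ollama).
# Assumes a model has been pulled beforehand, e.g. `ollama pull llama3.2`.
import ollama

response = ollama.chat(
    model="llama3.2",  # example model name; any locally pulled model works
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(response["message"]["content"])
```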
Antroplasm Mar 27, 2026 +1
I agree, down with EuroCorp my fellow disciple!
1
ChurchOfTheNewEpoch Mar 27, 2026 +2
*The brave hide behind technology. The stupid hide from it. But the clever have technology, and hide it.* From the Book of Cataclysm
2
darkmage2015 Mar 27, 2026 +2
I mean, you say that. However, even now my GPU driver offers an optional AI bundle which downloads and sets up tools including image generators, so it really is not hard. It does for now require at least a decently powerful PC, though as time passes that barrier becomes smaller.
2
Owlstorm Mar 27, 2026 -4
Like somebody else mentioned, online gambling is inherently multiplayer with a real business behind it. This is more like the government deciding to ban solitaire. A program that doesn't need internet or a bank account or any central infrastructure. They can start with telling operating systems and app stores not to include it out-of-the-box, which would probably work ok. The hard part is that the files for solitaire are already on millions of computers and can be copied forever without the EU's involvement. If you imagine that all computers everywhere are purged of those files *somehow*, you then need to stop people writing new solitaire programs and more broadly kill general purpose computing.
-4
H4llifax Mar 27, 2026 +13
What kind of argument is that? Child p*** and sharing copyrighted files are also illegal, despite nobody being able to purge all those files. These laws will most likely say:

1. You aren't allowed to make n*** generators your business.
2. You aren't allowed to share or host generated n****.
3. You aren't allowed to distribute n*** generators on your app stores.
4. If your business is in image generation, you have to put in safeguards so they aren't used to generate n****.

Something like that. Notice how nowhere in there is anyone even attempting to do anything to stuff on your computer.
13
theodordiaconu Mar 27, 2026 -3
I think this is because with a 400€ gpu you can do this at home with ease. Sure it’s not an ‘app’ but you can get the same functionality
-3
Think_Wing_1357 Mar 27, 2026 +5
Look man, murder is illegal, murdering still happens every year. Does that mean it's not worth outlawing it?
5
Nervous-Shower4870 Mar 27, 2026 +12
And there you are! Quicker than I expected, I must say. By your logic, they shouldn't ban child p********** either, since people with access to children can just take pictures and store them on their local computers. No law is enforceable completely without 100% surveillance, but we still have laws and still don't have 100% surveillance, because the objective of laws is to deter people and make it harder to commit a crime. If we go by your definition of innovation, I'd rather the world stagnate in a rut for all eternity. I may come off as condescending, but I don't care, since your statements were rather pedantic and obtuse.
12
WTFwhatthehell Mar 27, 2026 +5
There absolutely 100% have been a bunch of attempts to add government controlled "clipper chip" type systems to **all** personal computers to allow constant government surveillance of every click using CP as a justification.
5
Nervous-Shower4870 Mar 27, 2026 +5
That does not seem to pertain to my point, but that is a good-to-know fact. Which government, specifically, though?
5
WTFwhatthehell Mar 27, 2026 +5
The one I named is an example. The clipper chip was designed to give a backdoor to every personal computer. China has the "green dam" software required to be installed on every machine 
5
MrHaxx1 Mar 27, 2026 +2
> China has the "green dam" software required to be installed on every machine

16 years ago lmao https://en.wikipedia.org/wiki/Green_Dam_Youth_Escort
2
WTFwhatthehell Mar 27, 2026 +3
More recent stuff includes JingWang app
3
Nervous-Shower4870 Mar 27, 2026 -1
Cool
-1
Vizth Mar 27, 2026 +23
I don't see this impacting anything hosted outside the EU. If you're in it, use at your own risk, but I don't see this impacting a single service hosted in a non-EU country. At worst you'll need a VPN to access it.
23
kreton1 Mar 27, 2026 +26
If you explicitly market for the EU, EU law applies.
26
Raiden29o9 Mar 27, 2026 +35
Yeah, glad to see some countries looking into this, especially after seeing what people did and continue to try to do with Grok… even discounting my dislike of gen AI, it was just f****** disgusting and creepy what this shit enabled.
35
SpiroG Mar 27, 2026 +45
Cool, cool. Those apps are basically scum and I'd be happy if the resources wasted on AI are used for less intrusive purposes. HOWEVER, can we also look at banning voice deepfakes? Sure, a n*** deepfake is obviously a massive privacy intrusion and extremely embarrassing and should count as a crime... But faking someone's voice is exactly the same, imo. In my humble opinion, whether I'm violated visually (n*** pics) or audibly (my voice is used to get me in trouble or for other nefarious purposes) it's exactly the same thing. Sharing either of these should count as a violation of privacy, identity theft, and imo assault or harassment and land anyone in jail for a while.
45
azmarteal Mar 27, 2026 +6
> HOWEVER, can we also look at banning voice deepfakes?

Oh, we can for sure, but it is like banning p****** games or banning mp3 music that was obtained illegally - a.k.a. pointless. All it takes to create a deepfake voice is a 5-second sample and 1 mouse click in ComfyUI. You can make whole audiobooks with that.

> Sharing either of these should count as a violation of privacy, identity theft, and imo assault or harassment

You are a bit off with the qualifications here (you could throw in assassination, murder, r*** and genocide as well while you're at it, why not lol) - but sharing n*** pictures of a person was always illegal anyway, regardless of the method they were created with.
6
CoffeeSubstantial851 Mar 27, 2026 +10
We have laws against murder. But, people still do it. So, I guess we should get rid of those laws then huh?
10
Microdose81 Mar 27, 2026 +7
No, we should enforce the murder laws stricter.
7
azmarteal Mar 27, 2026 +1
You didn't understand what I have written?🤔 Then go read the first part again
1
welshwelsh Mar 27, 2026 -5
The difference is that murder is bad, while generating images on a computer isn't.
-5
ShaqShoes Mar 27, 2026 +8
No one said the issue is "generating images on a computer"- it's a specific type of image that is at issue. Is your argument that creating deepfake p********** of people without their consent is ok?
8
Nibbled92 Mar 27, 2026 +30
That's alright. I already saw dozens of n*** pictures of Britney Spears and Christina Aguilera in the early 2000s.
30
ziroux Mar 27, 2026 +7
Loading slowly on dialup, the bottom part of the picture could be *very* different, so no masturbation until complete
7
dick_blanketfort Mar 27, 2026 +3
If you downloaded the compressed .jpeg.exe version it loaded much faster.
3
Cicer Mar 27, 2026 +1
Dialup in the 2000s? Ooph. Have had DSL of at least 256Mb since 1994, and I know it was available before that too.
1
ziroux Mar 27, 2026 +3
Ah, must be living in a nice area.

Edit: I got 33600bps but some friends got 54k. Perks of living in a shit hole, laugh, etc
3
AdFeeling842 Mar 27, 2026 +53
A lot of pervy listnookors acting like this tech is no big deal. Who cares if some teenager gets their life wrecked? Totally harmless 'tech', except that kids get seriously messed up from this, some deal with it for years, and some even end up killing themselves.
53
Horikyou Mar 27, 2026 +55
I think people are more concerned about how these things are going to be enforced in the future. I don't want government spyware on everything I own. "Protect the children" has always been a propaganda slogan used to introduce very intrusive measures of control.
55
ComfortableExotic646 Mar 27, 2026 +19
People on listnook used to care about privacy. Now, every website wants its own version of your PII and people on listnook are cheering because supposedly children are going to be kept safe... somehow. They haven't figured out the last part yet, but it's true because they say it is.
19
Logitech4873 Mar 28, 2026 +1
What's PII?
1
CoffeeSubstantial851 Mar 27, 2026 -4
The problem with your thought process is that it's a slippery slope argument and can be applied to anything. At any point while trying to solve a problem you can say this solution could lead to X, therefore don't do any solutions. Well, this is the real world; real people are being harmed, and society will need to address those harms before they actually get out of control. Beyond that, we have only had computers in everyone's homes and pockets for a couple of decades, and the internet wasn't widespread until the late 90s/early 2000s. These sectors have gone largely unregulated, and now that we are seeing real, tangible problems arise from that... shit's going to have to change.
-4
Old_Leopard1844 Mar 28, 2026 +2
How many slippery slopes do you need to fall down before you stop parroting "but it's a slippery slope" to shut off pushback?
2
AuryGlenz Mar 28, 2026 +1
How many teenagers will have real n**** leaked and now they’ll be able to claim it’s just AI? Seems like the real thing would be way more harmful than anything fake. The math isn’t as simple as you make it out to be.
1
[deleted] Mar 27, 2026 -27
[removed]
-27
Admirable-Land111 Mar 27, 2026 +17
Isn't the distribution of AI generated n*** images already illegal under the revenge p*** laws? You were allowed to create but not distribute and this law makes it harder to create. From a quick glance, it seems like most countries are already moving in that direction. I truly don't see why this is a bad thing.
17
CoffeeSubstantial851 Mar 27, 2026 +3
Its not a bad thing. These people just want AI because they hate artists.
3
Bennely Mar 27, 2026 +5
You aren't wrong, morally or legally, but there is a grey area. The "Take It Down" Act, recently passed, does put guardrails around the creation and distribution of deepfake AI, but it doesn't stop it entirely. The nuance that's missed here is that "AI" isn't and won't be banned; "deepfake p***" already is. But even then, AI-enabled p*** isn't entirely illegal either. AI is our modern-day dynamite: an amazing tool that can and will help to advance human society, but also a weapon of mass destruction.
5
Hydronum Mar 27, 2026 +12
Okay, but we have regulations on what can't be served out of a kitchen. Just like we have AI apps, but restrictions on some outputs. Your own argument is c***, get a better one.
12
Happy_Feet333 Mar 27, 2026 -6
*Okay, but we have regulations on what can't be served out of a kitchen.* That's exactly right. And that's what the focus should be on. Not banning AI apps. 
-6
Hydronum Mar 27, 2026 +3
AI is the kitchen; the apps are the plated output.
3
SimoneNonvelodico Mar 27, 2026 +3
Nah, the output of the apps is the output. The apps really are usually just the thinnest wrapper around the AI. Like maybe instead of having to write the entire prompt, a "nudifier app" would be just one where you paste a photo and it writes the prompt for you. To be clear I don't mind if those specifically get removed, but even something like Grok on Twitter is far more general than it, though it can be used that way. You can ask that it refuses these requests and it mostly will. The potential itself though is still latent in any image generation model.
3
Happy_Feet333 Mar 27, 2026 -4
The app is your interface with the AI. It's basically your arms and hands.
-4
Hydronum Mar 27, 2026 +2
It is the prepared and plated outputs. The Kitchen would be the training Algos. You aren't at the kitchen.
2
Happy_Feet333 Mar 27, 2026
The plates of food are the n*** pictures themselves.  Geez.
0
Hydronum Mar 27, 2026 +4
No mate. The apps are specific flavours of AI interaction, designed and tailored to a specific want or desire and served to be visually distinct when in front of you.
4
Schmarsten1306 Mar 27, 2026 +7
Idk what kind of kitchen you got or if I'm just being omega stupid right now, but I don't think I can cook up poison in my kitchen
7
[deleted] Mar 27, 2026
[removed]
0
FeistyClam Mar 27, 2026
No, the normal list of things people imagine in a kitchen just doesn't include poison ingredients, and the person is asking for an example. To be honest, unless your kitchen serves pufferfish or something, I'm struggling to come up with a cookable poison without dipping into the cleaning chemicals- so I'm a little lost too. You're not being baited, they just want an example to try and understand your point. And you could name one for them without giving a recipe and break no rules. 
0
duperfastjellyfish Mar 27, 2026
The restaurants that offer poison on the menu should be banned, yes.
0
SimoneNonvelodico Mar 27, 2026 -9
Right, without AI deepfake image generators suddenly bullying will vanish and all kids will get along instead. Since as we know, the entire concept of it was born suddenly a few years ago when Midjourney came out. The tools change with the times but the dynamics are social, not technological. Stomping this doesn't somehow save the kids, there'll be just another way bullies use to humiliate their victims. You can keep playing that whack-a-mole forever and never get anywhere if all you focus on is just whatever's fashionable at a given time.
-9
rookie-mistake Mar 27, 2026 +8
> Right, without AI deepfake image generators suddenly bullying will vanish and all kids will get along instead

This is such a bad argument, I'm sorry.

- Nobody is saying it would fix all bullying.
- Should murder not be illegal because banning it doesn't fix human mortality? Should we discard all measures that aren't panaceas?

Like, you set up a strawman and then chose a line of argument that doesn't even carry water with the invented point you decided to attack lol

I think it's perfectly reasonable to try to curtail the more problematic uses of generative AI. It's here to stay, and it has a ton of positive and helpful uses, but establishing the guardrails early and effectively is going to determine how harmful widespread adoption ends up being.
8
Zemledeliye Mar 27, 2026 -4
I don't think AI should be regulated at all beyond CSAM, I'm sick and tired of this babyfication and nanny state BS regulation of the internet and the EU proves once again it shouldn't exist 
-4
SotirisFr Mar 27, 2026 +5
Why regulate AI CSAM then, based on your logic? We absolutely should ban AI nudification in general, it's something that does not really benefit the world and has incredible potential for harm. And sure, if you think it's a nanny state because people can't sell services that make n*** pictures of people without their consent, I guess in this specific instance I'm pro nanny state.
5
Zemledeliye Mar 27, 2026
> Why regulate AI CSAM then, based on your logic?

Because children are a protected class of society, and because regular CSAM is illegal and AI CSAM would make police work harder.

> We absolutely should ban AI nudification in general, it's something that does not really benefit the world and has incredible potential for harm. And sure, if you think it's a nanny state because people can't sell services that make n*** pictures of people without their consent, I guess in this specific instance I'm pro nanny state.

Disagree, and if your measure of something being legal or not is "benefit" then you are pro-nanny state in general; you would champion the restriction of anything as long as someone made a good enough risk assessment analysis. As for consent, consent isn't a thing on the internet; you lose the right to privacy and ownership of your image the second you post it online. Why complain about just undressing? What about people downloading it? Showing it to their friends? Maybe you intended that bikini picture to only be for your OF subscribers but now it's everywhere?
0
PM_ME_UR_CREDDITCARD Mar 28, 2026 +1
Scum like you are exactly the reason why it's needed. Someone posting a normal picture of themselves on the internet does not give scumbags the right to deepfake n**** of them. And what about pictures taken by other people and posted without the subject's consent? Should scum like you be allowed to take creep shots of random women in public and deepfake n**** of them? Because that shit is disgusting, creepy, morally and ethically wrong. Like you.
1
krneki534 Mar 27, 2026 -34
As a teenager, I was running half naked on the beach every summer. Why the hell are you kids so traumatized these days, and why is it your religion?
-34
Life-Aid-4626 Mar 27, 2026 +23
Half naked != fully naked

Consented half naked != unconsented fully naked

Are you truly that stupid? How about you share pics of yourself. I'll generate you f****** your boss's partner, and send it to them. Will it get you fired? What if I generate you at a whorehouse and send it to your partner and family? How many of them will believe it and cut off contact?
23
krneki534 Mar 27, 2026 -16
I could not care less. Also, chances are no one will reply for days, because no one is into phones.
-16
EvilLalafell42 Mar 27, 2026 +2
Classic listnookor defending the creation of CP
2
SpecsMaker Mar 27, 2026 +5
Explicit apps can be banned easily. I think most popular AI companies will scrap photo and video generation and focus on enterprise use cases and agent workflows. Will reduce a lot more energy waste and allocate resources for better use. They can’t monitor local models running in private servers or homes but uncensored models will also face a road block or some sort of compliance checks. Whatever they do it would be very difficult to enforce it 100 percent.
5
TediousTotoro Mar 27, 2026 +6
Hopefully they don’t make an exemption for Grok like the UK did
6
unematti Mar 27, 2026 +2
Just as music should stay AI-free, so should p*** and bullying.
2
reassor Mar 27, 2026 +2
Do they know it's not apps?
2
ElApple Mar 27, 2026 +4
I saw one of these ads on Instagram, it was so blatant. I reported it as sexual exploitation and Meta deemed it didn't breach their TOS. For 3 days straight they were the only ads I got. Companies advertising these tools should be held accountable.
4
Slaaneshdog Mar 27, 2026 +3
Good luck with that. People are gonna be able to do this in private on locally run models. There's already tons of open-source AI-generated p***.
3
AsianButBig Mar 27, 2026 +2
Going after the low hanging fruit. If you don't do something about the models themselves, anyone can claim they are hosting a general purpose AI model that also happens to have a nudifier feature.
2
IndividualB00t Mar 27, 2026 +2
Will they ban X too as Grok is doing it regularly?
2
Viscious-viking Mar 27, 2026 +2
Too bad, so now when someone says they're gonna release your n**** you can't say it is AI
2
RexDraco Mar 27, 2026 +1
I for some reason thought it already was. 
1
jugalator Mar 27, 2026 +1
People who do this with services that have done their homework and block EU users can just use a VPN. I doubt operators of nudifier services will do more than that step though, because they want the subscription/AI-credit money. I suppose it's more about signalling a message: don't let this be a valid business model within the EU, and don't have X straight up allow EU users to do this stuff.
1
Logitech4873 Mar 27, 2026 -2
Sadly completely unenforceable.
-2
Abject_Breadfruit148 Mar 27, 2026 +1
Twitter won't stop until a fine actually gets put in place.
1
jakreth Mar 27, 2026
I just think there's no need to ban them, because they will lose meaning on their own. People will get used to it and it will become like someone cutting a face out of a photo and pasting it onto a n*** body.
0
The_Woven_One Mar 27, 2026 -11
Honest question. I am not trolling. Would someone please explain why it's so bad that there are fake pictures of naked people? It's not like they are actually pictures of the subjects.
-11
Devil-Hunter-Jax Mar 27, 2026 +11
It's creating non-consensual p**********. Simple as that. These people didn't consent to their head being put on a n*** body or having software generate an image/video of what it thinks they look like naked. Some of it can get extremely graphic and vile as well.
11
The_Woven_One Mar 27, 2026 -3
I'm thinking of Mad magazine, with the extremely over-exaggerated caricatures of celebrities. Why does it matter if it's sex feel-good chemicals instead of humor feel-good chemicals?
-3
CoffeeSubstantial851 Mar 27, 2026 +5
Scale and ease of access. Teenagers are taking their classmates photos off of social media and making AI p*** of them. Guess who else is doing it? Predators taking childrens/teenagers photos and feeding them into nudify apps.
5
Zemledeliye Mar 27, 2026 -2
> It's creating non-consensual p**********. Simple as that. These people didn't consent to their head being put on a n*** body or having software generate an image/video of what it thinks they look like naked.

You can't demand privacy and then post your picture on the internet. If you don't want people doing anything with your image, don't post it online, simple as. We shouldn't have nanny state laws to cater to people's laziness or idiocy.
-2
Devil-Hunter-Jax Mar 27, 2026 +4
Leave it to Listnook to come up with excuses for why non-consensual p********** is actually ok.
4
Zemledeliye Mar 27, 2026 -2
Muh consent. People like you are why we are going to end up with mass surveillance states and why we are going to lose the free and open internet before the end of the decade. Yeah, let's ban all image editing software because you are unable to accept the internet isn't your private playground.
-2
Silvere01 Mar 27, 2026
I want you to take a deep breath and actually think for a second, and realize that you are blaming the guy who doesn't want non-consensual p********** of teens, instead of all the people that use said AI because they actually want that non-consensual p**********.
0
RM_r_us Mar 27, 2026 +8
Seriously dude? How about you send me a photo, I nudify you, then share it on a platform for anyone to see and share. Eventually someone who knows you recognizes you, maybe your boss or coworker. You think you'll just have a laugh and get over it?
8
blksilksheetz Mar 27, 2026 +10
And think of some of the countries/cultures where this could be life or death. India still has honor killings. Not to single them out, I just have Indian friends, so I can easily imagine a situation where this means someone dies.
10
bandsam Mar 27, 2026
As long as you accurately represent my 11 inch hog
0
PM_ME_UR_CREDDITCARD Mar 28, 2026 +1
Anyone saying that on listnook is 100% guaranteed to be closer to 0.11
1
Iwontdobetter Mar 27, 2026 +4
Because people won't know it's fake and will assume it's an actual naked picture of that person. I think one possible solution is that every browser should have the built-in capability to detect if a picture is AI generated or enhanced, and automatically label it as such to the user.
4
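One way such labelling could plausibly work is provenance metadata. The crude sketch below just looks for an embedded C2PA ("Content Credentials") marker, which some generators attach to their output; a real browser feature would parse and cryptographically verify the manifest with a proper C2PA library rather than doing a byte search, and the absence of a marker proves nothing:

```python
# Crude provenance heuristic (illustrative only): look for a C2PA manifest marker.
from pathlib import Path

def has_c2pa_marker(path: str) -> bool:
    data = Path(path).read_bytes()
    # C2PA manifests are embedded in JUMBF boxes labelled "c2pa".
    return b"c2pa" in data

if __name__ == "__main__":
    example = "photo.jpg"  # example filename
    try:
        print(example, "carries a provenance marker:", has_c2pa_marker(example))
    except FileNotFoundError:
        print(example, "not found")
```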
The_Woven_One Mar 27, 2026 -5
"hey, that's not me" Boom.
-5
Heavy_Milk2757 Mar 27, 2026
Elon "pedo guy" Musk.
0
[deleted] Mar 27, 2026 -15
[removed]
-15
Same_Win_5898 Mar 27, 2026
Terrorism is not the answer.
0
[deleted] Mar 27, 2026 -6
[removed]
-6