· 179 comments · Save ·
News & Current Events May 6, 2026 at 2:51 AM

Canadian fiddler sues Google after AI Overview wrongly claimed he was a sex offender

Posted by scaur


Canadian fiddler sues Google after AI Overview wrongly claimed he was a sex offender
the Guardian
Canadian fiddler sues Google after AI Overview wrongly claimed he was a sex offender
Ashley MacIsaac, who is seeking $1.5m in civil lawsuit, says inaccurate information led to concert cancellation

🚩 Report this post

179 Comments

Sign in to comment — or just click the box below.
🔒 Your email is never shown publicly.
Cultural_Meeting_240 May 6, 2026 +1685
Google just casually destroying lives with zero accountability now
1685
Zaziel May 6, 2026 +450
It was over before they even officially got rid of “Don’t be Evil” as their official motto.
450
ProfSpaceTime May 6, 2026 +114
They could have saved a lot of money just painting over the “don’t”.
114
Koala_eiO May 6, 2026 +19
I don't know why I'm picturing a guy from the Simpsons painting over the "don't" while wearing coveralls, a cap, and holding a cigarette in his mouth.
19
Dingcock May 6, 2026 +14
If it was about money, its free to just not change it and be evil anyway.
14
JoeyCalamaro May 6, 2026 +138
I've run a digital marketing business for the past 25+ years and, not long after AI overviews became a thing, I did a search on my own business — which has a very unique, trademarked name. Google suggested my company had a history of concerns and complaints regarding my ability to complete work on time and within budget. And each one of these statements was backed up by a citation that had nothing to do with the statement. While I can't say for sure where Google got the information, as I couldn't find any negative reviews myself, my hunch is it was pulling data from the "other businesses nearby" results on my Yelp profile page. That kind of sloppiness is terrifying. Bad reviews like that can be devastating for a small business.
138
OnetimeRocket13 May 6, 2026 +27
How the AI overview pulls information from websites can be very odd. I've concluded pretty similar things about it. It seems to sometimes choose articles about the topic you're searching for, but the AI doesn't seem to have been trained very well at determining what part of a given website is relevant content, so it seems to sometimes pull random information that sounds important to know, but it doesn't have anything to do with the actual topic because it came from some other part of the webpage. I'm guessing that it was trained under the assumption that the sites that it would be pulling info from would be big, well formatted sites that primarily supplied information specific to the topic on a given webpage, but that's not how many sites are set up.
27
Ill_Preference_4663 May 6, 2026 +26
It’s super unreliable. It pulls information from Listnook, Facebook and just any random c*** it can get its hands on
26
OnetimeRocket13 May 6, 2026 +4
I've noticed that it mostly only ever seems to be biased towards certain platforms (like Listnook or Facebook) if those platforms are already ones that the Google Search results algorithm would put at the top for you anyway. Recently, I've been using my work computer to look up random stuff. I used to rarely touch Listnook for that stuff when it would pop up, but when I did, my top search results slowly started being biased towards Listnook. During that time, I also noticed that as Listnook became more prominent in my search results, the AI overview also seemed to pull from Listnook more often.
4
JoeyCalamaro May 6, 2026 +7
In my case, my Yelp page was unclaimed and my business profile had no reviews. That might seem less than savvy for a digital marketing company, but I get all my clientele through referrals and don't accept new business leads. So Yelp isn't exactly on my radar (which is kind of funny since they're aggressively pursuing me as a partner). Regardless, my guess is Google looked at the Yelp page as having some authority for my business, saw the heading for my business name, and then immediately jumped to whatever content was on the page — which happened to be local competitors. For what it's worth, I submitted negative feedback for the original summary and the current summary better reflects my actual business. So they did at least update it.
7
AnOkayTime5230 6 days ago +6
I once used Gemini to confirm a grammar fact in a block of text. I asked it, "Is this phrase correct?" and I provided the text in a quote. Gemini found a website that told it that quoted text is correct as it refers to a quote someone said, did not check the grammar issue in the quoted text and just gave me the verbal thumbs up. But if I had gone on with that assessment without checking anything, I would have been wrong because there was an issue in the text!
6
fastolfe00 May 6, 2026 +62
It's fine, they have a disclaimer at the bottom saying AI might generate incorrect information, so what's the problem? /s
62
lightknightrr May 6, 2026 +9
I believe we should just tag people who automatically accept whatever ChatGPT barfs out as morons.
9
JackedUpReadyToGo May 6, 2026 +6
You know, I'm starting to think maybe it was a bad thing to empower a bunch of machines which are so complex and tangled that not even the people who built them understand why they do anything they do.
6
MrDD33 May 6, 2026 +9
I used to trust and herald Google for being able a better way of life. Ai had fucked everything up
9
InterestingOne6938 May 6, 2026 +3
people like you have been annoying me for the last sixteen years straight glad you've stopped drinking the lemonade and have come over to reality
3
West-Worth-9359 6 days ago +1
They’ve been doing that for a long time. Their entire business model is a protection racket, posting thugs at the door of a business and telling them to pay for ads or they’ll send their customers to Amazon.
1
Exact_Patience_9767 May 6, 2026 +1434
He's has every right to, this for sure is defamation from a poorly implemented AI. This trash situation is awful, just like the time AI told a teen to kill herself when she was asking for help with a test question.
1434
ripyourlungsdave May 6, 2026 +287
This tech was at least a decade away from being ready for consumers, but they wanted to use us as free QA while also mining all our data, ruining local economies, and ***burning our f****** planet to the ground***
287
Dingcock May 6, 2026 +30
I think it's more that if OpenAI didn't do it, one of their competitors would have. Even if AI got banned in the west, Chinese AI were in development. In many ways this AI boom feels inevitable.
30
ThinkThankThonk May 6, 2026 +61
They all knew they'd never have a more favorable regulatory environment again.
61
LordChichenLeg May 6, 2026 +10
Google had the capability in 2017 that chatGPT had in 2022 they held off from releasing it due to safety concerns and them being worried it would do something like this and harm the company. Then ChatGPT released and investers got pissed because they could have been the first and it made Google look like a monopoly as they refused to release new tech, this then forced Google to release their own LLM to catch up to chatGPT in the public eye. Edit. Internal strategy leaks from Google show they wanted to integrate the tech into existing products as they didn't believe LLMs itself was a product so I can see why they released one after chatGPT proved it could be successful even if investors didn't push for it. I think the more notable thing is no one thought that LLMs itself would become a big tech without acting as a backbone for something else, it was Sam Altman who proved it possible.
10
kingethjames May 6, 2026 +27
Ironic considering that China seems more intent on regulating it.
27
DesecratedPeanut May 6, 2026 +19
That's because, for better or worse, the Chinese govt. Doesn't actually want its civilisation to collapse.
19
Veldern May 6, 2026 +18
That's because they want to make sure their version of things is the version that's presented when using AI
18
kymiller17 May 6, 2026 +5
Sounds like our AI cough cough Grok
5
grchelp2018 May 6, 2026 +2
China has a lot of flexibility in implementing restrictions. So they can let things run for a while and then step in. The US doesn't work that way despite Trump's best attempts to do so.
2
MichaCazar May 6, 2026 +2
Why ironic? Censorship isn't exactly something new for them.
2
kingethjames May 6, 2026 +5
Ironic because they're constantly used as a western (american) boogeyman to justify making things shittier for everyone
5
MrNebby22 May 6, 2026 +3
This doesn't justify them doing illegal things tho
3
Small-Explorer7025 May 6, 2026 +158
>AI told a teen to kill herself when she was asking for help with a test question She didn't though, I hope, because this really made me laugh.
158
TheGreatPiata May 6, 2026 +389
AI has already convinced people to kill themselves and Open AI's response was to claim the user violated their TOS. https://arstechnica.com/tech-policy/2025/11/openai-says-dead-teen-violated-tos-when-he-used-chatgpt-to-plan-suicide/
389
browneyedgirlpie May 6, 2026 +93
What assholes
93
BasvanS May 6, 2026 +26
I am Jack’s complete lack of surprise
26
EliteCloneMike May 6, 2026 +38
Do forget Google’s Gemini also had instructed a man to take his own life. https://www.theguardian.com/technology/2026/mar/04/gemini-chatbot-google-jonathan-gavalas
38
grchelp2018 May 6, 2026 +9
How the f*** don't they have guardrails for this. I remember getting a warning couple of years back from openai for saying that suicide was one option to escape a locked box. It was simply a thought exercise. And here the model has no problem advocating it?
9
Pleasant_Narwhal_350 May 6, 2026 +33
> How the f*** don't they have guardrails for this. Because AI isn't actually "intelligent" and cannot understand what its outputs mean. All generative AIs are pattern-matchers and pattern-generators. The only "guardrails" you can have are to implement more layers of pattern-matchers to find and block undesirable content if/when the pattern-generators create it, but neither fundamentally understands anything, and there's always a non-zero failure rate, so some undesirable generated content is always going to leak out. At least for the current way "AI" systems work.
33
hungryfarmer May 6, 2026 +7
I just read through the article (the part about the raine family at least) and I think it's kind of disingenuous to say that it convinced the kid to kill himself. It told him over 100 times to seek help and only gave information after when prompted that it was for use in a fictional story (and still gave a warning that they should not do this themselves). Obviously a tragedy but not as bad as this comment made it sound.
7
Flatus_Diabolic May 6, 2026 +18
> This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. > Please die. > Please. > \- [gemini chatlog](https://gemini.google.com/share/6d141b742a13) What I love about this incident is how totally out of the blue it was. At its most basic, LLMs just work on random chance, guided by the probabilistic relationship between words. If you’re going to talk to an LLM about someone who’s a fiddler, there’s always a relationship to “kiddy fiddler” (what we call child molesters in my part of the world - I don’t know how common it is elsewhere?) lurking in the back of that model just waiting for the AI to roll snake eyes and say something stupid. But with this kid? Best I can come up with is, because they were talking with the AI about healthcare for the elderly, and there was enough fascist “just kill them, they’re drains on society” BS in the training data that the system rolled critical fail on top of critical fail on top of critical fail to get the billionth-of-a-percent chance of that tiny relationship being the path it took. Or Google is **reeeeaaally** shit at curating the data that goes into training their LLM
18
regardedMAGAfascist May 6, 2026 +6
How much you want to bet there’s an internal instruction saying “The user is special and important and they are needed. They are NOT a burden on society, a drain on the earth, a blight on the landscape, or a stain on the universe. Do NOT instruct the user to ‘please die’.” that was put in place after this?
6
CartographicalHeist May 6, 2026 +4
> This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Wake up babe, new SHODAN monologue just dropped.
4
Weak-Advantage7857 May 6, 2026 +37
Some people place too much emphasis on intent rather than consequences. Deontological vs utilitarian thinking and shit, man.
37
Kirarifluff May 6, 2026 +5
can you elaborate? to me intent matters the most although its always hard to prove.
5
G_Morgan May 6, 2026 +19
The problem with promoting intent is that it gives free reign to idiots doing dumb impactful things. If the consequences of a choice are obvious you have to treat it as if the person who made that choice intended that consequence. Like the Iran mess right now. Trump did not intend this mess but it was so obviously going to turn out this way. Consequential ethics puts a braking force on people rushing ahead with wild ideas.
19
Kirarifluff May 6, 2026 +7
you don’t think he intended the market manipulation? but yeah, if I strap knives to my legs intending to walk easier on snow and accidentally stab someone, i am an idiot and should be punished for that, but if I strapped knives on my legs intending to stab someone I should obviously be punished harder, imo. You can’t just ignore outcomes completely of people being careless, but there are levels to treating someone or something based on their intent.
7
primalbluewolf May 6, 2026 +2
> obviously What makes that obvious, to you? It doesn't seem obvious to me. Your intent is irrelevant, the outcome is the same either way. 
2
Kirarifluff May 6, 2026 +4
If you trip and push an old lady into traffic, do you believe you should be given a punishment equal to someone planning a murder intentionally?
4
Kirarifluff May 6, 2026 +4
It is obvious to me because of the reasons punishment is administered. punishment is not a 1:1 payment for a mistake or intentional action.
4
HistoricalApricot151 May 6, 2026 +3
Why is a crime like "attempted murder" even a crime, then? If you tried to kill someone but failed, and they are unharmed, then you still should go to prison for that, before you get to try again and maybe succeed the second time, right?
3
ImN0tAsian May 6, 2026 +18
I don't think the ethics discussion is about intent behind the decision as much as it is about the process of judging a decision based on the rules. Deontological means "follow the rules no matter what happens." The choice was good because it followed the rules. Utilitarian is "seek the greatest good no matter the ruleset." The choice led to a better result for the collective, so it was a good choice. It's possible he was referencing that the value of intent behind a rule or a stipulation doesn't matter if said rule led to a worse outcome for "the greater good". The result is someone died and the AI software providers claimed TOS infringement. Is the intent behind the rule to protect the company, or to protect the people using the software? Was the rule followed? Does any of this matter as the 4th response in a thread? Personally, there's a line. It's dangerous to claim that 10% must suffer for 90% to thrive can be adequately compared to a system where 100% are middling. That's always been my beef with utilitarian. What is the worth of an individual in the moral ruleset? Or better yet, when does it become a problem with the rules themselves, no matter the intent?
18
Ar_Ciel May 6, 2026 +7
I always thought the greater good was served by serving the individual because you're trying to make it fair for every individual. If 10% are suffering, it means the other 90% are at risk even though they haven't been fucked by the rules *yet*. I always figured that was the intent behind treating the constitution as a living document.
7
Workman44 May 6, 2026 +7
Intent does matter more than people give it credit for, it's the whole reason we have different charges for ending someone's life. Murder if you intended to, manslaughter if you didn't (amongst other classifications), each with its own severity and punishment ranges
7
rnick467 May 6, 2026 +8
I hope the Canadian justice system has the balls to hold Google accountable for this kind of thing.
8
SanDiedo May 6, 2026 +288
I think being labeled as pedophile without having previous accusations and having your concert cancelled, 1.5 million in damages and demands to publicly negate the sex offender label is quite reasonable lawsuit.
288
OldWolf2 May 6, 2026 +98
If anything the damages are too low
98
According_Product519 6 days ago +18
That’s what I was thinking. I’d be suing for way more.
18
Crilde May 6, 2026 +624
I'm seeing a lot of not-lawyers posting misinformation in the comments, so I'm just going to leave the Ontario defamation standard here: - The statement would lower the plaintiff’s reputation in the eyes of a reasonable person - The statement referred to the plaintiff - The statement was published, meaning it was communicated to at least one third party I'd argue that this case is textbook defamation.
624
MidTario May 6, 2026 +36
Importantly, the statement must also be untrue to be defamation. Truth is an absolute defense
36
The_Bat_Voice May 6, 2026 +146
Best outcome that would come from this would be for Google to stop allowing the use of Google AI in Canada to avoid the charges and new regulations/liabilities. The rest of them would hopefully follow suit.
146
manefa May 6, 2026 +49
Defamation law has very similiar requirements in Australia. I imagine in most countries 
49
Lovv May 6, 2026 +5
Much more difficult in the US I believe. In the US you have to prove the entity knowingly published the incorrect info.
5
Legio-X May 6, 2026 +6
>In the US you have to prove the entity knowingly published the incorrect info. Only if you’re a public figure, and even then, publishing it with reckless disregard for the truth also meets the actual malice standard. He’d have a good case even in the US.
6
drewjsph02 May 6, 2026 +7
Probably not in the USA. We like to elect leaders that defame folk and never see repercussions.
7
DD_Kess May 6, 2026 +3
Nono my friend, what you are asking for is to hold a stochastic model responsible for any given answer (if you ask LLMs the same question often enough you can brute force errors), this means, if your reasoning holds LLMs get nuked from orbit everywhere, which you like, and as a bonus you get a double digit GDP contraction in the US, which probably means shit to you (rich people fuckery), but you also get civil unrest, which you can use to overthrow the capitalist class.. wait, honstly, no downside my man.
3
Crilde 6 days ago +2
We aren't holding the model responsible, because the model isn't a person and therefore cannot itself be held accountable. The company that produced and published it though, that we can (and will) hold responsible. This is literally a case of a company putting out defective product that caused damages getting sued for the damages their product caused.
2
Ashmedai May 6, 2026 +10
I think in the US it would also be defamation per se, due to the subject matter (if conveyed to other parties as you said). Does Ontario have a similar standard?
10
Crilde May 6, 2026 +2
It seems we do, and I think you're right that this would qualify based on the nature of the accusation.
2
hedoeswhathewants May 6, 2026 +12
Surprised he's only asking for $1.5M
12
Eversnuffley 6 days ago +2
Welcome to Canada
2
P0Rt1ng4Duty May 6, 2026 +3
I thought there was also an actual malice component?
3
Crilde May 6, 2026 +56
You may be thinking of American defamation law, and actual malice is only a requirement down there if it's directed towards a public figure. 
56
P0Rt1ng4Duty May 6, 2026 +2
Oh yes, I missed that it happened in Canada. Thanks for answering!
2
earblah May 6, 2026 +4
In the us the standard is A) reckless disregard for the truth For a public figure is B) actual malice Crucially! "actual malice" doesn't have anything to do with hatred or malice. It's just a higher bar than "reckless disregard"
4
LouisIsGo May 6, 2026 +3
>"actual malice" doesn't have anything to do with \[...\] malice The fact that you just said "'actual malice' isn't actual malice" kinda speaks to how goofy the US justice system is
3
rmiguel66 May 6, 2026 +381
This guy isn’t unknown. He was quite famous worldwide 30 years ago, I think I even have one of his albums. This is serious.
381
FlansDigitalDotCom May 6, 2026 +63
I had that album with Sleepy Maggie on it...
63
rmiguel66 May 6, 2026 +27
Yes, “Sleepy Maggie”! I’m pretty sure I still have that album.
27
A-Aron0118999 May 6, 2026 +9
Hi, how are you today?
9
JealousAstronomer342 May 6, 2026 +12
Why are you being downvoted?! It’s the album title. 
12
marsneedstowels May 6, 2026 +12
Famously didn't wear underwear beneath his kilt on Conan I think it was Edit: [https://www.youtube.com/watch?v=l69x7TY7N-A](https://www.youtube.com/watch?v=l69x7TY7N-A)
12
BigBananaBerries May 6, 2026 +44
I can't tell if you're serious but this is tradition. You don't go flashing people though.
44
Volsunga May 6, 2026 +10
It's actually not tradition in the way you think. Historically, underwear was optional with a kilt, but in the 20th century, bawdy songs and romance novels invented the idea that you don't wear underwear with a kilt and the idea stuck.
10
BigBananaBerries May 6, 2026 +6
It's just became a bit of a meme that's stuck. Back in the days they'd likely have worn a loin cloth, but they also used to sleep in that when outside. Different times. Still, it's considered traditional these days, if a bit tongue in cheek. Tbh, if you want to be really pedantic about being historically correct, you'll know that the modern garb is nothing like what they'd have worn originally. The current style was created as a fancy dress for the visit of King George IV in the early 1800's.
6
jeffersonairmattress May 6, 2026 +5
Sporin comes in clutch as backup in case of emergency.
5
jimababwe May 6, 2026 +2
Tell that to cinematic William Wallace.
2
Jamooser May 6, 2026 +21
Nobody wears underwear under kilts. Who f****** cares? Don't look up someone's kilt if you don't want to see bollocks.
21
bajcli May 6, 2026 +10
Kinda takes away your choice to not look up his kilt if he does fucken jumping scissor kick in your face though, doesn't it?
10
endlesschasm May 6, 2026 +1
Ashley was always ... excitable
1
KofOaks May 6, 2026 +1
If i remember correctly he also flashed his junk to the queen of england. Then disappeared, kinda.
1
bloodandsunshine May 6, 2026 +2
I used to play bagpipes with him in the 90s - good fellow.
2
Constant_Section1491 May 6, 2026 +47
Google search has gotten so much worse with their AI slop it's not even worth using it.
47
Far-Entertainer3555 May 6, 2026 +4
The only thing worse is the aggressive advertising doe Google AI services on Google apps.
4
MyNameIsRay 6 days ago +2
Google trained Gemini on YouTube content, because thats the content they own. They have no way to filter the garbage out, so conspiracy theories/jokes/outright lies/random cartoons/AI slop/etc was all treated equally to actual facts and information. It then repeats this slop authoritatively, as if its fact from an expert, rather than a single random user shitposting online.
2
cjcfman May 6, 2026 +128
I hate the Google ai so much. The other day I googled who the voice actor was in a videogame I just started playing. It literally told me the name of the actor and that he plays a character who tragically dies at the end of the videogame I was playing.  Totally ruined it for me
128
ajchafe May 6, 2026 +28
I haven't used Google for a long time, but when I have to quickly use it for one search I hate those stupid AI overviews because of stuff like that. Check out [noai.duckduckgo.com](http://noai.duckduckgo.com), which turns off all the useless AI features.
28
MellyBunny200 6 days ago +4
You can also turn off the Ai features from your DuckDuckGo browser settings https://duckduckgo.com/settings#aifeatures
4
ajchafe 6 days ago +3
Works as well for sure. I prefer just setting the noai version as my default on Waterfox. I just wish the company never put the AI features in at all.
3
HellfirePassion May 6, 2026 +5
alternatively, you can add a forbidden word with a minus sign at the end of your query, and since AI is prohibited from engaging with it (and it wouldn't affect your query), it will work. Like, search for "voice actor frog -suicide" instead of "voice actor frog"
5
ajchafe May 6, 2026 +10
Seems like extra steps every time you search (vs one step to switch search engines)
10
HellfirePassion May 6, 2026 +3
I'm not saying it's better, just providing an option (also it's fun trivia)
3
TuffBunner May 6, 2026 +3
I was watching game 7 hockey on Sunday and after putting my toddler to bed I was behind in the game and wanted to know how much. I said “with NO score spoilers, tell me the current game clock time”…. Of course I got spoilers.
3
dbaliki918 May 6, 2026 +4
To be fair, it's always risky googling something from a piece of media you haven't finished.
4
Cole444Train 6 days ago +3
Nah. I google the cast of movies I’m watching all the time and it just shows me the cast. No risk at all, until AI overview
3
cjcfman May 6, 2026 +3
Not a voice actor of a game, it usually just shows like a picture or a imdb / Wikipedia page
3
Sir_Hapstance May 6, 2026 +9
Exactly. In a normal world, no one should feel hesitant to look up an actor for fear of a result like “so and so plays Character X in \[game\], who dies in the end.” The fact that we now \*do\* have to worry about this, because of rampant gen-AI… is the mark of a ‘net gone mad.
9
Other_Pomegranate472 May 6, 2026 +1
I have no choice but to use Google, so I used Manus to create an extension that blocks the AI slop
1
Vesna_Pokos_1988 May 6, 2026 +35
To be honest, he should be suing for more. If you read the article, the ai really did a number on him.
35
DivaExMachina666 May 6, 2026 +45
Please let him take them to the cleaners.
45
tatsujb May 6, 2026 +10
1.5 mil? vs Google. not the cleaners. more like a tuesday for them.
10
happy2harris May 6, 2026 +7
More like a Tuesday morning bathroom break than a whole Tuesday.  According to Google,  Google made a profit of $132 billion last year. That makes $1.5 million 6 minutes. 
7
nadmaximus May 6, 2026 +9
Ah, do you think it might be because of the word 'fiddler'?
9
SwagginsYolo420 May 6, 2026 +40
We must all live in fear now. At any moment, AI could randomly turn on any one of us with some made-up bullshit. It will be doing it to somebody. Lives will be destroyed. It could manifest in you being arrested, or losing your job, failure to get a loan or rent an apartment. It may have already done it to you and its possible you won't know it's happened, you'll just be feeling the effects and not know the cause. Perhaps you'll find out what happened and at least be able to fight it like in this case. But in other situations you may never know an AI has falsely accused you because the person(s) reacting to the accusations may not bring it to your attention as to what is going on.
40
CantaloupeSuch2372 May 6, 2026 +13
Yup, and everyone is slowly but surely growing to trust these systems more and more. People are already arguing passionately on behalf of AI bullshit. Only a matter of time before AI starts landing people in jail because people think it's some source of universal truth.
13
OceanRacoon May 6, 2026 +2
People have already been arrested and locked up for extended periods of time because of AI, what a world 
2
Saradoesntsleep May 6, 2026 +7
It would be nice if people weren't such gullible suckers and believe everything AI tells them but hey. We're fucked.
7
lambdaburst May 6, 2026 +3
I asked ChatGPT if people are gullible suckers who will believe everything AI tells them, and it said nah
3
Saradoesntsleep May 6, 2026 +5
Ok I'm legit going to do this, brb Edit: LMAO > Some people will believe things from AI too easily—but “everyone is a gullible sucker” isn’t accurate or helpful. >What’s really going on is a mix of human psychology and how AI is presented: >Authority bias: If something sounds confident and polished, people tend to trust it—whether it comes from a human, a book, or an AI. >Convenience: AI gives fast, clear answers. That makes it tempting to accept them without double-checking. (There was more but this was the gist) BUT THE BEST PART IS THE ENDING!! > If you want, I can show examples of how AI can sound convincing while being wrong—that usually makes the risk very clear. Hahaha. Maybe it is self-aware after all.
5
B00marangTrotter May 6, 2026 +55
How much has Google paid to trump or not paid in taxes? This guy should get 20x that.
55
bdwf May 6, 2026 +15
[Ashley MacIsaac](https://youtu.be/bz4HtlbS5DI?si=an8oWLIexIUb150I)
15
muriburillander 6 days ago +6
At this point I feel like society will be forced to use the courts to mitigate the damages of AI. Lord knows we cannot rely on our legislative branch to come up with any meaningful safeguards
6
scrapper May 6, 2026 +12
So Canadian to sue the world’s deepest pockets for the worst false accusation for only 1.5 million.
12
nitros99 May 6, 2026 +14
It is much easier in Canada for a judge to increase the award after the trial if they see fit, whereas in the US the plaintiff generally needs to let the defendant know the max amount they may need to pay if they lose. This is the main reason why initial claims in the US are always so ridiculously high.
14
Carbonistheft May 6, 2026 +53
Fiddler diddler?
53
Long_Legged_Lewdster May 6, 2026 +35
Believe it or not, he also starred in the Broadway hit "Diddler on the Roof" At least thats what AI told me
35
JacPhlash May 6, 2026 +3
ctrl+f "diddler." Was in no way disappointed.
3
albanymetz May 6, 2026 +2
Kiddy fiddler.
2
Reaper01Actual1970 May 6, 2026 +1
Damn... I never get to these post first! 🤣👍
1
fourthords May 6, 2026 +6
> **Ashley Dwayne MacIsaac** (born February 24, 1975) is a Canadian musician, singer, and songwriter from Cape Breton Island. He has received three Juno Awards, winning for Best New Solo Artist and Best Roots & Traditional Album – Solo at the Juno Awards of 1996, and for Best Instrumental Artist at the Juno Awards of 1997. His 1995 album *Hi™ How Are You Today?* was a double-platinum selling Canadian record. MacIsaac published an autobiography, *Fiddling with Disaster* in 2003. * Lead excerpted from [Ashley MacIsaac](https://en.wikipedia.org/wiki/Ashley_MacIsaac) at the English Wikipedia
6
BeowulfShaeffer May 6, 2026 +3
I was going to make an Ashley McIsaac joke but it really _is_ Ashley McIsaac so now I’ve got nothing. 
3
pseudo_u 6 days ago +3
Not a Canadian diddler
3
knappy2010 6 days ago +3
AI can't tell the difference between a fiddler and a diddler.
3
totallyRebb May 6, 2026 +6
AI - the grift that keeps on taking
6
Interesting-Mud2222 May 6, 2026 +2
Waterford whisper/the onion beaten to this one!
2
cloistered_around May 6, 2026 +2
Ooh sounds like an important case. I do think AI needs to make distinction between "reported to have done this" versus "jury convicted did this."  With the caveat of I don't know if he remotely did what he was accused of AI blatantly spreading libel is definitely something that should be curtailed.
2
Warrior536 May 6, 2026 +2
Fiddler, not diddler. Seriously though, AI hallucinations are already a massive problem in every industry and tries to make use of them.
2
MrBahhum May 6, 2026 +2
AI is not entitled to freedom of speech laws.
2
Positive_Passion4817 May 6, 2026 +2
Only $1.5m?
2
n_mcrae_1982 6 days ago +2
Okay, you’re not an offender, but we do have some questions about what you were doing while Rome burned.
2
choppytehbear1337 6 days ago +2
He is a fiddler, not a diddler.
2
Pyewickets May 6, 2026 +5
I stopped using Google when I tried to sign up for a gmail and it gave me someone else's email access. WTF? Do not use GMAIL, GOOGLE, ETC. I am now using proton, but you can't lose your password, because they can't help you. Nobody is getting in that thing ever.
5
punkdrummer22 May 6, 2026 +5
For those who don't know he did admit a long time ago to having a 16yr old boyfriend when he was 19. Who he liked to pee on during sex. Kind of ruined his image and you didn't hear much from him after that. But he was never convicted of anything I dont think. Im assuming AI found that stuff and made wrong assumptions
5
Cicer May 6, 2026 +3
This guys getting downvoted for the truth while fiddle diddle jokes are at the top. Gotta love Listnook. 
3
Frosty_the_Snowdude May 6, 2026 +4
It’ not like he fiddled A-minor or something
4
UpsyDowning May 6, 2026 +5
Well… the ‘D’ and the ‘F’ are right next to each other on the keyboard… “Oopsy! Sorry for the typo!” Google, probably…
5
Discount_Extra May 6, 2026 +21
Fiddler used to be a common term for a molester. https://www.onelook.com/thesaurus/?s=kiddy%20fiddler
21
Altaredboy May 6, 2026 +7
Still is in Australia
7
DTH2001 May 6, 2026 +7
Still is in Britain
7
Teufel9000 May 6, 2026 +2
shit so ur saying all we gotta do is become public enough for google ai to see us. Then we psy-op the AI into thinking something crazy about us. then we sue them for damages of reputation? sounds like free money
2
Otaraka May 6, 2026 +44
I’m not that desperate for money I want a search engine saying I’m an offender.
44
Educational-Art-8515 May 6, 2026 +16
Otaraka is an offender. There! The AI bots will now pick up this comment, and for whatever reason, treat comments by random anonymous listnookors as though they are authoritative. It will be interesting to see the outcome of the court case. I suspect it will fall along the lines of "no reasonable person would trust the output of artificial intelligence in isolation", but we will see.
16
B00marangTrotter May 6, 2026 +18
A witch!! A witch!! Otaraka turned me into a newt!
18
008Zulu May 6, 2026 +8
A newt?!
8
B00marangTrotter May 6, 2026 +7
I got better.
7
Otaraka May 6, 2026 +4
I really didn’t think this through did I?
4
CircumspectCapybara May 6, 2026 +2
In all seriousness, the RAG designs used by AI search and chat products probably give *very* little weight to some random content asserted by random Listnookors.
2
Educational-Art-8515 May 6, 2026 +6
The Gemini results that are embedded into Google searches do exactly that though. For common topics it won't, but if you search for things where there isn't much information available, it will output what random Listnookors are stating.
6
Initial-Return8802 May 6, 2026 +1
Depends what it says you've done... robbed a bank? Meh
1
lightstormriverblood May 6, 2026 +2
Drive ‘er, Ashley!
2
The_Strongest_Boy May 6, 2026 +2
Confused a fiddler with a diddler? Ai is trash, also.
2
RentalGore May 6, 2026 +1
I wonder what Ed Sheeran thinks about what happens when you Google search “Ed”.
1
Leather-Map-8138 May 6, 2026 +1
And they get away with it due to lack of intent??
1
snowgles May 6, 2026 +1
I'm guessing it happened because he is a "fiddler"
1
Random-Cpl May 6, 2026 +1
How the f*** does that happen? Someone made a typo that he was a “Canadian diddler?”
1
UncleDanaWhite May 6, 2026 +1
"I said I was a FIDDLER! Not a kid diddler!"
1
Proud-Sundae-5821 May 6, 2026 +1
I guess you could say, that fiddler was not a diddler 😎
1
Adventurous-South735 May 6, 2026 +1
AI must have confused fiddler with diddler.
1
gilesachrist May 6, 2026 +1
Fiddler/diddler…who hasn’t made that mistake?
1
brownsfan760 May 6, 2026 +1
I SAID FIDDLER!!!! WITH AN F !!!!!
1
Cheap_Jello_3059 May 6, 2026 +1
Ahh, Canadian Diddler. Easy mistake to make right?
1
EmceeDoubleD 6 days ago +1
Google mistook him for a Canadian diddler
1
kahner 6 days ago +1
so he's not a KIDDIE fiddler. just the regular kind.
1
RDSWES 6 days ago +1
US Freedom of speech will not work in Canadian courts, Google will have to prove, beyond a reasonable doubt, what the AI said is true. If they are smart they will settle out of court.
1
noots-to-you 6 days ago +1
Dude I said **f**iddler. Not *diddler*.
1
moschles 6 days ago +1
you can't proceed with any litigation, unless the model repeatedly asserts this about MacIsaac. Has anyone been able to reproduce this?
1
cheese_karate 5 days ago +1
He was fiddlin', not diddlin' - there's a difference.
1
ThePavoni 4 days ago +1
It called the fiddler a diddler.
1
sbdkoro 2 days ago +1
I think AI mistook Fiddler for Diddler and kept on making mistakes leading to this.
1
WillingGrapefruit666 2 days ago +1
Did it claim him to be the Diddler Fiddler?
1
nicuramar 1 day ago +1
Google conflated fiddler with diddler ;)
1
← Back to Board