News & Current Events Apr 11, 2026 at 1:37 PM

Banks Are Warned About Anthropic’s New, Powerful A.I. Technology

Posted by Loose_General4018




60 Comments

GlacialCycles 2 days ago +70
Nah. The real cyber threats to banks will come in the form of vibe coded shitty insecure code.
70
Starter-for-Ten 1 day ago +16
Or a French-style revolution
16
forksofpower 1 day ago +6
why not both?
6
thatsme55ed 1 day ago +10
The backend infrastructure that banks use for actual sensitive information and transactions is regulated and regularly tested for insurance/liability purposes. The stuff Mythos is supposedly capable of is finding weaknesses even in the supposedly secure, well-tested, and well-designed systems that have been in use (and under testing) for decades.
10
LongLongMan_TM 1 day ago +4
The code in bank systems is not the same as your friend Greg's vibe-coded new multi-million dollar app. Believe it or not, real software engineers can harness AI without producing trash. But of course, listnook doesn't wanna hear this.
4
Cornc0blin 1 day ago +1
Maybe it had a tough upbringing 
1
_juan_carlos_ 2 days ago +199
all this fear-inducing marketing is getting old already.
199
0x476c6f776965 2 days ago +75
Lol Anthropic is so good at that BS. Like their new Project Mythos, “super strong cybersecurity AI that can cause zero-day attacks on all humanity’s digital infrastructure, which is why we can’t release it 😱” It’s so dumb, yet investors love it.
75
neuralbeans 2 days ago +41
Never forget that Elon Musk was the first to do it with GPT-2, saying that they would not release it because it could generate perfect fake news. Turns out they just didn't want to give it out for free.
41
demonwing 2 days ago +15
Isn't AI-generated misinformation one of the biggest issues on the internet right now? I wouldn't call that prediction of danger wrong at all.
15
neuralbeans 2 days ago +4
Sure, but if they were really concerned then they wouldn't have just released it a bit later as closed source. They were just creating hype for when they commercialised it later.
4
Colon 1 day ago +3
this is completely discounting how the advance of technology marches: if you don't do it, someone else will, and they'll make all the money you would have made. icky? yeah sure. but literally: the tech is there. others know, or soon will know, of this tech. it's been this way since the first sharp flint weapon was made with opposable thumbs. it remains what fuels international nuclear-proliferation geopolitics. and now the AI 2026 reality: "use it, lose it, or become commandeered by it in someone else's hands."
3
PMagicUK 2 days ago +3
Humans just make shit up anyway so hardly as big of an issue.
3
pablogott 1 day ago +1
Was Elon Musk saying that, or was he just an investor at the time? https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction
1
EmergencyHorror4792 1 day ago +7
I wouldn't write it off as BS just yet. Supposedly, through Project Glasswing, which is basically a cybersecurity op, thousands of vulnerabilities were found across many operating systems and pieces of software. It's not just information from Anthropic either; you can find third-party publications from researchers with access. And it hasn't been released because less than 1% of the things found have been patched. It would be chaos if they just released it to the public and said stuff it, why not.
7
_Happy_Camper 1 day ago +7
Yes, note the wording - vulnerabilities, not actionable exploits. There’s of course the kernel of truth, used to create the lie.
7
EmergencyHorror4792 1 day ago +3
Ah, very true. That may just be my comprehension of it and not what they actually found. I guess we'll have to see when more info is released. I do hope you're right, though, and it's just hype.
3
_Happy_Camper 1 day ago +1
If it’s that hyper capable, where’s the evidence of its abilities beyond previous models?
1
sceadwian 1 day ago +3
The next six months are going to be rough as this sorts itself out. I think we're going to get chaos anyways :)
3
InevitableAvalanche 2 days ago +19
Listnook's bias against AI is fine. There is a lot of bad that comes with AI. But Anthropic models are ridiculously good, and pre-Mythos models were already being applied to vulnerability research and showing usefulness. I can believe that the latest iteration has improved on that. OpenAI's dude is a clown though.
19
LostTheElectrons 1 day ago +10
Both things can be true. This model is likely very good, but it will also be extremely expensive to run. Much better PR to say it's too risky to release than to admit it's too expensive.
10
theDarkAngle 1 day ago +1
their models are just whatever. Their *tooling* is insane.
1
MenogCreative 1 day ago +1
are they good? I've been using Claude for a while and it fails at nearly everything. I find it useful about 5% of the time, and I think that's accidental luck when the model gets something right.
1
skeyer2 2 days ago +3
is there a bad side to Claude I'm unaware of? I only ever heard about it due to Trump earlier this year. I'm using it, and preferring it to ChatGPT, frankly.
3
theDarkAngle 1 day ago +2
Also, tech bros really don't like the whole banking system. Need I point out the obvious hazard of: >we made a model that threatens not only your entire industry but the entire world's stability. You should let that model audit and scan all your security infrastructure right meow.
2
sceadwian 1 day ago +1
Claude has apparently found several hundred vulnerabilities in the last few months. That's not dumb, that's reality.
1
Darkfight 8 hr ago +1
And the part that's driving me absolutely insane is they never mention how much compute they actually have to burn through to find any of these. Like yes, it's probably pretty powerful and can find a lot of freaky stuff we don't know about yet. But at what cost? It's not like they're releasing a model someone could actually run locally. So while they have full control over the compute being used, surely it should be feasible to prevent actual big exploits?
1
happyscrappy 1 day ago +1
This article is about that same project. Pretty amazing Anthropic could somehow make a positive out of "we've got software that will exploit your own systems without you explicitly telling it to do so". That is the behavior of faulty software. Try advertising that instead.
1
Asleep_Document9811 2 days ago
Seems all they've made is a robotic PR agency.
0
_juan_carlos_ 2 days ago
what this is telling me is that Anthropic is now doubting its future. New models can be run locally, making the whole subscription model outdated.
0
the_mighty_peacock 2 days ago +1
Won't be long before open source models become a thing, and then it will be a shitshow.
1
Initial-Return8802 2 days ago +4
Open source models are a thing, have you been living under a rock? They're not as good as the frontier models... but the gap is closing
4
the_mighty_peacock 2 days ago +5
> but the gap is closing

this is kind of what I mean
5
LostTheElectrons 1 day ago +1
I would disagree. What this shows is that large compute is still required for the best performance, meaning that local run models are still limited. One reason "local" models are keeping up is because they are being distilled by Claude models. By keeping their best model hidden, Anthropic prevents that from happening.
1
Colon 1 day ago -1
do you think glorified wikigoogle, vibecoding, and customized h***** p*** is all anyone needs..?
-1
Plipooo 1 day ago +1
Yes. ?
1
CoderJoe1 2 days ago +21
Archived link [https://archive.is/sErRE](https://archive.is/sErRE)
21
DukeandKate 2 days ago +8
Is this a real threat or Trump getting back at Anthropic?
8
SneeKeeFahk 2 days ago +4
If you want to lose sleep just look into the "security" of modern day banking. 
4
sarabjeet_singh 2 days ago +2
Could you elaborate ? Educate the ignorant please
2
SneeKeeFahk 2 days ago +5
When you use the numbers on the bottom of a cheque - account, transit, branch numbers - there is absolutely zero verification/authorization/confirmation before allowing someone to withdraw money from that account. That's just one example.
5
scuppered_polaris 1 day ago +2
Haven't seen a cheque in years
2
SneeKeeFahk 1 day ago +5
You've used those numbers if you've ever set up direct deposit for pay or auto payments.
5
cyberianscribe 2 days ago +5
Extraordinary valuations require extraordinary claims.
5
pessimistkonsulenten 2 days ago +3
If it is that powerful, it should be regulated - should it not? We regulate most dangerous stuff to some degree: weapons, explosives, drugs, poisons, etc. Why should AI be different if it is this bad?
3
kaminop 2 days ago +3
*”Don’t worry! We got this!”* - Tech-Bros
3
InterstellarReddit 2 days ago +17
No they’re not. They’re already using it against us to further enhance their profit margins. Mythos is available already to JPMorgan Chase. The same bank that paid out millions to avoid going to discovery in a lawsuit alleging that they assisted Jeffrey Epstein with human trafficking. A non-guilty party would not have paid $70+ million to dismiss a lawsuit and settle before discovery. They already knew what was coming.
17
nakedlettuce52 2 days ago +7
Who would have thought AI technology could go off the rails?
7
_Soup_R_Man_ 1 day ago +3
How about these corporations spend money on securing our data?? The slap-on-the-wrist fines cost them less than upping their security against data breaches. Obviously the fines need to get serious, with payments to the citizens impacted. That might make them actually care. Instead, we'll twist the narrative!!! AI needs regulations!!! What a big fat joke.
3
techniforus 1 day ago +2
This doesn't read like a threat, rather as marketing. I wonder who in the administration financially benefits from this particular LLM company.
2
designbydave 2 days ago
LLMs are not AI
0
sf-keto 1 day ago +2
While the tech industry thinks about it like this: https://cdn.prod.website-files.com/6605b12132f6a8b5d23896d2/67a0f6b7a2f7820c979fa661_AD_4nXeBUuYMj-DUkVgEbKNCN4MoKnerxJh8S0ql3HhbCMSGgab4rvjD_bvsMw3DE682c9jrG6Wj1ESvonNCv2u5LjQkQHRX1H06dP-vk8R5J6cu8Fg0gOnCq819RjceoKXtd2QmrSLq.png …It’s true that LLMs are not AI as the public conceives it & as Scam Altman once promised. Claude, ChatGPT, or Gemini is certainly far from the Star Trek Discovery computer Zora, or TNG’s Commander Data.
2
Red_Ozarka 1 day ago +2
AI is actually a very broad term. Even simple chess playing algorithms are considered AI.
2
Sea-Horror-5353 2 days ago -3
"Let them fight."
-3
oxfordcommaordeath 2 days ago +10
But it’s *our* money and financial stability they’re risking.
10
StupidScaredSquirrel 2 days ago +2
Pawn is happy game is starting because it might hurt king and queen
2
Torodong 1 day ago
Where's it going to run, the data centres that aren't being built or the satellites that aren't being launched? This pathetic attempt to hype shitware must have been created by "AI". No human could be that stupid without being elected to government.
0
VinylJones 1 day ago
This is the war we need. LLM dorks vs. Finance dorks, give humanity that Thunderdome and watch us world peace this whole f****** planet while they set each other on fire for our pleasure.
0
SpecialKindsSadness 1 day ago -1
Why Is Every Word A Proper Noun
-1
MuTron1 1 day ago +4
>According to most style guides, nouns, pronouns, verbs, adjectives, and adverbs are capitalized in titles of books, articles, songs, and beyond.
4