This is huge. The judge basically said the government can't just slap a national security risk label on a company without solid evidence. Big win for due process.
20
GlobalistCabal · Mar 27, 2026
+5
Our democracy is literally on judicial life support. Thank the Bald Eagles for principled judges holding back the flood of fascism.
5
wiredmagazine · Mar 26, 2026
+4
Anthropic won a preliminary injunction barring the US Department of Defense from labeling it [a supply-chain risk](https://www.wired.com/story/anthropic-sues-department-of-defense-over-supply-chain-risk-designation/), potentially clearing the way for customers to resume working with the company. The ruling on Thursday by Rita Lin, a federal district judge in San Francisco, is a symbolic setback for the Pentagon and a significant boost for the generative AI company as it tries to preserve its [business](https://www.wired.com/story/anthropic-claims-business-is-in-peril-due-to-supply-chain-risk-designation/) and reputation.
“Defendants’ designation of Anthropic as a ‘supply chain risk’ is likely both contrary to law and arbitrary and capricious,” Lin [wrote](https://storage.courtlistener.com/recap/gov.uscourts.cand.465515/gov.uscourts.cand.465515.134.0.pdf) in justifying the temporary relief. “The Department of War provides no legitimate basis to infer from Anthropic’s forthright insistence on usage restrictions that it might become a saboteur.”
The Department of Defense, which calls itself the Department of War, has relied on Anthropic’s Claude AI tools for writing sensitive documents and analyzing classified data over the past couple of years. But this month, it began pulling the plug on Claude after determining that Anthropic [could not be trusted](https://www.wired.com/story/department-of-defense-responds-to-anthropic-lawsuit/).
The administration ultimately issued several directives, including designating the company a supply-chain risk, which together have slowly halted Claude usage across the federal government and hurt Anthropic's sales and public reputation. The company filed two lawsuits challenging the sanctions as unconstitutional. In a hearing on Tuesday, Lin [said the government](https://www.wired.com/story/pentagons-attempt-to-cripple-anthropic-is-troublesome-judge-says/) had appeared to illegally “cripple” and “punish” Anthropic.
Lin’s ruling on Thursday “restores the status quo” to February 27, before the directives were issued.
Read the full story here: [https://www.wired.com/story/anthropic-supply-chain-risk-designation-injunction/](https://www.wired.com/story/anthropic-supply-chain-risk-designation-injunction/)
4
Actual__Wizard · Mar 26, 2026
+5
Good, it wasn't true. I don't like the company, but that part isn't true.
5
Microtom_ · Mar 26, 2026
+1
Bro, wtf, why wouldn't you like this company? It's the only company developing AI that gives a damn about morality, and developing AI is the most important thing. I don't f****** want to spend my life working, we absolutely need to automate labor in general. If there's a single company you should like, it's this one.
1
blazesquall · Mar 27, 2026
+6
> It's the only company developing AI that gives a damn about morality,
In what way?
> I don't f****** want to spend my life working
Spoiler..
Automation = Massive unemployment and increased wealth disparity.
Without a radical shift in economic policy, automating all labor will just as easily lead to a crisis. Buckle up.
6
Microtom_ · Mar 27, 2026
+2
Sure, but the lack of fair wealth distribution means that we should demand fair wealth distribution, not stop automation.
2
ugh_this_sucks__ · Mar 27, 2026
+2
You think the government that doesn’t even want to give you healthcare is going to wake up one day and decide to redistribute wealth?
The US has literally dropped acid on little babies in Vietnam to stave off any inkling of wealth distribution.
2
blazesquall · Mar 27, 2026
+2
Shouldn't we demand that.. now? Ahead of time?
2
Microtom_ · Mar 27, 2026
+1
I want to say that there's real injustice, and even criminality, behind people not receiving their fair share. The time is always right to make demands. There's no reason to pause anything, however.
People aren't the brightest, though. They vaguely feel that something isn't right, but can't say clearly what the problems are. I'm certain that AI will help in that matter.
1
blazesquall · Mar 27, 2026
+1
> I'm certain that AI will help in that matter.
In an accelerationist manner, yes.
1
Unlucky-Bunch-7389 · Mar 27, 2026
+2
People just don’t like AI - it’s annoying
If they lived during the Industrial Revolution they would have complained every day about the horrible companies making automation in textiles
2
Actual__Wizard · Mar 26, 2026
>Bro, wtf, why wouldn't you like this company?
A personal difference of opinion about their "marketing strategy." I'm fully aware of a technique that was even taught by the CIA, called "competitive analysis" (aka they copycat their competitors' marketing tactics), and that's not how to run a company... That's spy games stuff... That's Fox News stuff, not AI company stuff... These big companies are "supposed to do their own thing so they stand out."
>I don't f****** want to spend my life working, we absolutely need to automate labor in general.
Me neither, that's why I was a spammer back around the year 2000. I spammed out so many d*** pill emails that it was absurd. I mean I was getting serious legal threats every other day. It was hard work to send that much email spam dude... And now I'm starting an AI company, so I have to send like 1000x that... Holy cow dude... That's like Mark Zuckerberg levels of junk email dude.
0
ugh_this_sucks__ · Mar 27, 2026
+1
Bro, wtf, why wouldn’t you like BabyCrusher Inc? It’s the only company that gives a damn about not crushing adults, and only crushing babies.
1
this_my_sportsreddit · Mar 27, 2026
+2
ok supreme court, you know what to do.
which is whatever the f*** trump tells you, apparently.
2
Suspicious_Peak_1173 · Mar 27, 2026
+1
Thank the Lord.
1
AMCorBUST2021 · Mar 27, 2026
+1
Meanwhile, the Iran thing seems like it was planned by ChatGPT.
Taking over Iran sir is a great idea. And that’s rare.