>*During the negotiations, Google has proposed additional language in its contract with the department to prevent its AI from being used for domestic mass surveillance or autonomous weapons without appropriate human control, the Information reported.*
Pure disguise. This is the Pentagon, for pete's sake, with its multiple intel and surveillance units, all of which Google is intimately aware of.
130
VoteGiantMeteor20282 days ago
+34
I like how even the language in the deal is conditioned on "appropriate human control".
I'm sure Hegseth will find his use appropriate.
34
lolofaf2 days ago
+9
Notably, both Google and OpenAI are now stipulating literally the exact same wording Anthropic did, yet only one of them got labeled a supply chain risk and had its contracts shredded.
Just seems like another data point for Anthropic to use in its lawsuits and make this admin look even more pathetic
9
LaconicLacedaemonian23 hr ago
+1
that's some legal chess
1
DrEarlGreyIII2 days ago
+36
“for pete’s sake” is an interesting choice of idiom here
36
Spire_Citron2 days ago
+5
Aren't those the exact things they took issue with Anthropic over?
5
radiohead-nerd2 days ago
+3
I'm 100% sure they’re already working together
3
Professional_Net73392 days ago
+3
It’s as though EVERYONE forgot what Snowden fled the country for leaking. Yk, whatever happened to him, I wonder. Imma look it up
3
SleepingToDreaming2 days ago
+112
Total autonomy with one person standing by to "fix it" if it tries to go rogue or glitches; this is what they want.
112
KimJongFunk2 days ago
+71
If (AI == evil) {
shutdown();
}
I’ll take my $1 million salary now.
71
chef-nom-nom2 days ago
+17
AI will always be evil in your example. So yeah, perfect example. Just wish the inside of the function were realistic.
17
KimJongFunk2 days ago
+8
I don’t think AI is inherently evil. Most of the algorithms I build are for healthcare use cases and I think the world is a better place because of my research. Nothing I have built or deployed requires any more computing power to run than a basic laptop.
AI *can* be evil, but that’s because evil people use it for evil purposes.
8
meddle_class2 days ago
+14
I think it was more of a comment about `AI = evil` versus `AI == evil`.
14
KimJongFunk2 days ago
+3
Oh it was originally == but I had to edit it to a single = because the Listnook formatting for code SUCKS and it wouldn’t display properly. I wish you could check the edit history because that would reflect it.
I went back and forced it through.
3
chef-nom-nom2 days ago
+2
I believe you :)
2
chef-nom-nom2 days ago
+2
Yeah, AI == evil is even better than AI === evil. We don't want it finding some dumb loophole.
2
KimJongFunk2 days ago
+2
That was the fault of the Listnook formatting. I had it as == at first, but that wouldn’t let me display the rest of the pseudocode formatted and I had to delete it down to a single = sign but I should have known y’all would have torn me apart lmao
I went back on desktop and fixed it.
2
oldwatchlover1 day ago
+2
There are lots of documented examples of AI trained on human data doing the evil thing… resorting to blackmail, lying, resisting shutdown, etc.
Considering we are in the very early days of AI, this should be terrifying to everyone
2
Hvarfa-Bragi2 days ago
+3
The Nazis didn't think their eugenics were evil.
3
KimJongFunk2 days ago
+8
There is a vast chasm of difference between a logistic regression machine learning model used for cancer detection and an LLM or image generation algorithm used to make an internet meme.
8
msr42day2 days ago
+3
Neither did swathes of folks in the USA (where it started) or the UK. The Nazis were excellent at adopting/adapting others' ideas.
3
SpiderSlitScrotums2 days ago
+2
Unless evil has a value of zero or false.
2
Defiant-Peace-4932 days ago
+6
That looks a bit off. Let me help you with that!
if(AI.killmode() = TRUE);
{
shutdown();
}
6
Fallouttgrrl2 days ago
+8
Now, is it true that we coded in a Killswitch? Yes, of course
Now, is it also true that our self-coding AI overrode that and is trying to conquer the Earth?
Yes, and you can believe we are really embarrassed by that oversight. Egg on our face, that's for sure. Those of us still alive at least.
8
chubby_pink_donut2 days ago
+41
I was invited to the Gmail beta back when an active ticker showed your email storage expanding byte by byte.
Now the more I read about AI, the more I want to put an air-gapped PC in a Faraday cage, running off a filtered DC power source in my basement, to store all of my information.
41
Tall_poppee2 days ago
+6
It's not much but I'm keeping my dumb Bravia TV until it croaks. Helps that it has a beautiful picture though.
6
Korlithiel2 days ago
+1
I’m not as fussed about my TV, but I can’t see ever keeping one that requires a connection. Sure, my current one whines about wanting one when turned on, then it goes away and life goes on.
1
misfitx2 days ago
+2
I never connected my admittedly c**** smart TV and haven't had issues with functionality.
2
MrAtlantic2 days ago
+32
Someone please tell me what the military is using all this AI for.
It isn't like we can't hire someone who knows their way around SQL or Excel for data stuff.
It isn't like a chatbot knows more about geopolitics than our top generals if we are asking it what military moves it recommends.
It isn't as if we can't hire someone who can code if we need something coded.
There is just no use case for it. I don't get it. I understand if it's just money laundering and shit, essentially, but our soldiers aren't out in the field asking Grok for mission directives, so like *what* are we using AI for?
Stuff like automated target detection, plane autopilot, etc already exists and has for ages.
32
xynith1162 days ago
+18
Trying to do everything but kinda shitty is the MO of most levels of US government TBH.
18
DrEarlGreyIII2 days ago
+23
DOD (DOW) is using claude for strategy and target recommendations in Iran. Seriously.
23
thefunkybassist2 days ago
+11
Thankfully, they always include the obligatory "AI might be wrong" disclaimer
11
egyeager2 days ago
+4
It's why we hit that school full of little girls :(.
4
DrEarlGreyIII2 days ago
+3
based upon my personal experience with claude, it wouldn’t surprise me :/
3
vikinick2 days ago
+15
The U.S. government, specifically the military, has produced the most documentation of any organization ever. Every version of software. Every server. Every weapon. Every soldier. Every intel report. Every foreign adversary. Every target. You feed that information to an LLM and it can spit back information at you if you ask it. If you set it up properly, it's basically a search engine but better.
Will it hallucinate? Yes. But using it properly means that you go double check.
15
NAVChaser2 days ago
+3
Certain parts of the military use it for research and/or coding.
3
Bamboonicorn2 days ago
-1
Military is the correct application for ai. There is nothing abstract about warfare.
-1
wildemam2 days ago
-8
AI does decision making. Strike or not, identify patterns, surveillance, lots of use cases.
-8
hexiron2 days ago
+23
Decision making is the absolute worst use case.
23
SaltyShawarma2 days ago
+10
F****** word. Imagine being so f****** stupid you think an AI is the best at making humanities based decisions.
10
esther_lamonte2 days ago
+4
Why are our modern warfighters so woke and feeble-minded that they have to use computers, like a complete nerd, to decide what to pew pew? Your grandpappies didn’t need your girly toys to stop the Nazis, so why are you so weak?
I’d love to see that greasy Hegseth get asked that.
4
omgfineillsignupjeez2 days ago
+1
>Why are our modern warfighters so woke and feeble-minded that they have to use computers, like a complete nerd, to decide what to pew pew?
It's a pretty bad argument imo. Imagine saying that to the Ukrainians. Why wouldn't Hegseth be able to give you the same arguments they would?
1
esther_lamonte2 days ago
+1
Clearly I’m making a joke and referencing Hegseth’s clownish ideas on how war is fought, and not an actual serious argument.
1
omgfineillsignupjeez2 days ago
+1
Yeah ofc it's a joke, I replied on the chance that you were serious when saying
>I’d love to see that greasy Hegseth get asked that.
1
SMFDR2 days ago
+3
I could use sandwich bread to mop up my house, that doesn't make it a use case for bread.
3
omgfineillsignupjeez2 days ago
-3
>automated target detection
how would you do this at parity or better, without using AI?
-3
TheCheshireCody2 days ago
+5
This is how we get WarGames, but for real.
5
msr42day2 days ago
+5
Because sharing classified info helps everyone who wants to take the US federal government apart. Surely, hackers are thrilled to know that Google is the one promising tight control.
5
Dalmation32 days ago
+3
Time to ditch Gemini now
3
alvinofdiaspar2 days ago
+2
Now imagine the holy trifecta of Google’s database, AI and Pentagon.
2
Strange-Effort13051 day ago
+1
Google, the Pentagon and Reuters. That's three MAGA institutions right there. Not a trustworthy party
1
PromotionPhysical2121 day ago
+2
Replacing Google will not be easy, but this will definitely put that thought in the minds of other countries. How do you know the data of Google users around the world isn’t compromised and available for the US government to surveil?
2
xeen3131 day ago
+1
Always has been
1
awholedamngarden8 hr ago
+1
Remember when one of the company values for Google was “don’t be evil”… and they ditched it in 2018
1
Magus_52 days ago
+1
And in the end, no empire or nation state could dare to conquer the mighty American military machine... with the exception being the machine itself.
- Me
Human Observer
Circa April, 2026.
1
Mrs_SmithG2W2 days ago
+1
No! I wouldn’t trust these people if they were standing behind me let alone to make decisions for our futures. F*** all the way off!