AI saves you so much money that you can lay off employees
AI costs so much money that you have to lay off employees
Sounds great
1488
OhuiginMar 15, 2026
+260
While they outbid our public utility companies for our water and our electricity.
260
snowflake37waoMar 15, 2026
+90
And lobby for children online safety laws and age verification scams with millions they made from our data already
90
ohmyblahblahMar 15, 2026
+35
35
Dwarfhole243Mar 16, 2026
+1
Hold up, that’s where the water issue comes from? How is that allowed??
There should be a guaranteed amount of water for public use before any company can get any.
1
MormanadesMar 15, 2026
+85
We are in a recession, and as AI fails to essentially carry the entire economy, we will soon enter a depression.
Brace for impact, it's coming.
85
Tall-Bell-1019Mar 15, 2026
+8
"Ah shit, here we go again"
8
VeryNoisyLizardMar 15, 2026
+15
what number of "once in a lifetime" crisis would this be for millennials? 4th? 5th? though this one has the worst outlook compared to the previous ones, I'll give it that
15
czs5056Mar 15, 2026
+7
Just economic or overall? If just economic, we got dot com, 2008 housing, covid shutdown, and this.
7
Maine_Made_AneurysmMar 15, 2026
+9
You can't tell me the gov is just gonna let AI or specific companies go bankrupt.
The government will bail them out and then fully consume whichever company is the most successful/beneficial to the government.
With Trump we can just say he'll get his cut from the highest bidder. If the companies can keep going like this until the next election, then it'll be Wall Street all over again.
They'll get slaps on the wrist while thousands of folks lives get completely fucked up.
9
Poison_the_PhilMar 15, 2026
+21
The goal is for there to only be the owner class and one big peasant class. They *want* to crash the economy. They want one last, permanent rug pull to really truly divide society.
Peter Thiel’s (JD Vance’s owner) original intention with PayPal was to replace the dollar. He wants to literally reboot and replace the US with a patchwork of sovereign corporate city-states.
21
punter75Mar 17, 2026
+1
what government bailout in the last 20 years has resulted in any government ownership of the company? they're just giving shit away for free
1
TailballMar 15, 2026
+3
You mean revolution, right?
3
devonhezterMar 15, 2026
+2
What to do? Stockpile cash and move to New Zealand?
2
TheBonesmMar 15, 2026
+36
At this point I imagine tech layoffs are due to economic conditions, but they blame AI instead
36
okram2kMar 16, 2026
+2
and the ironic thing is the only thing preventing a full scale recession is the AI bubble
2
tech240guyMar 15, 2026
+18
Also, companies that consume AI will be in for a nice treat when AI services go up in price once it becomes a "standard". Not Netflix-bad, but it could be. A lot of enterprise software is already being designed to leverage AI rather than the vendors' own in-house solutions.
18
metametapraxisMar 15, 2026
+33
It will be far, far worse than Netflix. Claude Code $200 plans are thought to cost anthropic about $5000 in compute.
33
Absolute_EnemaMar 15, 2026
+4
Sounds like predatory pricing.
4
metametapraxisMar 15, 2026
+6
Welcome to the USA, 2000 onward, with no regulation.
6
callmesandycohenMar 15, 2026
+8
Meta is always chasing trends now, spending exorbitant amounts of money and ultimately failing. I can't name the last truly creative innovation they've had.
8
wolfannoyMar 15, 2026
+2
Oh, they're doing more than just chasing trends. They p****** a lot of content from books to train their AI, word going around. They might have been funding some of the online age verification cases.
2
phil_the_builderMar 15, 2026
+2
Maybe they could try laying off AI. 🤔
2
0xF00DBABEMar 15, 2026
+1
Paying hundreds of millions per year to individual employees is f****** insane.
1
HrmerderMar 15, 2026
+1
The duality of ma… I mean.. AI
1
old-skool-broMar 15, 2026
-3
If it was cheaper to use people, they'd use people...
-3
AggressiveSkywritingMar 17, 2026
+1
We're finding out that this is just objectively not true lol
1
Dangeresque300Mar 15, 2026
+173
"This thing we created as a c**** replacement for our employees is becoming too expensive to maintain. We better fire the rest of our employees!"
Like, what? What is the logic here?
173
SnoopsBadunkadunkMar 15, 2026
+30
My phone won't translate TFA to English for some reason, but… most likely AI washing. They'll go on a hiring spree in Asia and spend the difference on more chatbots. They're desperate to stay relevant and keep their image to Wall Street as a growth stock, better get in on the shiny new thing that's keeping the dumb money flowing. Only, the chatbots can't do the jobs, and won't anytime soon.
30
JC_HysteriaMar 15, 2026
-10
The logic is pay top performers and shareholders more over time
-10
moritsuneeMar 15, 2026
+288
Just watching all these companies tailspin with all their sunk cost into AI and ending up not knowing how to use it as anything more than a crappy chat bot that hardly contributes to their daily operations.
But of course the employees must go when something's gotta give.
288
sammystevensMar 15, 2026
+152
Extractive AI is great. Pulling features out of unstructured data, finding patterns, etc.
Generative AI is a mixed bag of lies and hallucinations.
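As a crude sketch of what "extractive" means here, pulling structured fields out of unstructured text: real extractive models learn these patterns from data, so the hand-written rules and the sample note below are purely illustrative.

```python
import re

# Toy illustration of the extractive idea: turn unstructured text
# into structured fields. Learned models do this far more robustly;
# hand-written rules are just the simplest possible version.
def extract_invoice_fields(text):
    amounts = [float(m) for m in re.findall(r"\$(\d+(?:\.\d{2})?)", text)]
    dates = re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text)
    return {"amounts": amounts, "dates": dates}

note = "Paid $149.99 on 2026-03-15, refund of $20 issued 2026-03-16."
fields = extract_invoice_fields(note)
print(fields)  # {'amounts': [149.99, 20.0], 'dates': ['2026-03-15', '2026-03-16']}
```

The key property: the output is verifiable against the input, which is exactly what generative output is not.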
152
BigRedRobotNinjaMar 15, 2026
+53
Thank you. AI is not a single product, it's a process for creating tools. Some of those tools are legitimately revolutionary, potentially world-changing. Some of those tools are "plausible bullshit" generators. None of those tools are even close to the AGI overlord of Silicon Valley fever dreams.
53
itsatumbleweedMar 15, 2026
+11
The companies that are taking the time to figure out which parts of their workflow are best suited for AI are going to come out on top. Most of the time that will involve making people more efficient so that you can bring in more customers without expanding; there may be some redundancies, but wholesale layoffs are usually not the product of using AI right.
The companies that are using it because everyone is and so they should, and who believe it's a panacea are going to lose a lot of institutional knowledge and have a very compelling broken product.
This has at least been my prediction for a while, and it's looking truer the more we learn about AI. If you're looking to see if a particular company is using it right, look for the ones that are very well aware of its limitations.
11
camelCaseCoffeeTableMar 15, 2026
+1
I 100% agree. We’re at an 80/20 place with AI, I believe. Or at least with AGI and AI that can replace humans.
We’ve made it 80 percent of the way, but in only 20 percent of the time. The next 20 percent will take 4x as long to complete though.
AI is very useful, but it’s not replacing humans anytime soon. There’s so, so many little, weird errors there. And they’re constant and not going away really. Learning how to harness the 80% of AGI that we have, though, is going to be the key to winning, at least until everyone figures out how to harness it
The other thing is, we may stagnate here a bit. As AI costs come due, these companies won’t be able to keep spending. They’re gonna have to either raise costs and show the true cost of these tools to people, or the next arms race becomes cost reduction rather than pure performance. But the costs are untenable. I don’t know how long they can continue to subsidize this so heavily, but I know it’s not forever — they have to pass them on to us, lower them for themselves, or some combo of both
1
afoxboyMar 15, 2026
+3
we don't have AGI. that's not a thing.
3
UncommonalityMar 17, 2026
+1
Also LLMs will never lead to AGI. An intelligence is more than a language synthesis machine, it needs the ability to think and plan and remember and imagine and hypothesize. We don't even have the proper words for 90% of the components it would need, let alone any idea of how to create those components.
1
DarthEinsteinMar 16, 2026
+1
Respectfully, we don't have anything remotely close to AGI.
AI chat bots are not intelligent, they are random bullshit generators that happen to be very good at pretending to be intelligent. The technology is just fundamentally entirely unrelated to what actual AGI would look like.
1
camelCaseCoffeeTableMar 16, 2026
+1
I didn’t say we had anything close to AGI, I said we’ve hit an 80/20 problem.
I’d encourage you to read some technical papers on AI though if you think all it is are bullshit generators. I don’t know if reading or understanding those are in your wheelhouse or not (not meant to be offensive, it’s a complex and heavily technical topic, many people wouldn’t be able to understand a lot of these topics), but the smartest minds in the world aren’t quite sure how these tools work.
Yeah, at the highest of high levels, they’re predicting what comes next. However, when you dig in, there’s way more nuance than you may realize. There’s evidence of planning ahead that takes place, there’s evidence of multiple independent chains of thought being combined together to arrive at an output, etc.
What we currently have is not AGI. I’m not claiming it is. But it’s a massive leap forward, and to characterize it as “a bullshit generator” demonstrates either a lack of knowledge in the subject or intentional misleading word choice.
1
BlapooMar 15, 2026
+5
I'm a developer in the middle of this shitstorm. You are 100% correct. LLMs are tools that can be used for truly world-changing good. But in our late-stage capitalistic hellscape, the only thing our leaders can strain their pea-brains to think up is *cut costs*.
It's up to the rest of us to implement this new technology for good, but all anyone can see is the bullshit the corporations put out that makes splashy headlines. It sours the term "AI" to the point that most people have made up their minds and there's no space for any good faith actors to find a foothold in public perception.
Simply put, _it_'s a new tool that hasn't been implemented and integrated into reality properly yet. Lots of people hitting themselves with hammers and then blaming the hammer.
5
UncommonalityMar 17, 2026
+1
Honestly I don't even know why they want to make AGI. AGI would immediately progress to ASI and kill them all
1
Wooden-Post-3080Mar 15, 2026
Can you teach me about what AI should add to my resume that's actually useful?
0
idonteven93Mar 15, 2026
+10
Machine learning and its subfields, what they call discriminative AI now. Supervised learning, unsupervised learning, and reinforcement learning are the main branches.
Check out the Deeplearning.ai course on Machine Learning, it's great.
After that you'll ask the question "Why do this with GenAI? There's a better way" for like 90% of the problems people wanna solve with ChatGPT.
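For anyone curious what the supervised branch looks like at its absolute simplest, here's a toy 1-nearest-neighbor classifier in plain Python; the points and labels are made up purely for illustration.

```python
import math

# Minimal sketch of supervised learning: labeled examples in,
# predictions out. A 1-nearest-neighbor classifier needs no libraries.
train = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
         ((8.0, 9.0), "large"), ((9.5, 8.5), "large")]

def predict(point):
    # The label of the closest training example wins.
    return min(train, key=lambda ex: math.dist(point, ex[0]))[1]

print(predict((1.1, 0.9)))  # small
print(predict((9.0, 9.0)))  # large
```

For a "categorize these data points" problem, something in this family is usually cheaper and more predictable than prompting an LLM.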
10
MoleMoustacheMar 15, 2026
+3
Yes. People who are less educated in what it is think that an LLM is all AI, and can be applied to every problem without realising all it does is generate the text it thinks you want to hear. It is not a general problem solver, but can solve very specific problems related to text.
3
idonteven93Mar 15, 2026
+1
Yeah exactly. When the task is "generate new content" it probably is the only solution for you. If it's anything else (categorize, find patterns, extract data) you probably have other, better and cheaper options.
1
IgneousPutoriusMar 16, 2026
+1
Generative AI is not just LLMs though - there are plenty of good uses for learning distributions. I know generative AI has a negative connotation now but it literally just means learning a model you can generate samples from.
1
ZanderMFieldsMar 15, 2026
-3
>>Extractive AI
Is that what has been helping me grow really great cannabis for c****? Because aside from the occasional hallucination ChatGPT has been biologically accurate *and* good at finding frugal-value items to add to my watering can.
-3
terranyMar 15, 2026
+11
Always on the employee to perform better or be more efficient. Never any blowback for investing in the poopooverse or shitLama
11
gnimshMar 15, 2026
+43
Does anyone know if they can just stop developing AI and pay their employees instead?
43
_head_Mar 15, 2026
+13
The rich leaders of the companies say no
13
ColdNotionMar 15, 2026
+11
They genuinely can’t, and it’s a serious economic problem. They’ve made massive spends on the GPUs, data center infrastructure, and marketing for AI products, none of which they can back out of without losing an absolute ton of money. Making matters even more troubling, a lot of their bookkeeping is being done with the assumption that there will be a profitable AI sector in the near future, which means the finances in this are actually *worse* than they look at a glance. Backing out now might be financially prudent, but it would tank Meta’s stock, and possibly trigger a collapse of AI-related stock prices generally, which are massively overinflated. All of these companies want to ride the AI wave for as long as possible, benefiting from the stock price bumps that come with it, and let another company be the one who gets the blame for popping the bubble.
11
TeamWorkTomMar 15, 2026
+28
This sounds like the start of the AI bubble burst.
Wishful thinking is more likely.
28
Actual__WizardMar 15, 2026
+60
So, the costs of something that people don't want is going up, so they're going to delete the value out of their company to pay for it? So, you're going to get "worse products" so they can "develop a product that you don't want."
Why on Earth did a social media company get involved with AI when they know absolutely nothing about AI?
That seems like an absolutely catastrophically bad mistake...
Maybe they should try something that they're capable of, like another 'hot or not' website or something...
60
KoolalaMar 15, 2026
+17
They are an ad company. It is used for targeting ads.
17
matrinoxMar 15, 2026
+18
They did hire an amazing AI person a decade ago but then effectively replaced him with someone who knows very little about AI while spending billions doing so. Very big brain /s
18
MoleMoustacheMar 15, 2026
-3
Sarcasm tags ruin all sarcasm.
-3
hippofumesMar 15, 2026
+5
They're a necessary evil in a world where some genuinely-held beliefs are so patently absurd that it necessitates differentiation over text medium, where you don't have the nuance of tone and facial cues.
5
Actual__WizardMar 15, 2026
-5
Yeah, real AI researchers had to do a massive analysis that involved separating the philosophy out of AI and putting the science back, so that it's possible to make forward progress again. So, some of us are aware of how little they know about what they're doing, as we can clearly see they made mistakes because they followed the advice of philosophers instead of scientists. That's not how math works. The math is legitimately wrong, and that's why it hallucinates.
-5
darthlincoln01Mar 15, 2026
+1
That's for sure a theory on the current limitations, but I'm not sold on the problem being that language itself is incapable of doing what they're attempting with AI.
I can't really explain it, because.... that's the problem. However consider there are people that exist who claim they have no inner dialog at all and they can navigate this world without language dictating their actions. Moreover there are obviously animals in this world that can understand and make complex decisions about reality without any language at all.
It can be argued that math is not language, so perhaps there are gains here, but I would argue that math can not fully explain reality either. Math can not explain love or comedy, for example.
1
EatinSumGrapesMar 15, 2026
+7
A social media company investing in a product that removes the true social nature of a product. Maybe they see it as putting their eggs in... opposing baskets?
7
JayBird1138Mar 15, 2026
+23
Well, to be fair, they did introduce a couple of open source models that were pretty good for their time.
But now, I think they are all drinking their own Kool-Aid, and actually buy into their own hype that 'AI' (LLMs actually) is the pathway to the future.
Also, keep in mind, Zuckerberg is not very clever. He copied (stole) the idea of Facebook. His own idea of metaverse (technically a copy of playstation home?) was a flop.
This will be another flop because they fundamentally don't understand the technology and its inherent limitations.
23
Actual__WizardMar 15, 2026
+3
>Well, to be fair, they did introduce a couple of open source models that were pretty good for its time.
Not really, they tricked people. It's not AI. It doesn't understand anything. So, it's just a plagiarism parrot like scientists have pointed out multiple times.
>Also, keep in mind, Zuckerberg is not very clever. He copied (stole) the idea of Facebook. His own idea of metaverse (technically a copy of playstation home?) was a flop.
Yeah, he's a copycat. He just copies other people's work and figures out a way to make gigapiles of money from it by turning it into a giant scam. Then apparently he covers up the crimes that are being investigated by law enforcement, because he just got caught doing that in Japan. So, he's a criminal.
>This will be another flop because they fundamentally don't understand the technology and its inherent limitations.
The main problem with LLM tech is: They don't understand what language is in the first place. They think "people using words is language. That's language usage..." They're unaware that it's a standardized system because they refuse to read the history of language creation... They're so "anti education that they refuse to look there." So, they're legitimately building language technology with no knowledge of language at all... They have absolutely no idea what they are doing and they are way off course...
I assure you: There is nobody at Meta that can even accomplish the extremely basic task of classifying words by their type. Obviously, they have no entity detection algo, so, I don't even know what they're doing, because that's how the language works... You're just using words to describe things, that's "how it works." So, they don't have a system to get "the things" or "the words that are used to describe the things." Which, that's a big problem, because "that's the whole language." "That's what human communication is..." So they have nothing...
To be clear: You were taught how to do that in kindergarten... It's a two step process. They teach students objects (entities, which are basically just nouns, so "apple", "cat", and many more) and then they teach the students how to describe the properties of the entities. So, "it is just a math equation." First students learn A+Entity (big cat), then they learn A+B+Entity (The young boy), and eventually, A+B+C+Entity (the little young boy). Then the process just repeats three or more times with the (S,V,O) order to form a complete sentence. Because, to save energy, humans combine, typically 3, clauses into one sentence.
3
the_nobodysMar 15, 2026
+6
But it's true that you don't need to know the classical components of the language you speak in order to use language and communicate. I'm not defending chatbots or LLMs, trust me, but it sounds like you're saying it's bogus because it doesn't "comprehend" something it doesn't actually need to know?
6
Downtown-Elevator968Mar 15, 2026
+1
Yeah, it's become quite obvious that Zuck just copies ideas, throws a bunch of money and resources at them, and hopes it sticks. It worked out well for Facebook, not so much for the Metaverse and his AI efforts.
1
IKillZombies4CashMar 15, 2026
+3
They got involved in AI because they got involved in VR and lost money
3
mountaindoomMar 15, 2026
+5
Can Meta products get any worse?
5
Actual__WizardMar 15, 2026
+6
Yeah well, after people found out their cool AI glasses are really just spy cameras for Mark, you know, I don't know. I think people are starting to get sick of being lied to, but I'm not sure yet. Maybe a few more scams from big tech and then people will finally figure out that it's a bunch of crooks.
They don't even create normal products anymore, it's always some kind of weird trick. I don't even think these companies know how to create a decent product anymore at all because they're going to start the product creation process with some kind of weird scam, that they're trying to hide in the product. So, "the problem their product solves, is obfuscating their weird scam and not actually solving a problem for their customers." That's exactly what LLM tech is... The problem they're hiding is that it's not AI, it's a plagiarism parrot... They thought that by jamming everything into a matrix, that "it's a black box and we can't figure it out." Newsflash: Scientists figured it out. They're a bunch of crooks and liars... What's going on is absurd and those people should be arrested for fraud...
6
ChrmdthmMar 15, 2026
-2
Know absolutely nothing about AI? They started FAIR, which was considered one of the best AI research labs in the industry, over a decade ago. If you were in the tech space, you would know how prestigious FAIR was.
-2
Actual__WizardMar 15, 2026
+2
>They started FAIR, which was considered one of the best AI research labs in the industry, over a decade ago.
That has no relevancy to the current discussion.
But, to "be Fair" pun intended, uh, what they produced sucks. So, I don't know why you are even bringing that up.
Compared to graph based techniques that I've seen both, being actively demoed, and with more coming, obviously, what they produced is not "correct." Personally, I'm thinking "beyond graphs", but you know, that's me. I mean obviously when people are soloing these projects and then you compare that to what they were trying to do, clearly they were wasting their time and their parent company's approach to, honestly basically everything, is terrible.
2
oneonusMar 15, 2026
+48
I hate AI, it's not only taking away jobs, but it's horrible for the environment, with their data centers taking all of our water and consuming huge amounts of power. Case in point:
https://techiegamers.com/texas-data-centers-quietly-draining-water/
Larger data centers can each "drink" up to 5 million gallons per day, or about 1.8 billion gallons annually, usage equivalent to a town of 10,000 to 50,000 people.
https://www.eesi.org/articles/view/data-centers-and-water-consumption
In addition, data centers are power hungry and produce droning noise and vibrations for surrounding areas, as well as causing local utility rates to spike by 50%. All with massive tax breaks for the richest companies.
[Exposing the Dark Side of AI Data Centers](https://youtu.be/t-8TDOFqkQA?feature=shared)
Business Insider exposed that the power needs of data centers have forced some states in the US to withdraw from their carbon emissions targets. Power companies are even looking to extend the life of coal and gas plants to help meet the unprecedented demand.
Finally, to make things even worse, [Data centers turn to commercial aircraft jet engines bolted onto trailers as AI power crunch bites ](https://www.tomshardware.com/tech-industry/data-centers-turn-to-ex-airliner-engines-as-ai-power-crunch-bites)
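For what it's worth, the daily and annual water figures cited above are consistent with each other:

```python
# Quick sanity check on the cited water figures.
gallons_per_day = 5_000_000
gallons_per_year = gallons_per_day * 365
print(f"{gallons_per_year:,} gallons/year")  # 1,825,000,000 gallons/year
```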
48
NYCinPGHMar 15, 2026
+10
Is it just me, or is the link to the article in Arabic, with no way to change language?
Anyway, here’s the link to the article in English:
https://www.reuters.com/business/world-at-work/meta-planning-sweeping-layoffs-ai-costs-mount-2026-03-14/
10
stp875Mar 15, 2026
+8
Over 100 comments and not one person even bothered to open the link lol.
8
cant_find_me_hereMar 15, 2026
+3
I thought I was going crazy
3
texasguy911Mar 15, 2026
+7
Looks like employees are looking at layoffs no matter what. If AI is c**** and replaces them - layoffs, if AI is expensive, layoffs. Basically, if the sky is blue, layoffs.
7
SeasonElectrical3173Mar 16, 2026
+1
Good. Now they get to pay for the days they were cracking jokes about not giving a shit about the people outside of their tech bubble who were losing jobs because of the shit they worked to build.
I couldn't care less that these tech brats are losing their jobs. They deserve it more than anyone else in the US workforce.
1
ControlLayerMar 15, 2026
+16
So the company that lost billions to “build the metaverse” (including rebranding itself) has lost too much money after going all in on ai? Shocking.
16
hammackjMar 15, 2026
+9
Claude do me a solid build a Facebook clone scale to 100m users call it friendz with a z. Make no mistakes.
9
Necessary_Sir_5079Mar 15, 2026
+10
Imagine how much they would whine if they actually paid their own power bill.
10
VeryRareHumanMar 15, 2026
+8
They lost money on AI... so, the layoffs. Who is using their AI outside of Facebook and Instagram?
8
CaveMantaMar 15, 2026
+9
Bubble, bubble, the AI bubble is in trouble.
9
pcurveMar 15, 2026
+5
I didn't know Meta was still doing AI.
5
Blue_HyperGiantMar 15, 2026
-3
Meta is a leader in AI. They continuously create, then open source, LLMs under the Llama name.
They also do a lot of AI work that's not LLM related.
-3
invalidredditMar 15, 2026
+2
Such great leadership at Meta...
2
End_Awakeness451Mar 15, 2026
+2
Takes a lot of money to design an AI that can fake being a regular person and look at and click on ads.
2
fangtingwrongMar 16, 2026
+2
Where are the "AI will create jobs" people?
2
SeasonElectrical3173Mar 16, 2026
+2
Good riddance. Tired of being made to feel bad for tech people helping out a company that wants to speed run the end of the human labor force. They are all complicit, whether they worked directly in AI or not.
Why would I feel bad for some overpaid employees of a company they all know works to harm and make tech addicts of young people, and of society as a whole?
2
HalloqweenMar 15, 2026
+4
They have $8 billion to build a single AI data center but no money to pay employees
4
f12345abcdeMar 15, 2026
+1
first metaverse, now AI
1
tech-slackerMar 16, 2026
+1
So more or less AI is demanding additional benefits and getting them.
1
DogsAreJustTheBestMar 15, 2026
+1
Meta is an advertising company, that runs social media platforms to host that advertising. That is all.
Everything else is just fluff that does not make money for them, and just acts as hype to keep the stock over valued. The advertising makes so much money that it can prop up all these pointless diversions.
1
Lynda73Mar 15, 2026
+1
When are we going to start charging an AI tax to companies who replace humans with AI? The money can go towards UBI.
1
DrBunsonHoneyPooMar 15, 2026
+1
This is just asinine at this point. If only we had a government that could say, maybe, limit A1?