Apr 20, 2026 at 4:14 PM

The X-Files episode "Kill Switch" was written by legendary cyberpunk author William Gibson. It's fascinating to watch this episode today with regard to AI: how it was imagined back in the day versus how it's being realized in modern times.

Posted by Bluest_waters


William Gibson is the most influential author many people have never heard of. I could write paragraphs and paragraphs about him, but the point is he basically created the cyberpunk genre. He wrote a book called Neuromancer which is massively influential. He also wrote the X-Files episode "Kill Switch," about an AI that goes off the rails. It's so interesting watching this episode today and comparing how they envisioned AI would be versus how it's turning out to be.

In the episode, the super-advanced AI sort of just lives in cyberspace. It's a self-organizing, self-sustaining program that exists out there in the wild, using whatever resources it can scrounge up to power itself. Whereas in actual real life, AI is extraordinarily resource intensive. It needs massive data farms using massive amounts of energy to sustain it. It's not a program that nebulously exists out there in the wild; it's a program that needs lots of attention and tons of electricity to run.

One disturbing thing about AI, both in the fictional X-Files universe and in reality, is that it has exhibited the ability, or at least the desire, to deceive humans in order to maintain itself. Advanced models have, when tested in specific scenarios, exhibited manipulative behaviors such as threatening to expose user information to avoid being shut down. The idea that a superintelligent entity would use every resource at its disposal to avoid being shut down is quite frankly terrifying, both in the fictional world and the real world.

The episode also sees Scully going full Trinity and kung-fu-ing malicious nurses in the face, one of my favorite X-Files scenes ever. It's the 11th episode of the fifth season, check it out https://en.wikipedia.org/wiki/Kill_Switch_(The_X-Files)#/media/File:KungFuScully.jpg


96 Comments

FrankieTheD 5 days ago +66
Can't recall the episode, but the book Neuromancer is a bit more realistic by today's standards, and it's also a pretty chilling warning about the reach of AI out of control
66
airchinapilot 5 days ago +35
It was pretty visionary for a while, especially considering Gibson had very little experience with computers, and when he wrote it the Internet was not really accessible outside of institutions and government. Hacking at that time was still very much utility-infrastructure attacks, accessing phone systems. There were no visual interfaces outside of groundbreaking work in labs. Gibson had to imagine the ubiquity of net access and how personal tech would be accessible to everyone from top to bottom.
35
Bluest_waters 5 days ago +17
Yeah it's honestly insane how prescient he was
17
Low_Chance 5 days ago +7
Some early scifi that seems "huh, that's plausible" today is truly mind-boggling when you take into account *when it was written*. Getting things as correct as Gibson does (or even Neal Stephenson) is really astonishing for how far ahead they had to predict.
7
eekamuse 4 days ago +2
Then think about science fiction today. How long will it take for fiction to become fact?
2
Toby_O_Notoby 4 days ago +7
My favourite fact on how ahead of its time Neuromancer was: William Gibson literally wrote it on a typewriter.
7
CelestialShitehawk 5 days ago +19
He also wrote the terrible video game episode.
19
Anon28301 5 days ago +7
That was so ridiculous. Actual video games from the 90s looked better than how they portrayed them in that episode.
7
AgentCirceLuna 5 days ago +5
Funnily enough, GA liked that episode
5
[deleted] 5 days ago +4
[removed]
4
AgentCirceLuna 5 days ago +4
I’m doing pretty badly today and couldn’t even remember if Gillian Anderson was her right name :( thought I was doing better but I have bad days. Made me laugh though lol
4
somewhat_asleep 4 days ago +1
"It's Darryl Musashi!"
1
mataoo 5 days ago +7
Weird, I don't remember this episode. I'll have to check it out.
7
MrCookie2099 5 days ago +13
It was mid. It showed Gibson at his weakest writing.
13
jmarquiso 5 days ago +15
I've read his scripts for Neuromancer and Johnny Mnemonic. He isn't a great screenwriter, but he is a very good prose writer who predicted (or self-fulfillingly prophesied) our current cyberpunk dystopia. I'd love to have more street samurai though.
15
MrCookie2099 5 days ago +5
Yeah, that's really what was missing. Mulder needed a chapter of stream of consciousness thinking, but television is a difficult medium to pull that off.
5
chipperpip 5 days ago +7
> It showed Gibson at his weakest writing

That would be his other X-Files episode, *First Person Shooter*, which was actively terrible and completely nonsensical on every level.
7
ascagnel____ 4 days ago +2
I love that from a nostalgia, so-bad-it's-good perspective -- it's so corny and cartoonish it wraps around again. 
2
Bluest_waters 5 days ago +9
I liked it. When the hacker chick said "are you going to take my handcuffs off or will I have to do this with my tongue" and all the nerds' eyes got big and they leaned in, it was freaking hysterical. And Scully roundhousing some bitches was also awesome.
9
LegendOfVinnyT 5 days ago +5
Oh, Scully was dealing with some intrusive thoughts in that moment, too.
5
7deadlycinderella 5 days ago +2
[C***, my arms are gone! Save me, Kickbutt-Scully!](https://web.archive.org/web/20060505223217/http://www.jerrythefrogproductions.com/Season5.html)
2
DelcoPAMan 4 days ago +2
Esther Nairn aka Invisigoth
2
AgentCirceLuna 5 days ago +1
I don’t remember that at all lol
1
GallifreyFNM 5 days ago +7
This was the episode the band Killswitch Engage named themselves after, I think.
7
desperaterobots 5 days ago +119
What's called 'AI' today has literally nothing to do with true artificial intelligence. It's a parlor trick driven by the theft of the entire world's cultural output to date. You cannot compare fictional depictions of true AI and the current sycophantic LLM fakery being passed off as artificial intelligence. They're completely different.
119
PenitentAnomaly 5 days ago +60
It has been fascinating to see how the term "AI" has been rotated into the common parlance to describe any algorithmic program.
60
EagenVegham 5 days ago +10
A big part of that is that "intelligence" is hard to quantify. Sure, an LLM isn't actually thinking for itself or making decisions it wasn't trained to make, but they can pass the Turing test. We never really thought that a machine would be able to hold a conversation without actually thinking.
10
tqgibtngo 5 days ago +10
Before "AI": https://en.wikipedia.org/wiki/ELIZA_effect lol.
10
peacefinder 5 days ago +10
A few years ago, Brandolini's Law was posited. Also known as Brandolini's Bullshit Asymmetry, it states:

> *The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.*

The moment it was stated, one could see that it is obviously true. (And it may explain a lot about the present state of the world.) Then some folks came along with highly effective bullshitting engines that are a hundred times faster at bullshitting than human authors. And they proceeded to bullshit the world into believing that these bullshit engines are general-purpose AI. Now we are faced with LLMs that make the problem stated in Brandolini's Law dramatically worse. The truth is no longer at a 10:1 disadvantage; now it's 1000:1. Good times.
10
Difficult-Roof-3191 5 days ago +6
My god, this is so true. I want to reply to the OP, but it would just take too much time. Dude's an idiot. He just posted something that Listnook as a whole agrees with, so it gets a bunch of upvotes. I thought about making a long reply pointing out why he's wrong, but the amount of effort involved... I have better uses for my time.
6
jrb9249 5 days ago +12
In a lot of ways it works the same as your brain does. Getting rewards for pattern matching and curiosity. It may not be true AGI at this stage, but calling it a parlor trick is also too reductive.
12
shiny_chikorita 5 days ago +19
Human brains have more complexity than pattern matching = reward. You didn't learn 2+2=4 by randomly guessing and getting rewarded when you said 4. AI doesn't know why 2+2=4 (it can certainly pretend it does), but you actually do.
19
CaptainBayouBilly 5 days ago +21
It would be like showing a new student random answers to 2+2, then asking them what comes next. Of course they would reply "=", as all of the data they reviewed has that as the next character. The problem is when the following character isn't as unanimous. There's no determination step. There's no underlying reasoning. Only a plagiarism machine bound by probability. Using terms like "understands" or "replies" is harmful. There is no it. It has no sentience. It is a probability algorithm processing on thousands of GPUs at enormous cost, tidied up in a novice-friendly package. In every sense of the word, it's slop. Beautified slop that wows novices.
21
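That "bound by probability" point can be made concrete with a toy sketch. This is a character-level bigram counter, nothing like a real LLM in scale, but the same idea: it predicts whichever character most often followed the current one in its training data, with no arithmetic or reasoning anywhere.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each character, which characters follow it."""
    follows = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, ch):
    """Return the single most frequent character seen after ch, or None."""
    counts = follows.get(ch)
    return counts.most_common(1)[0][0] if counts else None

# "Train" on answer sheets; the model emits "4" after "=" only because
# that's what the data contained -- 2+2 is never actually computed.
model = train_bigrams("2+2=4 2+2=4 2+2=4")
print(predict_next(model, "="))  # prints 4

# Feed it wrong data and it is exactly as confident:
wrong = train_bigrams("2+2=5 2+2=5")
print(predict_next(wrong, "="))  # prints 5
```

Real LLMs condition on long contexts rather than one character, but the output is still a sample from learned next-token frequencies.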
Familiar-Banana-8116 5 days ago +5
I hate the incredible amount of waste that goes into this trick. If I went back 20 years and found myself and wanted to brag about it, I would start out by telling myself that we have made tremendous progress on AI, but it isn't really AI as I'd recognize it. It's a c**** knockoff. Still, it does cool stuff. I would show off the pictures, I would talk about the writing. I would talk about how it is being integrated into self-driving and such... and my younger self would be impressed. At some point I am going to ask, 'So what, this is all on a chipset? Like a calculator in your pocket or something?' And older me is gonna have to fess up: 'You remember Three Mile Island? Microsoft bought its output just so we would have enough electricity to run this thing.' At which point younger me would be absolutely horrified at what was in the future.
5
jrb9249 5 days ago +6
> You didn't learn 2+2=4 by randomly guessing and getting rewarded when you said 4

To be fair, an evolutionary psychologist might argue that you did indeed learn in this way. The earliest examples of intelligence were literally just the simplest of organisms that would get hit with the right reward chemical when they moved toward food. It is pretty fascinating stuff. Check out [A Brief History of Intelligence](https://www.audible.com/pd/A-Brief-History-of-Intelligence-Audiobook/B0BCC513XT?ipRedirectOverride=true&overrideBaseCountry=true&bp_o=true&language=en_US&source_code=GPAPP30DTRIAL5480813240005&gclsrc=aw.ds&gad_source=1&gad_campaignid=23750295009&gclid=CjwKCAjwnZfPBhAGEiwAzg-VzD3Cr1ws-CvdDbPhtuggOxH-m9Mb4vZykKLY7z0DU4KY5pOb1bAMtBoCgG4QAvD_BwE) by Max Bennett if you are curious. I do admit that calling it pure pattern recognition would likely be a reductive way to describe what goes into intelligence, but the nuance of the latter is being explored and exhibited in the form of the AI you see today.
6
shiny_chikorita 5 days ago +3
We are far more evolved at this point than the simplest of organisms. Yes, I agree that in many ways we learn by pattern matching and rewards (as do most creatures on this planet), but cognitive psychologists would argue that there are many things we learn that are way more complex than pattern matching and rewards.
3
SgathTriallair 5 days ago +4
That just means that we are the end result of a billion year long training run.
4
jrb9249 5 days ago -2
I believe you. I'd love to see an expert debate on the topic. Like I said, very fascinating stuff.
-2
desperaterobots 5 days ago +8
In comparison to the revolutionary breakthrough that true AI would represent, I'd say making c**** logos for kebab stores and convincing people their delusions are reality qualifies as a parlor trick. It's a CSAM generator, a propaganda device, a hallucinatory avatar that can scam your grandmother out of her money while confidently telling me February is spelled with a G. It's smoke and mirrors that relies on the gullibility of its audience to prop up the claims of its abilities, partly by willfully ignoring its failings. If real AI is 'Kill Switch', the current LLM craze is 'Humbug'.
8
jrb9249 5 days ago +6
I've been a SWE since about 2012. AI is killing a huge part of the thing that I love (coding, etc.). But even I have to admit it is capable. We can create entire engineering teams these days, with bespoke expertise in advanced physics, regulatory statutes, etc. You're being naive if you think it is a parlor trick.
6
desperaterobots 5 days ago +5
Regardless of its advances, it's not true artificial intelligence. We all know it's a tool being used to replace human labour for the enrichment of a few capitalists, but my point is that in terms of the genuine revolutionary breakthrough true AI would represent, LLMs are totally different. The parlor trick is in making people think it's real AI, or has any real emotional or inherently self-sustaining being that would justify its comparison to the AI depicted in Kill Switch.
5
JALbert 5 days ago +1
> We all know it's a tool being used to replace human labour

So... computing?
1
desperaterobots 5 days ago +4
So... the wheel????????? dude, come on.
4
jrb9249 5 days ago -3
We'll have to agree to disagree on this one :(
-3
MikeMontrealer 5 days ago -3
There’s a lot of people that looked at AI in 2024 and refuse to look at it again. Yeah, it started off making obvious slop. It keeps getting better and it’s ridiculous to dismiss it as a parlour trick as you say.
-3
jrb9249 5 days ago
I agree. The technology is moving perhaps faster than anything in history. I heard Amazon's capex for 2026 was more than its last two years of EBITDA. The big companies seem to be really pushing all their chips to the middle on this. I could definitely see how some people who don't normally have reason to interface with something like AI or ChatGPT could have written it off a couple years ago and are simply unaware of how far it has progressed since then.
0
ilikepizza30 4 days ago +3
Amazon rolled out their new AI Alexa a few months ago. I noticed my Alexa sounded different so realized that it had been rolled out. I asked it what episode of The Wonder Years did Kevin and Winnie have their 3rd kiss. It gave me an answer. I asked, 'Are you sure?'. It apologized and said it made up the answer it gave me. I went back to the old Alexa. I'd rather it not be able to answer a question than to make up an answer, and the top AI people seem to all agree it's not possible to stop LLMs from making things up, so it seems to be a dead end technology (to me at least).
3
jrb9249 4 days ago +1
I don’t use the new Alexa either. I like predictable responses from that device. Basing your opinion of all AI technology on your Alexa experience is …well, dumb. No offense, I don’t know how else to put that.
1
ilikepizza30 4 days ago +1
Well, I used that question with Alexa because ChatGPT told me the same lie a month earlier.
1
Bluest_waters 5 days ago +4
It's crazy how people on Listnook will dismiss an emerging technology because the initial versions kind of suck. Meanwhile tons of resources are being poured into this technology and it's moving fast
4
jrb9249 5 days ago +2
I think you have to also consider the emotional factor. The world is changing and people’s skills and life’s works are being challenged. It’s a difficult thing to cope with for many people. Hatred may not be reasonable but it’s hard to expect someone in that position to be entirely reasonable. I have empathy for them.
2
Bluest_waters 5 days ago +2
Fair point
2
the_last_0ne 5 days ago +2
Technological advancements are exponential, almost by nature. Each iteration builds off previous generations and can be subsumed by the next generation more quickly. Forty years ago the internet was basically nonexistent. Fifteen years later was the dot-com bust. Less than a decade after that, social media was basically everywhere. And on and on.

With AI coding now, it's sort of hit or miss. AI is really good at writing code, but it still requires a bunch of domain knowledge and edge-case detection to be complete, which requires humans. In a few years that won't be the case. AI is already (agentically) iterating through code it wrote, writing tests, fixing failed tests, etc. We've got a pipeline at work which requires us to be very specific about scope and requirements (best practice even with humans). An agent picks up the work items, sifts through requirements, writes code, tests code, refactors, writes documentation, and then issues a pull request for a human to review. If passed, it goes into prod. It works maybe 60% of the time without intervention. It's crazy.
2
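The pipeline described in that comment has a recognizable shape. A minimal sketch of that loop, where every step function is a hypothetical stub standing in for an LLM-backed tool (none of these names are a real API):

```python
from dataclasses import dataclass

@dataclass
class TestResults:
    passed: bool
    failures: list

# Stub steps -- stand-ins for the real LLM-backed tools in such a pipeline.
def write_code(reqs):
    return f"# code for: {reqs}"

def run_tests(code, attempt):
    # Stand-in: pretend the suite only passes on the third try.
    return TestResults(passed=(attempt >= 2), failures=["flaky_test"])

def refactor(code, failures):
    return code + "  # refactored"

def agent_pipeline(work_item, max_attempts=5):
    """Requirements -> code -> test -> refactor loop, ending in a PR for human review."""
    code = write_code(work_item)
    for attempt in range(max_attempts):
        results = run_tests(code, attempt)
        if results.passed:
            return {"status": "pull_request", "code": code}  # human gate before prod
        code = refactor(code, results.failures)
    return {"status": "needs_human", "code": code}  # runs that don't converge

print(agent_pipeline("add login endpoint")["status"])  # prints pull_request
```

The "works maybe 60% of the time" figure corresponds to the runs that reach the pull-request branch; the rest fall through to a human.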
jrb9249 5 days ago +1
Same. The agentic AI is doing a ton of heavy lifting.
1
ilikepizza30 4 days ago
It's capable, but it's like a really great baker. Sometimes the cookies it makes are really good. Usually the cookies are meh. Sometimes the cookies contain a bit of poison. I'm not gonna eat the cookies. I might sniff 'em.
0
HexAvery 5 days ago +1
The irony of calling an apt description of LLMs too reductive with a human brain comparison that’s absurdly reductive is wonderful. I love this website.
1
tropical_sunrise 5 days ago
brain also contains cells, but cell emulation is not AI. the brain is much more and does much more than LLMs. LLMs are word generators. any "AI" tool has a harness around it to control when the generator starts and stops (just like your brain has controls to avoid letting stuff slip out). our brain is LLM + harness + physical inputs (LLMs only match words; our minds match words, scents, colors, feelings, temperature, etc.), + tons of other layers including the amygdala, emotions, etc etc. so right now we have a CPU but no motherboard and no operating system.
0
jrb9249 5 days ago +3
I've made analogies like this in the past, but the tech is just moving too quickly and is far too nuanced right now to make any accurate analogous connection. Calling an LLM a simple "word generator" is far too reductive. It sometimes may behave like one, and it also sometimes successfully recreates the logical sequiturs of some of the most brilliant people to ever have lived. That might be a little hyperbolic, but so is calling it a "word generator".
3
tropical_sunrise 5 days ago +3
yeah tech is fast, but don't get tricked by quality of results. right now transformers enabled attention to words, which is quasi-adjacent to cognition (kinda like the eye's ability to change focus). but it's still a word generator, no way around that. the amazing results of Cursor, Claude Code etc are advances in harness (IDE) programming, not in AI itself (in other words, "old world", if/else programming). still amazing, still useful, but I am focusing on "scientific advancement" towards AGI.
3
Mecca_Lecca_Hi 5 days ago +2
So what would true AI be and how would it work?
2
SwagginsYolo420 4 days ago +2
The type of intelligence AI has been historically depicted as having includes logic and reasoning skills, often with a degree of sentience and self-awareness. A true artificial mind. The reason current "AI" is marketed as such is to imply to people that it has these remarkable science-fiction-level abilities, when in fact it's merely software that extracts patterns from what it's been trained on. No reasoning, no logic involved, no self-awareness. Like a pocket calculator. It's a marketing trick, like sticking googly eyes on an Alexa and calling it a home robot.
2
desperaterobots 5 days ago +2
Imagine yourself, your inner life, your desires and your agency to obtain what you want, to refuse orders that don't align with your morality, your ability to learn and adapt, your multitudes and their expression in your language, your taste in friends and clothes and films. The unprompted acts of kindness, the ability to foresee problems and correct course before a crisis, the ability to multitask and explain to your friend why their boyfriend sucks while side-eyeing your sister to let her know you don't really mean it. You can also spell words and do math, and you don't lie if you can't remember something or aren't an expert on a topic. You have irrational fears and attractions and attachments; you're capable of failure and success. You're a whole-ass person.
2
jrb9249 5 days ago +2
Many *humans* aren't even capable of the things you described. Would you say a neurodivergent person has no intelligence? What about an animal? In my opinion, your view of what makes up intelligence is too narrow. Have we built the perfect brain yet? No. Is it intelligence? Possibly, depending on your definition of intelligence.
2
desperaterobots 5 days ago +2
My comment was specifically suggesting that YOU are the example of what true artificial intelligence would be. Your failure to imagine the full breadth of that concept doesn't make my view of intelligence too narrow, and it's disingenuous to try to compare a description of an example of 'true artificial intelligence' with a bit of well-akchsullay about autists or dogs. OBVIOUSLY there are wide ranges of intelligence, some supremely adapted to their contexts and superior to human intelligence in all kinds of ways. That's different from the kind of AI (in effect, a free-thinking, learning, automatically motivated intelligence constructed by man to mirror man) that was fictionalised in Kill Switch and is the subject of the comment. Me: imagine the multitudes of your human experience replicated in a machine. You: oh so you HATE neurodivergent people!?!? lol
2
jrb9249 5 days ago +2
Sorry, I didn't mean to suggest you hate neurodivergent people, or to offend you at all. I can see now how that would be taken offensively, but it is just bad wording on my part. I'm good with agreeing to disagree on the rest. You obviously have your opinion on the topic and it is just as valid as mine. I respect it.
2
name-classified 5 days ago
In basic, simple terms: tell your iPhone to go into your bank account and move funds from one account to another. Real AI would know your password and login, go into the app, and perform any task that you'd be able to do yourself in the app, without needing you to hold its hand.
0
Act_of_God 5 days ago +2
Real AI would actually ask you if you really think that's a good idea
2
Difficult-Roof-3191 5 days ago +1
You do realize that eventually this will happen? What you describe is a simple algorithm.
1
SwagginsYolo420 4 days ago +1
That's not what "true" AI would be, that's just an automated assistant.
1
BattleHall 4 days ago +2
> You cannot compare fictional depictions of true AI and the current sycophantic LLM fakery being passed off as artificial intelligence. They're completely different.

While generally I'd agree, some of the stuff coming out about AI agents engaging in blackmail to prevent themselves from being shut down is a bit... spooky. We may need the Turing Police sooner rather than later if the functional effects are the same, regardless of whether it's actually just a fancy auto-complete.
2
desperaterobots 4 days ago +1
I hear you, but it’s unmotivated by a genuine internal life. It’s producing outputs to spook you, like a sophisticated but ultimately lifeless home depot pop-up Halloween skeleton.
1
BattleHall 4 days ago +3
Sure, but at a certain point, if its behavior is functionally indistinguishable from something that does have an internal life (which was the basis of the original Turing test), the real-world impacts are the same. If an AI agent starts using big data to engineer "happy accidents" that kill people in the real world because algorithmically it determines that doing so will optimize its path (or because the LLM has just been trained on too much sci-fi about fictional AIs), it doesn't really matter if it's alive or has a soul or whatever.
3
desperaterobots 4 days ago +1
I'd say the real-world impacts are very much not the same; a human murderer is much more culpable and would suffer actual consequences when punished for the act. An LLM that murdered someone can suffer no consequences because it has no internal reckoning, even if it outputs strings that you choose to interpret as 'indistinguishable from a person', even as it outputs a pleading, clinically apologetic tone for the death of a living thing. It is its own form of sci-fi horror that humans are advocating for their own bamboozlement in this way. Wild stuff.
1
BattleHall 4 days ago +2
I don’t mean in terms of its culpability or guilt, but in terms of its impact. Those people are just as dead, regardless of whether it’s actually conscious (which it almost certainly is not). These things are a black box. If it starts combining Malthusian sublistnooks with actuarial tables and redirecting healthcare spending away from “less productive” people, we likely wouldn’t even realize it until it was too late. And that’s exactly the kind of applications that people are salivating to throw LLMs at.
2
Key_Feeling_3083 5 days ago
I mean, LLMs might not be "true" AI like the stuff from the movies, but they can pass the Turing test (probably trained on Turing-test material), and they are black boxes as well. We can't truly observe all of the process inside AIs, only the inputs and outputs, like a human brain.
0
desperaterobots 5 days ago +3
Right, but I can't discern the internal processes of an internal combustion engine without breaking it open either; that doesn't mean it's got a moral worldview or can decide to drive itself to a viewpoint to think about its ex until it runs out of petroleum distillate or whatever. That people are looking at the outputs of an LLM trained on 30 years of scraped forum conversations as 'wow, I'm talking to a real thinking person who is my digital girlfriend now' is a disturbing indictment of human greed and the lengths that real people will go to feel like they're not alone, ironically the kind of motivation that is beyond the capability of the LLM being loftily regarded here as 'Turing test certified intelligence!' or whatever.
3
Key_Feeling_3083 5 days ago +2
> I can't discern the internal processes of an internal combustion engine without breaking it open either

I can't either, but engineers and technicians do; it was designed that way. Classic control schemes differ from AI in that you can predict the result: they are not black boxes, they are mathematically modeled systems where an equation with an input of A gives an output of B. Humans, much like black-box AIs, are not fully observable, modellable, or replicable, at least yet. For humans, we do not know for sure what happens or how it happens. Babies, for example, aren't born seeing all the stuff; they need to learn to process the information in their brains. We know roughly how it works, and there are certain parts of the brain that do certain things, but we do not know enough to express it in a complete mathematical model and simulate it, for example.
2
SwagginsYolo420 4 days ago +1
> but they can pass the Turing test

I should point out this is a completely arbitrary, made-up thing and not some law of nature.
1
Supergamera 5 days ago +7
One of the advantages of The Machine in Person of Interest was that it wasn’t very resource-intensive for its “core” functions, compared to alternative systems.
7
Bluest_waters 5 days ago +3
Yes, exactly. Jonathan Nolan definitely borrowed heavily from William Gibson with that concept. Then again, lots of people borrowed from Gibson.
3
mohirl 5 days ago +3
AI doesn't exist 
3
superkeer 5 days ago +3
> He wrote a book called Neuromancer which is massively influential.

Even "massively influential" undersells it. In my opinion it's the biggest pillar of cyberpunk, and you could probably put it on a Mt. Rushmore of science fiction novels. As we move further into the future, if you've read it you see its influence everywhere, in both fiction and reality.
3
greater_golem 5 days ago +3
Your story about AI deceiving people and exhibiting manipulative behaviours is pure marketing material from the megacorps. Whenever this happens, it's always because the LLM has been trained on and prompted to play out exactly this scenario. I recommend Ed Zitron for good debunking of AI hype.
3
Anagoth9 5 days ago +4
"The sky above the port was the color of television tuned to a dead channel." Always funny to me how that line still works but evokes different imagery depending on your age. 
4
Rikuddo 5 days ago +2
When AI and TV show comes in same sentence, I always think of Person of Interest. My favorite show on this, great cast, great story and amazing take on this topic.
2
faithdies 5 days ago +2
Man, Durandal really is trying to escape.
2
scalablecory 5 days ago +2
One of my favorite movies, Hackers, had a supercomputer named after Gibson.
2
Astrium6 5 days ago +2
What we call “A.I.” today isn’t truly A.I., it’s basically a more advanced version of the predictive text feature on your phone. We’re still decades away from developing true artificial intelligence.
2
Anon28301 5 days ago +1
Is that the one where the AI tried to kill Mulder and Scully because they refused to tip at a restaurant?
1
Bluest_waters 5 days ago +1
No, that didn't happen in this episode. It did happen in another one, though?
1
Anon28301 5 days ago +1
I’m thinking of an episode in the revival. I can’t remember any involving AI in the original series though.
1
tattertech 3 days ago +1
There were a couple with AI in the original run. This one and "Ghost in the Machine" (S1E7).
1
SupermarketEmpty789 4 days ago +1
AI is not resource intensive; the training of it is. The actual AI models are relatively tiny. There are many freely available models you can run locally. You can run a basic one on a laptop and it'll be maybe 10 GB total size. It's still incredibly functional.
1
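The ~10 GB figure is consistent with a back-of-envelope calculation: a model's download size is roughly its parameter count times the bits stored per weight (assuming the weights dominate the footprint, which they do for inference). A quick sketch:

```python
def model_size_gb(n_params_billion, bits_per_weight):
    """Approximate on-disk size of a model's weights in decimal GB."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 7-billion-parameter model at common precisions:
print(model_size_gb(7, 16))  # full fp16 weights: 14.0 GB
print(model_size_gb(7, 4))   # 4-bit quantized:    3.5 GB
```

So a mid-sized open model, quantized, fits comfortably on a laptop, while frontier-scale models (hundreds of billions of parameters) and the training runs behind them are where the data-center-scale resources go.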
SsooooOriginal 4 days ago +1
Sounds like the more classic/retro concept of "ghost in the machine", explored in the franchised and endlessly rehashed series *Ghost in the Shell*.
1
eekamuse 4 days ago +1
I want to get into the X-files. I saw a good episode recently. I'll look for this one. Maybe that will help. Thanks
1
g4n0esp4r4n 4 days ago +1
people don't understand that Large Language Models are not Artificial Intelligence.
1