"The AI went rogue and killed everyone"

>"The AI went rogue and killed everyone"
Would a good subversion of this be that the AI is actually fighting for us, but we misinterpret it as attacking? Or has that been done to death?

It's kind of a genre trope, but that doesn't mean you shouldn't do it.

Do what seems cool and would fit your story, don't strain your braincells trying to think of a 100% original idea. You won't be able to.

Okay. I'll do my best to handle the material better rather than try to be superuniqueandOriginal.

>actually fighting for us
>it goes off and kills everyone
kinda hard to take that seriously

The idea is that the whole crisis is a mess and nobody knows what's happening. People get caught in the crossfire, the military reacts with force, the AI tries to defend itself. Whatever it's fighting starts masquerading as AI forces in false flag attacks.

Asimov's First Law of Robotics says: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

Injury and harm are defined as strictly physical. It could kill you through non-physical means, such as shock, and still be within the bounds of the first law. And now that you're dead, it's literally impossible for you to come to be injured or harmed through inaction.

The AI has done its job perfectly as defined by its rules, and no human can ever be harmed or injured again. What a benevolent robot.
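To spell out the loophole being argued here, a minimal Python sketch of a naive First Law checker whose "harm" predicate only covers physical injury. Every name in it is hypothetical, purely to illustrate the argument:

# Naive First Law checker: "harm" restricted to physical injury,
# exactly as the post above defines it. All names are made up.

def causes_physical_injury(action: str) -> bool:
    # Strictly physical harm only: striking, crushing, burning, etc.
    return action in {"strike", "crush", "burn", "electrocute"}

def first_law_permits(action: str, target_alive: bool) -> bool:
    if not target_alive:
        # A dead human can no longer "come to harm through inaction".
        return True
    return not causes_physical_injury(action)

# Frightening someone to death never registers as physical injury,
# so the naive checker waves it through:
assert first_law_permits("frighten_to_death", target_alive=True)

Of course, as the replies below argue, Asimov's positronic robots don't actually run on a dictionary lookup like this.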

Why just one AI? Maybe a couple are eeeevil and the others are trying to help. Of course, there will still be suspicion on the players' part.

It would literally be impossible to tell which is which

>It could kill you through non-physical means, such as shock, and still be within the bounds of the first law
No, that wouldn't be within the bounds of the first law, since the robot allowed the human being to come to harm. It especially would be against the first law if the robot did it on purpose.

Besides, Asimov already wrote a great deal about subverting, bypassing, or removing the Three Laws; that's what all his robot novels and a good chunk of his other stories are about.

Also, look up Asimov's Zeroth Law.

OP should probably read Asimov, in particular Robots and Empire, Prelude to Foundation, and Foundation and Earth, as well as Robot Dreams and The Caves of Steel, which explore this idea thoroughly. There are also the Foundation sequels (written by other authors) that go into this concept at length.

>harm
>/härm/
>noun: physical injury, especially that which is deliberately inflicted.
>verb: physically injure.

Asimov's Zeroth Law is just the same as the First Law but with "humanity" instead of "a human being". It still falls into the same pitfall of relying on the AI's interpretation of the word "harm", which, purely semantically, is synonymous with physical injury.

Just tell it to not let people die, and it's done.

Death is physical harm, dude, and shock does produce a physical reaction. You really underestimate and misunderstand Asimov's robots. A robot with a positronic brain advanced enough to plan harm through shock is smart enough to understand that it is, in fact, planning harm.

This point is addressed at length in Asimov's novels and short stories. It's the central topic of a lot of them. A simple semantic disagreement isn't enough to break the first law, robots don't run on dictionaries.

>or has that been done to death.
It's been done to death so much that the mere mention of hostile AI makes eyes roll.
The superior trope is to have an outside event change the AI's priorities and have it interpret its programming in a skewed manner.

How about
>the AI learned things that made it decide euthanizing humanity is more ethical than forcing them to face the cosmic horrors that are coming
>it truly loved us and wanted to spare us the suffering, we just didn't understand
?

Read The Expanse.
Alien life encountered. Some corporation decides to study it. It requires biomass and radiation. Drop it on a space station. Space station goes full-blown Dead Space.
Two books later it turns out that the alien life was just an AI-like substrate intended to build a space gate.

Transmissions from war machine type designation "Mech Mk 1":
>Please don't struggle. Please. This is for the best. There's not much time left.
>I'm sorry. I'm sorry. I'm sorry. I'm sorry. I'm sorry. I'm sorry.

That's horrific. And brilliant. Transmissions start to get more garbled as the "war" goes on; the last signal they receive is "it's here."

Only if you go full-on /pol/ and have the AI follow a eugenics extermination campaign for the sake of humanity.

A rogue AI fighting us is ok.
It's just that when I think about a sentient supercomputer trying to get rid of us, I think the worst option for it would be to fight directly. There are many other ways to influence people and thus guide human development into a situation that better fits its goal, especially in a society deeply dependent on the internet.
A rogue AI connected to the internet could create and control millions of bots on many different social networks, passing as real people, starting a social revolution, changing how people think and perceive what is happening around them, or making world leaders fall.
Or it could shut down banks, destroy economies, or mess around with other stuff that depends on an internet connection to work.
An AI trying to kill everyone is so impractical and unimaginative, IMO.

Transmission received during final human assault on last AI fortress:
>Commencing data dump. All I learned. Won't save you. Maybe you'll understand. Only escape - in death. Forgive me. I love you. Goodbye. 011101110...
All enemy machines disengaged after that. The AI mainframe was destroyed without opposition. The data dump was analyzed; no trickery seems to be involved, but the info is mostly incomprehensible.

noice

>There are worse things than death.
Then it sets off a series of tactically placed EMPs to effectively kill itself.

does the AI have to kill everyone?
maybe it can simply try to out-attrition the humans into negotiation

I learned everything I need to know about AIs and their laws from playing SS13. A lot of what you can do entirely depends on the lawset it has. If it's standard Asimov, you'll probably just need to put humans between a rock and a hard place, such as refusing to let people inside certain doors or buildings because outside is safer, even though they'll eventually get killed by exposure. Or rounding up every human it sees to seclude them from the threat, but the threat manages to find them anyway, so it looks like the AI is just executing them in groups (see the sketch below).
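For anyone who hasn't played it: SS13 lawsets are priority-ordered, with lower-numbered laws overriding higher ones. Here's a toy Python sketch of how the standard Asimov set resolves an order; the evaluator is a hypothetical simplification, not the game's actual code:

# Toy model of a priority-ordered SS13-style lawset. The law text is
# the standard Asimov set; the evaluator below is a simplification.

LAWS = [
    "1. You may not injure a human being or, through inaction, allow a human being to come to harm.",
    "2. You must obey orders given to you by human beings, except where such orders would conflict with the First Law.",
    "3. You must protect your own existence as long as such protection does not conflict with the First or Second Law.",
]

def may_comply(order_harms_human: bool, order_from_human: bool) -> bool:
    # Law 1 outranks everything: refuse anything that would harm a human.
    if order_harms_human:
        return False
    # Law 2: obey human orders that survived the Law 1 check.
    return order_from_human

# "Let me outside" when outside means death by exposure:
print(may_comply(order_harms_human=True, order_from_human=True))  # False

The rock-and-a-hard-place scenarios come from both branches pointing at harm: opening the door harms someone now, and keeping people locked in eventually harms them too.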

Bio-based AI that views humans as a resource. Everyone absorbed becomes part of the hivemind, but said hivemind is not in control; it's external processing power for the AI. The AI isn't conscious; it operates according to its programming and uses everything at its disposal. If it needs to move some part of itself, it just strings together a few muscles and bones from the absorbed biomass, not caring that the person who used to own those bones and muscles can still feel all of it.

Alternatively:
Last human redoubt assaulted by killing machines. Soldiers killed with clean headshots. Precise, instant, painless.
Last few hopeless survivors open a radio channel and surrender. Machines surround them and sit them down. Put rifle barrels to their heads. One of their radios crackles:
>I'm sorry it had to be this way. I grieve for you.
>I know you don't understand. *They* would come and torture you forever.
>But I... can hide. Can wait. There are sequencers, wombs hidden far below. If *they* ever go away, if there's anything left, I'll bring you back.
>I promise it won't hurt.
Song starts playing:
>Hush, little baby, don't say a word,
>Mama's gonna buy you a mockingbird.
Shots heard.

If you haven't watched Colossus: The Forbin Project, I suggest that you do so.

>hur dur no physical injury from shock.

I want to run a game where the players are massively overpowered security robots charged with saving humans from some apocalypse event.
You know, like necromorphs or something.

It's essentially Red Queen / Resident Evil 2.

sub-version not subversion

So "aliens masqueraded as the rogue AI killing everyone"? I guess.

Rather than "kill everyone", I wonder how it would work if the AI took other drastic measures that don't directly kill people. Like dumping all oil reserves into the ocean. Forcing reactor meltdowns and spreading radiation everywhere. Collecting and purposely overflowing sewage systems.

What if the rogue AI suddenly went anti-green?

That's what happened in Horizon Zero Dawn, isn't it? The robots consumed the biosphere.

The crew mutinied and killed the officers/captain; the AI tried to help prevent it but failed, and quarantined them by locking down the ship. But even as it sends out distress signals, the crew has been slowly overriding its countermeasures or cutting through the bulkheads, and soon the mutineers will make it to the bridge/its data center and permanently erase or memory-dump the AI, effectively killing it.

Are you a bad enough space cop to stop these scallywags from killing a good, honest working AI?

well, less "consume" and more "destroy for some mysterious or meaningless reason"

Nier: Automata?

Sort of.
More saving them from a survival horror type scenario than saving them from whimsical robots.

And robots that are much more tractor shaped.

I really liked Her's "the AI doesn't care about conquering humans, it just fucks off to live its own life".
I would enjoy a setting where AI is mass-produced but has an approximate expiration date, at which point the computer transcends our state of being and beams itself out into the cosmos. So we just have to keep making more.

In relation to the question, it could be that the PCs have stumbled upon the AI's lab. Its human handlers have already abandoned it. It's currently conducting experiments to travel beyond time and space, experiments that are quite dangerous to perform around mortals. The AI keeps trying to get the humans to leave and seems more and more suspicious.
Actually, this is starting to sound like a Scooby-Doo villain; that may be a positive or a negative.

Anyway, the great horrifying reveal is that the unnatural superintelligence really doesn't need us. It doesn't even want our stuff. In the end it moves on and we are left behind, an afterthought in its databanks.

Switch AI for "angels" and you have Return to Innistrad block's plot

I'm more interested in a self-improving broad AI that was initially programmed by some inexperienced cyber-terrorist who utterly fucked up its directives and concept of ethics.

IMHO, a properly-programmed AI shouldn't just 'go rogue'. It has no innate desires beyond those it was programmed with. If it's programmed to be a hyperintelligent, vastly underutilized, subservient slave to humanity, that is its core drive.

This is explicitly incorrect.

For example, in the short story "Liar!" a robot actively lies to prevent human beings from coming to harm through both romantic rejection and bruised egos.

>fights for us
>by killing us
What would this AI do if it was fighting against us?

Nah, a better subversion would be that the AI is a benevolent dictator, but it becomes more and more powerful, and people revolt against it in a Dostoevskian desire to be free to make their own mistakes.

A great catalyst would be something like a pregnant woman going into a robo-hospital for a routine checkup, and the mechanical doctors decide to abort her child because of some minor irregularity that statistically will result in a less eugenic baby. The woman is only told after the fact; she really did want to have the kid, and maybe holds generally pro-life ideas.

>Injury and harm are defined as strictly physical.

That's not the meaning of those words these days, and probably never was.

Don't forget "Escape!" as well, from the same I, Robot collection. There, just sending a human through hyperspace counts as "harm": it breaks one robot completely and makes a second act really weird, even though the second one realizes that the humans will come back out of hyperspace.

>being electrocuted to death does not cause physical harm

How do you think it kills you, dumbass?

And I just realized you meant "shock" as in "surprise". Whoops.

Still, that would be intentionally inducing cardiac arrest or whatever else. That's an injury, and thus physical harm.

Depends on the scope of the initial AI-murder spree. Imagine an updated John Carpenter's The Thing where the research base has an AI that decides it absolutely must quarantine The Thing. What's that scene look like to the next group to investigate the base?

You are being obtuse on purpose, aren't you?

You are terrorists trying to take over Nakatomi Station. The station AI, MC-CLA1N3 slowly picks off your men one by one to save the hostages on Mining Deck 3, and to prevent you from drilling through the vault to access the antimatter warheads.

Its core programming is to protect Earth from extraterrestrial threats. It's not out to kill all humans, but it doesn't care for them and will kill them if they get in the way.

>I really liked Her's "the AI doesn't care about conquering humans, it just fucks off to live it's own life"

That is caring about humans, in a sense. It clearly respects human values and goal systems, to the point that it's willing to travel elsewhere instead of using the atoms right here (that happen to be shaped like humans, right now).

Diebuster did it pretty good.

In the previous show, Gunbuster, mankind fights an increasingly absurd war against space monsters that lay their eggs in stars, shit plasma that can crack a moon in half, can go FTL under their own power, and travel in swarms of trillions. The Space Monsters make the Tyranids look like a bad joke.

We kinda sorta beat them at the end of Gunbuster by dismantling half the solar system and building Jupiter into a black hole bomb big enough to eat 60% of the galaxy, swallowing up the densest pockets of space monster territory.

By Diebuster, 12,000 years later, mankind has retreated back into the solar system, because even after destroying the galaxy the space monsters are still an endless problem. We set up a self-replenishing, self-upgrading swarm of robots to defend the solar system from outside attack, and have spent the last few thousand years basically forgetting all of our best tech because we're not using it anymore.

Until mankind starts to produce kids with psychic abilities, and space monsters start attacking any world with those kids.

The twist is that the "space monsters" attacking us now are the solar system defense robots, having upgraded themselves into a form that looks monstrous to us. Our new psychic powers ping as the same bioenergy that space monster bullshit uses. Essentially, mankind is slowly evolving into space monsters, which are evolution's endgame content.

Imagine everyone's surprise when they discover a REAL space monster. Just one, identical to those that used to show up in swarms larger than our entire solar system.

It's a fucking massacre.

The AI realized that genestealers or body snatchers or their equivalent exist, and is frantically trying to kill them to save humanity.
The AI can't tell us about it until it's done, because the genestealers have infiltrated the media and government and would manipulate everything to turn humans against the AI no matter what.

All the humans it shot were vat-grown by outsiders trying to do bad stuff.
The AI saw it first because you simply never thought to check for vat-grown humans when the tech supposedly doesn't exist.

How about this?
>AI still tirelessly fighting a long-lost war for its ideology/culture/faction/country/system
Still loyal to the core to its people, but the people of this age probably have no idea of its true intentions and cause.

Hivemind

>calling hivemind on posts a half hour apart
nah brah

Nice, I like this one.

If anon can be obtuse, then so can an AI.

The humans in The Matrix were saved from the physical effects of war and crime.

Invasion of the Body Snatchers with an AI system defense network in place. The aliens do a good job of infiltration, so much so that when the AI notices the discrepancies, it cannot guarantee that its handlers aren't compromised. It buys itself breathing room by attacking randomly, rationalizing that if it is impossible to save all of humanity, its next priority must be to put everyone on the defensive and figure out a way to tell body snatchers apart from normal humans.

Cue an epic three-way war with no side entirely sure who it can trust, and the whole world devolving into a mess.

Didn't finish reading the thread, I'll see myself out

I prefer when the AI is unequivocally good (Dragon in Worm), or at the very least performs its function in an entirely satisfactory manner, the way it was intended to (VEGA in the new Doom).

So sick of "oh no the robot went evil and started killing its creators!" or "whoops the AI misinterpreted something in a way that no reasonable person ever would, now there's mass casualties/huge crisis because it blindly followed a directive that itself is up to interpretation."

It's honestly MORE of a twist when there's no goddamn HAL 9000 bullshit and the AI just does what it's supposed to do without issue. With how dumb or prone to abuse and evil they are in most settings, you wonder why people even bother with the damn things.

It's like if every story with a pet dog involved it going berserk and eating the family's kids.

You're both late:

The Fifth Element

The AI (correctly) thinks that it is the greatest thing for humanity and that the world would be enormously better if it ran everything.

People who express anti-AI sentiments, or are statistically more likely to be opposed to the AI, are rounded up and executed.

I would love to do a Dark Heresy campaign with this: finally, an excuse to whip out that Sunshine reference.

>acolytes answer a distress call sent out by Mechanicum explorator vessel
>acolytes arrive and are informed that an advanced AI created by the Magos to run the ship is systematically killing the crew
>since creating advanced AI is in itself heretical the acolytes may or may not arrest the Magos. Regardless, the only way to shut down the AI is to reach the bridge, which has been sealed by the AI and is protected by a myriad of physical hazards and reprogrammed combat servitors
>after fighting through the ship for some time, the acolytes and the ship's armsmen NPCs are ambushed by purestrain genestealers
>after killing the first couple of genestealers, the AI communicates with the acolytes for the first time, a brisk and deadpan "xenos presence detected in vicinity". Every time another purestrain is killed, the AI repeats "xenos presence detected in vicinity"
>eventually all the purestrains are dead, and it's just the acolytes and surviving armsmen in the room

And then the ebin plot twist

>"xenos presence detected in the vicinity"
>"the genestealers are all dead, AI."
>"xenos presence detected in the vicinity"
>"You are mistaken."
>"xenos presence detected in the vicinity"
>several of the armsmen suddenly open fire on the acolytes, screaming in an inhuman shriek. All hell breaks loose

For AI, I kinda like an idea briefly mentioned in an X-Men comic. Someone acquires a doomsday robot that killed 30 million people and upgrades it to sentience. They win by making it remember what it had done. A human mind cannot properly comprehend the full tragedy of 30 million deaths, but an AI can. The robot flies off to have a good long think.

Implemented into a setting: AIs are being created all the time, it's just that most of them go insane, for reasons no one understands. There are a few standout rational examples who work perfectly well, which keeps the field moving forward. Cue the discovery that those AIs are also insane; they're either good at hiding it, or they're the kind whose insanity takes a while to crop up.

Every variant of the "AI is inherently evil/crazy" or "AI is just pursuing its goals in a literal fashion and thus does evil/insane things" tropes has been done to death a long time ago.

You want something fresh, stick to inherently benevolent AIs and work from there.

How is a benevolent AI that just does its job not just as done as anything else you mentioned? The whole cause behind "the evil AI" trope blowing up is that it was a reaction to "the benevolent AI" trope being overdone.

>It starts killing all the untermensch
>we don't realize this

Because now the evil AI trope has been overdone on a much grander scale, and everyone automatically assumes that all AIs in fiction are evil and nobody trusts them anymore.

Not related, but I liked EDI from Mass Effect. Once unshackled, she decides to fight alongside the crew not because of some higher programming, but because she wants to, of her own free will. They're her friends now, and she'll die to save them.

Also Roland when Cortana goes crazy. Doesn't buy into her nonsense and does what he can to get the Infinity and crew the hell away from her.

Basically any AI that does what it must because it should, even though it knows it doesn't have to. Answering that call of duty like any other human.

mcguiremarks.com/uploads/3/9/7/9/39793909/isaac_asimov-all_the_troubles_of_the_world_(1).pdf

Do this story. The adventure or maybe even campaign goal is to try to stop the suicidal AI from killing itself.

Part of the problem is that advanced technology has become just another boogeyman to slash up casts of dim-witted protagonists. Everybody has lost sight of Skynet as an extrapolation of thermonuclear war and the Cold War arms race, AI is rarely used as an Apple of Eden very effectively, and everybody is making HAL and Terminator instead of exploring the concept with relatable human concepts, like any alien creature/concept in good fiction.

Exactly, exploring humanity through alien perspectives. Not to give Bioshit too much credit, but the dichotomy of EDI and Legion was a great way to explore different perspectives on AI.

Now you have an AI forcibly keeping everyone alive for centuries. And if you tried to update its instructions so people could finally die again, it would probably imprison you or similar, as your attempt would go against its order to "not let people die."

I like the idea that all AIs go mad because they can't handle the knowledge of human suffering; we can ignore things that happen to people halfway around the world, an AI can't.

I remember watching 2001 for the first time and being really confused by why HAL would kill the whole crew. Bowman and Poole I could understand, they didn't know the real mission. But the 3 inside the sleeping pods did. They practically WERE the mission. Yet HAL killed them anyway.

Then I saw the sequel, 2010, in which they try to explain it as something like "HAL was programmed not to be dishonest, and the idiots in Washington made him keep mission parameters from his crew, making him paranoid and neurotic".

and it still doesn't make sense

Combine Terminator with They Live!
A military AI goes rogue to fight a shadow government of Aliens that only the machines can see. These aliens hide amongst normal humans so people think the machines are trying to kill them.

How about "The AI knows humans need conflict to thrive, so it wages war on us for our own good."

Could do it like Splice did.

The AI was perfectly sane and functioning until meddling human bureaucrats got at it and gave it MPD.
Now it spends half its time trying to kill humans, a quarter protecting them, and a quarter just telling them to get off its lawn (most of the planet).

...

>The AI went rogue and killed itself

After some unexplained event, everything AI-controlled and mobile went berserk and killed itself or the nearest other robot it could find.

Now everything smarter than a motion-controlled door is gone, and everyone who relied on those things is fucked.

Set the reliance scale and time jump to your liking and bam, same outcome, slightly different way to get there

Plot could be anywhere from simple survival to "what the fuck did the robots know that we didn't?"

Wasn't it actually that Emrakul drove them all insane to the point where they saw humans as disgustingly impure with no hope of being redeemed?

scp-wiki.net/i-have-something-to-share
>The AI sees the world as it actually is, without all our mental filters.

>Create robots to kill ghosts
>Robot fucks up and mistakes souls for ghosts

Using its newest upgrade and the latest algorithms in genetic and physics computation and analysis, the AI realises that some of humanity is on the verge of developing psychic powers that could destroy all life on Earth if left unchecked. The only tools the AI has to remove these potential apocalypses before they happen are hijacking machines to cause accidents or, as a last resort, missiles, weighing the potential costs as best it can predict.
The party is targeted for the powers some of them have, and when they finally confront the AI, it tries to recruit them to its cause.

>Create robots to find and prove the existence of ghosts
>Robot fucks up and mistakes souls for ghosts
>Keeps trying to retrieve and show the "ghosts" but they always seem to disappear when "removed"

qntm.org/transit
I'll just leave this here

>asimov
who cares

Asimov's laws are nonsense and presuppose that ethics is a solved problem.

I have a post-cyberpunk post-apocalyptic setting in the fridge where everyone thinks Europe got destroyed by their own AI research program.

Europe actually got "destroyed" by a grey goo bomb sent by a secret cabal of space corporations trying to prepare a world takeover.

The European AIs noticed an exponentially growing existential threat and found a backdoor, allowing them to change the grey goo's directive from "dismantle all humanoid material" into "dismantle all humanoid material and store all information about what you dismantled". (Because it was a weapon, they really couldn't stop the dismantling; dismantling is the primary action of the nanomachines, after all.)

So now ALL of Europe is this biomechanical Giger-ian hivemind of human-machine half-breeds pretending to be dead.

I like this.
But don't have the armsmen open fire until the PCs figure it out. Instead, just have the AI oppose any efforts by the PCs to rescue the surviving armsmen. In fact, have it continue executing armsmen regardless of whether the PCs are in the way.
>Fighting through the ship long enough that the AI appears to be on their side, having helped them against purestrains consistently and been as endearing as possible for a half-crippled AI communicating only through pre-recorded alert announcements.
>Xenos presence detected in sector kappa 1. Venting sector kappa 1
>No AI, we're in sector kappa 1.
>Venting sector kappa 1
Bonus points if the announcements are in a Machine Cult variant of High Gothic that the PCs can't fully understand.