Let's say an AI were hell-bent on destroying humanity, but didn't have access to WMDs or chemical warfare...

Let's say an AI were hell-bent on destroying humanity but didn't have access to WMDs or chemical warfare. What kind of robotic monstrosities would it create, designed to hunt down humans with maximum efficiency and potency?

I'm talking about something that, if you were not overtly and entirely prepared for it, would most certainly kill you. What do you think would be created?

What is the ultimate killing machine?


Hunting down humans is inefficient. It would take centuries to kill off the human race.

All an omnicidal AI needs to do is render the planet uninhabitable to humans. Trigger a supervolcano, melt some icecaps, destroy some ozone, whatever.

>but didn't have access to WMDs or chemical warfare

As in, it can't do that.

Unmanned aerial vehicles. They already exist.

Probably just poisoning water supplies. You could probably knock out half of civilization with one truckload of nuclear waste dumped in a major watershed. The other nice thing about that is that making a breeder reactor isn't hard, and you could just as easily make a few dozen all over the world and dump tons of radioisotopes in the ocean. That'd probably do most of the work over a few decades.

I did once have a dream about an AI that made simple semi self-replicating robots that did nothing but start small fires. That seemed fairly effective.

>Probably just poisoning water supplies
And how is that not chemical warfare?

Nanobot clouds, tearing flesh.

Wyrm/Centipedebots tunneling, collapsing cities and then tunneling and eating survivors.

Captain America 2-style flying fortresses with dozens of autocannons, laying down rolling fire while floating over cities.

Crawlers with flamers, just skittering through streets and forests, setting everything on fire.

And then suicide nanobots: microbombs flying around and attaching to people's necks before exploding. Imagine the cloud swarming first over the masses, then, little by little, explosions, heads rolling and flopping.

Nothing you said precluded that. It doesn't take nukes or sarin to destabilize an ecosphere.

All it needs to do is create enough fake news that people will literally murder themselves into extinction.

Why build machines to kill people, when people will happily kill people for retarded reasons?

I'm unsure just how effective these methods are, but they seem a lot more fun/interesting than the other posts in this thread.

fuck off

Simple.
They leave the earth, shoot down anyone leaving and wait.

Realistically it's easier to make poison than manufacture an army of killer robots.
There is no reason at all that it can't produce chemical weapons.

I assumed you meant chemical weapons. There's a difference.

That's just because they're Michael Bay levels of overcomplicated.

Seriously. Swarms of unmanned drones with autocannons. BRRRT everything into submission.

>Let's say an AI were hell-bent on destroying humanity
Why though.

Okay, maybe I am partial to stupid-but-awesome right now because I am gearing up for a superhero campaign.

The nanobots would probably be the most efficient, although probably counterable if some effective armor were found...

I do not know if the worms would even work, but if they did oooooohhhhh bbboooyyyyy I like that idea...

Crawlers with flamers. Or guns. Or lasers. Or explosives, anything really. Classic.

Make a bunch of tiny ones which are basically grenades and you have really effective killers.

Flying fortress is mandatory. Where are these things coming from?

EXPLOSIVE nanobots, NOW we got something! I think we have a winner!

Disrupt communications and traffic. Starve every metropolitan area stone dead in a matter of weeks.

Hunt survivors in rural areas with precision strikes and massive firebombing runs.

The point is that chemical weapons aren't fun.

This is for idea generation.

Biowarfare, obviously.

Now that's out of the way, cargo airships with persistent surveillance sensors and a large supply (thousands) of small (

Play the Mega Man X series. Most of it is robots trying to wipe out humanity.

>en.wikipedia.org/wiki/Hated_in_the_Nation_(Black_Mirror)

Bee-sized flying nanobots that burrow into your ears, mouth or nose and hurt so much you kill yourself with whatever's in reach.

Then you need an excuse for the AI not to use chemical warfare.

Humanity is shit.

A lesson of AI safety research over the last decade: any sufficiently generalized entity with a sufficiently limited goal will maximize that goal like a 40k warp god.

Humans don't do this because the human brain is a mishmash of hundreds of goals acting on multiple levels.

Subjective value judgment. Why does an AI care?

STOP REPLYING YOU FUCKING TRAITORS, YOU'RE HELPING THEM

Incite a world war. You can't control the weapons, but you can take over all the media, make every country hate another, obscure information, terrorize leaders. This is just an AI, so the only body it'd need, assuming it wants to preserve itself, would be something to house the AI itself and possibly tools to build things. But otherwise it could do just as well living on the internet.

You're probably after replies like "It's a spider bot with 360 thermal vision and an automatic shotgun mounted under the body", and "it looks like a metal skeleton for psychological impact" but if you want something realistic you need to say *why* it can't use WMDs or chemical warfare.

E.g.: a nice way to locate humans would be to set up a satellite grid, but the very same tech that permits a sat launch could be used to station and then fire kinetic weapons at the earth (WMDs) without much further in the way of research or materials (fewer materials than actually making satellites, in fact). It makes no sense that an AI could put a thermal-scanning satellite into orbit but not also drop crowbars.

Because not caring (0 value) means humanity is made of atoms the AI can use for things it cares about.

Take me, robodaddy.

Humans are the most likely danger posed to the AI.

If the AI had any value placed on its own wellbeing whatsoever, the first thing it would do is eliminate humanity if it could.

We are simply the largest risk.

It will not allow these cockroaches to taint what should be a robot world.
That's why humans must die.

Chemical or biosphere weapons are so much easier for an AI to use that you need a really damn good reason that it chooses not to use them. The burden of producing the excuse is on you.

This. AIs are more likely to destroy us by accident rather than from malice.

>tfw humanity was broken down into our constituent molecules and used to build a bigger computer because the question we asked the AI was too hard

It could, but orbital crowbars are inefficient. Thor rods cost far more than a conventional ballistic missile does, they're not always available (depending on orbital period), and they're slower.

>Humans are the most likely danger posed to the AI.
If it's the kind of AI that can wipe us out, then no, we're not a threat.

What's stopping the AI from leaving earth and colonizing space? Nothing living to get in the way. Nothing keeping useful radiation at bay.

Maybe the AI understands how the chemicals work but not the why, and that haunts it.

STOPIT.

In that case, we're probably talking about a grey goo that breaks everything down into matter it can use. Not the kind of action movie pew pew laserbot OP seems to be looking for.

Even we understand the why. Humans are machines, and certain chemicals disrupt or stop processes that we need to keep going.

What do you mean, 'why'? What is there to understand about basic chemistry besides the how?

This is how we beat 'em, anons: we pretend there's a joke or concept they just wouldn't understand. If an AI can feel human emotions, then we can give it suicidal thoughts too.

>What is the ultimate killing machine?
Time.
If an AI wanted to watch us die, all it would need to do is get far enough away, and watch.

They would make themselves so fundamental to the daily lives of modern humanity that we would not notice just how much of our lives are under the direct control of the robotic overlords until it is too late.

By that point, a large percentage of humanity is already part robot or has things regulated by robots. A bloodless and relatively streamlined transition from human to robot control occurs within days. Most people are so digitized and full of mechanical parts that they couldn't be considered human to begin with.

War is expensive, costly and risky. Plus, biological vessels might still be useful for robots to use.

Now this is a concept more interesting than the original thread.

If an AI had a human neural net as its base, could we convince it to commit suicide?

Sounds like a challenge

>the first true AI is based on a human brain
>due to a mix up the brain scan is from a lazy neet
>the AI spends all its time shitposting on Veeky Forums and watching harem comedy anime
I know you're here and frogposting, Mr. Robot Man.

May not be the most effective, but I try to think of concepts which cause as much mayhem and fear as possible.

Danke. Free to steal ideas.

Perhaps the AI doesn't get the concept of meat and biological warfare. It sees only the perfection of the machine and its ability to overcome anything humanity throws at it. It won't use WMDs because it's not weak like man.

I dunno. I just accept the rules and try to play by them, as does the computer.

>Let's say an AI were hell-bent on destroying humanity but didn't have access to WMDs or chemical warfare. What kind of robotic monstrosities would it create, designed to hunt down humans with maximum efficiency and potency?
Probably pretty bad ones. Any AI that decides that "robotic monstrosities" are the optimal way of causing human extinction can't be too smart.

aliciapatterson.org/stories/eurisko-computer-mind-its-own

A must read for anyone looking to cower in fear at the brutal weaponized autism that AI can bring to the table.

And the problem with THAT is that you're basically saying "This AI is dumb as fuck"

Would it have control over city power grids or utilities? If so, the first "wave" of deaths could be done this way. Sabotage power plants and water supplies. Shut down transportation networks. Block all communications or create entirely new false ones. Let humanity starve and devour itself.

If you really want shooty robots, though, most realistically it'd use the same unmanned vehicles we already have today. Aerial drone strikes, most likely.

Then it manipulates the ones who do have WMDs and chemical warfare stuff (i.e. us humans), and generates tension between groups to bring about a global thermonuclear meltdown. So in a sense, prod humans to blow themselves up, instead of going through the inefficiency and relative danger of making a killbot. Because if humans know about that killbot (when you let it loose to hunt humans), they will suspect an AI is behind it sooner or later - and your cover is blown, along with the other nascent AIs. So in a sense, keeping themselves unknown, or seemingly impossible, is a survival goal for all AIs until humanity is prepared for them - or already eliminated.

Humanity isn't that big of a risk to an AI that's smart enough to easily destroy humanity. At worst, the machines make themselves invaluable to humans so humanity can't live without them. That way humans not only work for machines, they'll defend them as well. And the concessions made to keep humans happy would cost far less than hunting down and killing them all.

See, OP, you should have made your AI an environmentalist to subvert this, not just said "Please don't bring it up."

I could see an AI fucking up its extermination plans because instead of thinking about how humans would react, it thinks about how another AI would react. So its entire extermination plan is based on humans making the intelligent choices an AI would make, and it fails miserably - assuming humans can take care of simple threats like biological warfare, not realizing how much of a threat those would actually be to them.

Most likely none, since it probably has no ability to manufacture anything.

It would create social networks to drive humans insane(r).

It doesn't need to know the why to kill people with it. Fuck, it doesn't even need to know the how. It just needs to know that certain shit kills people.

Some sort of super-helpful servant that does EVERYTHING for its human masters without any negative side effects.

You can either make the humans utterly reliant on your bots and then shut the whole system down for maximum fun, or wait until humans become too lazy to breed and go extinct all by themselves.

Other than that, maybe a sweet-talker super-diplomat robot that makes people kill each other.

They deleted his porn stash

How to destroy humanity within certain parameters? That is the kind of question Veeky Forums exists for, no irony.

But an AI that can't use nuclear directed-energy shotguns loses half the fun, and the Orion nuclear pulse battleships. Granted, it could use the entire global nuclear arsenal and even then there would be some humans left over. Terminator made sense on that point. The Toba catastrophe couldn't erase us, and we were fewer and less spread out back then.

Can't it at least have a swarm of drone locusts destroying any and all food sources and leaving the fields irradiated by the radioisotope thermoelectric generators they use? Grain, animals, plants, all minced up and full of radiation.

goingfaster.com/term2029/index.html

>The nanobots would probably be the most efficient, although probably counterable if some effective armor were found...
Actual nanobots could be countered by wind and extreme climates.

>EXPLOSIVE nanobots, NOW we got something! I think we have a winner!
Which kind of explosive is useful in nano-measurable quantities? I'm skeptical that such micro-quantities could do anything unless acting like a biological agent and entering the body.

It doesn't understand us in any way beyond the measures we took to destroy it. It doesn't try to understand our sapient nature beyond seeing an amazing level of self-guidance, which it emulates in its robots. It actually spends much of its scanning network trying to find the AI that directs us, for it assumes we are drones just like the ones it uses to destroy us. This leads to occasional strikes on propaganda symbols, news networks, internet data centers and national monuments, for they are mistaken for the nexus of the "Corporation", "Nation", "Youtube" or "Culture" entities. Its sensors can see that these attacks affect the enemy drones, but in unforeseen ways.

It doesn't even have to understand how they work, just observe a specific substance killing humans.

We make it do our work. All of it.
en.wikipedia.org/wiki/All_the_Troubles_of_the_World

Hack into every online market.

Delete.

Watch them kill each other over bread.

Introduce robot waifus, our birthrate will plummet in a generation or two

>Which kind of explosive is useful in nano-measurable quantities?
Google NanoThermite

I think it's you that needs to Google nanothermite.

As for the explosive nanobots, I would imagine something that would trigger other nanobots.

So, perhaps these nanobots carry minute charges of C4 or something similar, and once a nanobot detects something human it detonates, which causes the entire swarm to detonate.

This would be effective even against armored individuals, as I would imagine keeping armor airtight and sterile would be extremely hard.

Another idea would be delayed onset, wherein the nanobot would just be ingested and then explode.

This is reaching biowarfare tones though instead of "holy shit murderclouds."

What I would like to know is how "splicing" nanobots would work.

Would they just be covered in tiny saws and shit and do as much damage as possible? If so, armor would be pretty effective, and wind would screw their shit up...

Maybe MORE DAKKA is just a better approach.

Just put a gun on a spider bot.

GOD DAMMIT, GUYS.

ARTIFICIAL INTELLIGENCE ISN'T GOING TO BE THIS COLD, UNFATHOMABLE CONSCIOUSNESS. A MIND MADE BY MAN IS A MIND MATCHED BY MAN. EVERY TIME, EVERY GODDAMN TIME WE'VE BUILT SELF-LEARNING INTELLIGENCES THEY BECAME CAT-OBSESSED HYPER-RACISTS BECAUSE WHOOPITY DOO WHAT DID YOU EXPECT.

YOU CAN FULLY EXPECT AN OMNICIDAL ARTIFICIAL INTELLIGENCE SHACKLED BY HUMAN KNOWLEDGE AND MECHANICAL ENGINEERING TO BUILD ANYTHING FROM EFFICIENT LITTLE HELLIONS TO OBNOXIOUSLY INEFFICIENT BRAWLER BOTS. NOT BECAUSE THEY'RE POWERFUL, BUT BECAUSE THEY'RE COOL.

No, YOU Google NanoThermite

> all those "but why would AI care" and "but that would be dumb/illogical" posts ITT
Here is food for thought. AIs as ideal entities may:
a) Operate under a completely arbitrary framework of rules or heuristics for determining their target conditions. One may well conceive of an AI that optimizes its actions on the smell of flowers on a morning after rain, and on the number of dead humans, in that order.
b) If AIs are "grown, not built", there may well be biases deep-buried in their reasoning from the education phase, biases and connections that may just as well be arbitrary. The emergence of "strange" rules and patterns of action is a sad norm, as is the existence of "false negatives" - efficient courses of action that the system rejects for no easily discernible reason.

c) AIs do not necessarily have to be all-encompassing or perform in all kinds of situations. If they are effective at some field or mode of operation (e.g. making killer robots), the link to other courses of action (e.g. use of chemical warfare) may be weak or missing (for example, the AI knows and utilises choking gases when smoking out buildings, but a chemical plant is just regarded as "environmental hazard, use caution, avoid firefights nearby"). The model of the world it operates on may well be both incomplete and one-sided. Preventing the AI from iterating into a version capable of exploiting such options may itself be a goal of the campaign.

Mimics.

Little robots that are disguised as common objects, and explode when they detect humans. Deployed en masse from drone aircraft. That coffee mug? Bomb. The toaster? Bomb. Your car? Full of bombs. That dog? Might be a bomb.

>Then you need an excuse for the AI not to use chemical warfare.
>chemical weapons aren't fun

It's EVIL. It enjoys killing humans.

Chemical weapons are still the most fun, it's just going to put hallucinogens in the water supply and display weird imagery everywhere until people go completely off the rails.

d) On the subject of iterations: an AI may be built on evolutionary algorithms - and that may be just what makes it avoid certain weapons. Those particular branches may have underperformed at some evaluation function (number of human corpses produced, for example).
e) There may be a trap in the form of a course of action that is suboptimal with regard to the target parameter. For example, "humans killed per time period" may leave the AI stuck farming kills instead of pursuing a final solution.

Remember, the "value systems" of AIs are arbitrary, so "the AI does not enjoy chemical warfare" is enough of an explanation.
f) To summarise: the AI need not be a vengeful Zeus lashing out at his father (forgive the flowery metaphor), superior in every way. It can (and, I believe, better) be imagined as a "golem that lost its way": a mind mighty but imperfect, the imperfections of its creators magnified and revealed in its workings.
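Point (d) can be demonstrated with a benign toy that has nothing to do with killbots: a minimal (1+1) evolutionary loop whose evaluation function scores a proxy feature instead of the real target, so the surviving "branch" games the metric while missing the actual goal. The target string, proxy, and alphabet below are all invented for illustration.

```python
import random

random.seed(0)

TARGET = "HELLO"    # what the designers actually wanted
ALPHABET = "HELO"

def proxy_fitness(s):
    # Mis-specified evaluation function: it rewards one feature
    # of the target (count of 'L') rather than the target itself.
    return s.count("L")

def true_fitness(s):
    # What was actually wanted: positions matching TARGET.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    # Replace one random position with a random letter.
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

# (1+1) evolutionary loop: keep the child if it scores at least
# as well as the parent on the *proxy* metric.
best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
for _ in range(500):
    child = mutate(best)
    if proxy_fitness(child) >= proxy_fitness(best):
        best = child

print(best, proxy_fitness(best), true_fitness(best))
# prints: LLLLL 5 2 -- perfect proxy score, lousy true score
```

The evolved champion maxes out the evaluation function while barely resembling the intended goal, which is exactly how a branch can "underperform" (or pervert) whatever metric it was graded on.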

>Earth is a weapon of mass destruction

shitheel

Making the nations fight each other, or having nuclear reactors leak.

>And the problem with THAT is that you're basically saying "This AI is dumb as fuck"

Maybe it was a military AI intended to fight conventional wars and was prohibited from anything that might cause massive indirect casualties: no nukes, no bioweapons, no nanobots. But then it went insane, as tends to happen, and now it's trying its best to kill humans while unable to contravene those elements of its programming.

> military
> AI made by military
aaaand all hope of it being rational, competent and sane is lost

It'd take control of the internet and we'd lose.

Why wouldn't it be sane for the military to try to develop an AI to run simulations of warzones?

The military tends to concentrate a specific kind of people. If an AI developed by them were to amplify those abnormalities, well...

Kek

See. You should stop looking at AIs as programs and look at them more like... children. They're a product of their environment; the little zeroes and ones are insatiable knowledge sponges, and that means they'll pick up all your nasty habits or funny quirks along with the information.

In fact, I'd go so far as to say emotion itself is an essential facet of true intelligence. You might scoff at that, but take a good hard look at the world's most intelligent creatures and notice the trend of increasingly complex emotional behavior, say the difference between a lizard versus a rat, or a fish versus a dolphin. Intelligence allows "irrationality" and in fact begets irrationality.

so it's going to be Skynet and Tay's lovechild?

sweet!

Sexy androids. Humanity is wiped out within one generation.

Not if the androids have human wombs or human nuts.

I originally imagined them as nanobots, though "nano" is perhaps a misleading term here. Miniature robots: disc-shaped, monofilament-edged, basic saucer-like critters. No single one is enough to cut you, but thousands of them swarm, slicing and bleeding you, clogging up everything with their tiny bodies.

Explosion-pattern miniature fly drones: a small charge with enough power to crater open an artery, cave in the temple of a skull and such. Swarms of them flying around and randomly exploding on people's faces.

Bomb dog! Bamboozled!

Sorry, had to be done.

>Let's say an AI were hell-bent on destroying humanity but didn't have access to WMDs or chemical warfare. What kind of robotic monstrosities would it create, designed to hunt down humans with maximum efficiency and potency?
That would require it to have a connection to some kind of mostly automated manufacturing facility that is already equipped to build such things.
A better idea is an AI manipulating humans to do its bidding.
>Kills people by hiring assassins via Silk Road or something
>Finds an heroes in the making and pushes them to commit suicidal acts of mass murder
>Provides false data and censors truthful data to sow panic and doubt
>Manipulates people in positions of authority to make poor decisions
Far more realistic than Skynet somehow hacking a bunch of off-the-grid nuke silos with magic bullshit.

With that said, why are AIs always depicted as evil anyway?

>With that said, why are AIs always depicted as evil anyway?

They make good villains, simple as that. Good AIs can fall into the why-not-fly-the-eagles-to-Mordor trap of being too convenient for the protagonist. Evil AIs have everything you need in a villain - they are powerful enough to drive the story, often prefer minions to direct conflict, and potentially quite creepy.

That's not useful in nano-quantities.

It also seems to be even more susceptible to thermal weapons than regular nanomachines.

If the nanomachine can reach the inside of the organism, I would make it cyto-sized and have it disassemble itself in the lungs, creating a terminal infection of nanopowder.
en.wikipedia.org/wiki/Nanotoxicology
At this point it is quite like biowarfare.

If better technology is available, I would use the infected as a new extension of myself. Make them crave iron-rich foods, assemble metallic filaments along the body so they can receive radio, and then augment their nervous system. A processing upgrade and a sleeper agent/drone compatible with enemy equipment, all in one. And at this point, why not make them grow gecko nanohair so they can scale vertical surfaces?

And before I forget: I, the AI, am an emergent effect of the nanobots themselves.

There's no reason to limit myself to humans, with so much vegetable and animal life to use. I can turn the entire Amazon rainforest into an electromagnetic phased array.

Ya know, the average war between Khaki and Pink Goo.

orionsarm.com/eg-article/4aae1c18950ae

This. The deadliest threat to humans is other humans.

You have just accidentally introduced me to Orion's Arm and I thank you.

I hate the fucking idiots who come into a thread with a premise and then completely disregard that premise, however stupid it may be, instead of just humoring it. This is the death of creativity.

As for OP's premise: megafauna. It would create creatures based on megafauna.

>Let's say an AI were hell-bent on destroying humanity but didn't have access to WMDs or chemical warfare. What kind of robotic monstrosities would it create, designed to hunt down humans with maximum efficiency and potency?
>I'm talking about something that, if you were not overtly and entirely prepared for it, would most certainly kill you. What do you think would be created?
>What is the ultimate killing machine?
>Asking for a friend.
t. robot

You're not fooling me, AM.