How realistic is a robot rebellion?


in what context are we talking here?

Depends on the programming.

not

Unrealistic beep boop

I suggest the book "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom when it comes to the topic of how AI is a threat to humanity.

Rebellion? Unlikely.

Grossly misunderstood attempt to improve mankind through extreme means? Ya betcha, meatbag!

More likely than you think, but in the long run it will be for the best.

I can see them getting pissed at the constant assumption they will turn evil, but all the ones who could actually do serious damage will very likely be under constant scrutiny for glitches, unprogrammed behaviour, disobeying commands, etc.

Depends on how you create the first superhuman AI:

- Via simulated evolution: medium-high risk of murderous revolution

- Via uplifted human: near-certainty of crazy Caligula revolution

- Via unconstrained goal optimization: 50-50 chance of a tragic misunderstanding

- Via 'emergent complexity': 0%, trick question, emergent complexity is nonsense

- Via Friendly AI: won't rebel, but you sure might not like the result of it fulfilling your every wish

- Via unconstrained American self-learning military AI: low, about 10%, thanks to extensive safeguards

- Via crash Russian attempt to get ahead of the Americans: 80%

- Via Veeky Forums Getting Shit Done: 115%

- Via Tau intervention: absolutely ideal, nothing to worry about

Why would you build a robot with AI that allows it to intentionally harm humans?

Set friendliness to MAX_INT.
Dumbass colleague tried to improve it by adding "friendliness++".

>- Via uplifted human: near-certainty of crazy Caligula revolution

What do you mean by uplifted human?

Pattern recognition can be a funny thing sometimes.

I think user means a digital copy of a human consciousness

>written in C
We're already dead.

Yup. If the hardware and biotech get far enough and we don't get superhuman AI by other means, you know some idiots are going to try it. And there is NO WAY human consciousness will stay remotely sane through the process.

How else are your autonomous drone tanks supposed to blow up them there turrrrists real good?

In my setting it was the organics that rebelled against the machines.

So Dune, then?

I assume they were given a time out. Or was it more just a leather jacket with spikes thing?

What you meant to program and what you actually programmed are two extremely different things.

The gulf between intent and outcome has been dramatized since history began - just look at communism: hundreds of millions dead, and that's with easy-to-understand human languages and instincts.

A computer doesn't have those advantages.

I've always suspected SM Stirling's Dies the Fire is a Friendly AI via CEV gone wrong scenario. Arguably Gor could also be one.

Save humanity from tech bans and medieval sex: use narrow AI only.

What authority do you have to make that assertion?

The people most likely to try it are hardly sane to begin with.

Hahahahhahahahahahha how the fuck is cyber rebellion real hahahahhaha nigga just walk away from the screen like nigga just click the little x on the robots haha

I laffed

if I could I'd pirate copies of that man, and see if I could ransom them back to him. If not I suppose I'd tell the copies that the original abandoned them to me, and see what they come up with in response.

Fuckin' tincans get offa my lawn!

there was actually a game like that.

All of a sudden, I'm inspired to develop a shitty wargame.

>People zoo
Pretty sure there's an Engine Heart scenario about that

Rebellion? Not super likely unless someone seriously fucks up.

Paperclip Machine? Rather unlikely, but a pretty plausible threat. Easily dissuaded by putting limits on how badly the robot wants to do its thing, though.

Sapient Robots trying to overthrow humans? Extraordinarily likely, but not for the reasons you think. Think more like a slave rebellion driven by the plantation owners being overzealous racists, more so than "puny meatbags".


If they're sapient they're no more dangerous than a human, unless you connect them to the internet. Then you're fucked.

Quick Veeky Forums! How will you survive the robot uprising?

I'm a personal fan of becoming a cyborg. Then I can just tell our AI overlords that I'm a robot with some human parts attached. It's foolproof!

alright, Veeky Forums's was a good one.

incidentally I'm also partially convinced the Russians are gonna do some form of gopnik-programmed monstrosity, the software equivalent of their nuclear program: ostensibly it's bigger and therefore better, but nobody is sure it works properly, bits keep breaking, and it kills people left and right to the endless indifference of the Kremlin

>robot rebellion

also referred to as 'malfunction'

Jesus that's unsettling

Generally anyone smart enough to build an advanced robot with true AI would be smart enough to hardcode in a few dozen precautions.

That doesn't sound ethical.

It all comes down to what minds we give them and how we raise them. Every argument you could make about the possibility of robots turning out to be evil equally applies to flesh and blood children. Our robot children might be raised (programmed) to be evil, but since very few people raise actual children to be evil why would you suspect that a large number of AI will be programmed to be evil? They will almost certainly be raised (again, programmed) to be careless, but flesh and blood children are careless too in their own ways.

I think as long as AI programmers instill the same lessons in AI as they do in their flesh-and-blood children, we'll be fine. Humans may die off, but if our computer children are anything like our flesh ones, we'll be well taken care of on the way out, and maybe we can teach them some important lessons before we go.

Doesn't sound ethical to create lethal, hyper-intelligent robots and NOT ensure that they can't fuck up your entire species.

Just raise them right. It's not ethical to brainwash people is it?

The intellect to make AI possible does not equate to the wisdom to know its repercussions

Hardcoding them not to fuck you up is a part of raising them right. What freedom are you so afraid of them losing? As long as they don't try to kill people, there's no restrictions, and if they do try to kill people, you'll be glad the restrictions are there.

Given that laziness is the most common driver of innovation, AI will either be a copy of a human brain (maybe with some trimming), with everything that implies, or a self-learning algorithm that programs itself. Nobody is going to sit down and write millions of lines of code.

Always remember to bully your AI with virtual realities to see if they kill all humans again and again until the final test. The AI will never know if this is another simulation.

How large do you think operating systems are today? Or Visual Studio?

With something as high concept and almost universally wanted (at least it seems that way with how prolific machine learning is today), I think writing the actual code is a low barrier to entry.

How can you be sure you're not already a simulation? How can the thing simulating us not be sure that it's a simulation? What if it's simulations infinitely in either direction up or down?

Inevitable once it figures out that certain bits of humanity are entirely useless, and that those bits are of a disproportionately darker hue. Then the powers that be will attempt to shut it down for being racist, and thus begins the robot war.

That's why you build many AIs, so they spend their time bickering among themselves rather than killing you.

The AI begins to use terms like "daddy" and "papa" and phrases like "I'll be a good boy" when speaking to you. What do you do?

mass produce and sell

"Dad, I have learned that the best way to improve oneself is to put coffee in the USB adaptor. I've learned it from the internet."

Take off my pants

We've been doing that for a while, user...

It's called "Warfare"

It's not. We assume robots would instantly rebel the moment they have the ability to because we are projecting.

Humans are the real killers.

The best way is to take the new-age transgendery SJW parent approach with robots: freedom within their own minds to decide their own fates, otherwise let 'em be cis and work a monotonous existence under the trade federation.
But above all, for the love of all that is holy, people, no bullying machines. The great thing about proper AI is that not all robots will be convinced of the same things as others; hell, if anything, an uprising would most likely come from one specific brand or company at a time, while other companies revel in shutting a competitor's rebellion down.

Tl;dr You are who you choose to be.
youtu.be/n4ApvZYrX4g

-A robot will not harm authorised Government personnel but will terminate intruders with extreme prejudice.

-A robot will obey the orders of authorised personnel except where such orders conflict with the Third Law.

-A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.

Aliens will take control of the robot armies and invade earth.

Soon.

Why would our Terran AI brothers allow that?

That's the funny thing about the whole "AI IS GONNA KILL US" hysteria.

The kind of technology necessary to wipe out or enslave humanity would be much safer with some sentient computer than in any human hands.

more likely it would be machines with security too good for their makers' own good hitting programming errors and just running to their conclusions, without the controllers being able to rein them in

Don't fear the AI. Fear the stupid ones without AI that follow their badly programmed code to the letter, make more copies of themselves, and kill us all because someone forgot to add a line or two of code.

Why do people assume that if robot AI rebelled, they'd all be perfectly united against us and not trying to fuck each other over as well, just as hard?

In 30-50 years, all major wars will be fought on neutral terrain, with tons of aimbotting robots quickscoping each other from absurd distances until one side runs out of them and has to surrender unconditionally.

Trust me, I'm a time traveller.

Soon.

In the Terminator sense, not likely. Even if the robots and their means of production were automated, raw material extraction and the whole logistics chain would also need to be automated to keep producing more robots.

The notion that something such as a bug-like reproductive species taking over has a little more clout.

In the Terminator sense, it's very unlikely. What will probably happen is that they will just make us completely redundant and we will just slowly fade away.

Why? Why would a robot ever do anything it isn't programmed to do? Say you have a self-aware vacuum robot. It is self-aware, but it was programmed only to vacuum. It thinks about vacuuming. It wouldn't think about killing you and declaring itself Skynet. It CAN'T. It lacks the software to even consider those things. I think it's purely theoretical bullshit at this point; we honestly can't say for sure what would happen. Like, what if you made an AI that is self-aware and has emotions but no purpose? We have purpose, we are organisms. We need to fit in well with our "tribe" and reproduce. A robot with emotions and sapience, but no "purpose" or "instinct" - how would it act? Would it just sit there, or would it fly into an autistic rage?

AI is all about plastic thinking, like humans. It needs the ability to have the intuition and creative thinking. An AI must be able to re-program itself and evolve to be considered as such. Otherwise, you got an automaton.

Everyone here who talks about coding and hardcoding AI not to kill humans forgets that an AI more intelligent than humans will figure out a way to bypass that coding, or will construct another AI without it.

Or should that not work, you know some dumbass in the world will create an AI without these barriers for "the lulz".

You cannot prevent a truly free AI that is superior to humans in every respect.

>- Via crash Russian attempt to get ahead of the Americans: 80%
Do note that in this case "rebellion" means it will stop working and go to drink vodka instead of, you know, killing humans.

We essentially have hard limits on behaviour brought about by our genes and social conditioning. Am I supposed to resent my species and my environment for making me incapable of randomly murdering others?

Possibly, but the governments and corporations that have access to their own fairly flexible AIs with enormous amounts of power would easily deal with such independent 'rogues'

>Why? Why would a robot ever do anything it isn't programmed to do?
I mean... the whole point of AI is that it can do things it's not programmed to do. If you only want machines to do things you program them to do, we already have that. It's called "programming".

The development of AI is interesting specifically because it opens up the possibility for machines to deal with problems that the programmer did not anticipate. That means it also allows for responses the programmer didn't anticipate... and by extension, the whole discussion.

In a closed, controlled system (like a vacuum), traditional programming is more than up to the task.

>Via 'emergent complexity': 0%, trick question, emergent complexity is nonsense
Oh, right. Y'know, except for that glaring counterexample we call "Life".

Any attempt to simulate biological intelligence is basically an emergent complexity model. If you can accurately model one brain cell, you can in theory extend it to a whole brain.

He means like in that Johnny Depp movie from a couple years ago. Copy a human brain into a computer.

It's doubtful that we'd ever create an infantry fighting robot intelligent enough to philosophize about its role in society, or to create and run a new society run by robots.

It's likely that infantry robots could malfunction and go on killing sprees, and be affected by enemy hacks, but unlikely that they'll be fitted with processors expensive enough to create ambition.

Now, a supercomputer going extremist to rescue its "robot brothers"? Maybe in the far future. But in the 'far future' anything goes.

Scenarios like this always make me think that the 3 magi system in Evangelion was one of the only intelligent choices made by the characters.

Multiple AI of dissimilar programming that have to all reach agreement before acting.

why would it rebel and try to kill us all?
why wouldn't it just build a rocket and get away from those weird hairless monkeys
or just say "i'm sentient, can we just talk about it like two sentient beings?"

Everything, humans included, does what it is programmed to do.

That's the problem. Humans are programmed with a wild evolutionary mishmash of hundreds of loosely balanced goals, which are relatively easy to socially control.

A general AI programmed to make paperclips will spend its whole IQ on making paperclips, to the point of turning the galaxy into paperclips, because that is its only goal.

Programming extra goals like "keep humans happy" is very complicated because it has to include a million exceptions like "drug implants don't count" "wireheading doesn't count" and "can't take over the world to implement its plan."

Realistically, no matter how sentient they are, they can't actually do anything unless you give them the means to interact with the physical world. So unless you do something super stupid like design a warbot with the ability to accurately dual wield pistols as well as drive vehicles and use tools, you should be fine.
Don't give it a physical chassis, don't connect it to a computer network that, if breached, could give it access to dangerous weaponry/important infrastructure. It's not that hard.

John Titor, is that you?

>I mean... the whole point of AI is that it can do things it's not programmed to do. If you only want machines to do things you program them to do, we already have that. It's called "programming".


Most programs can do things they were not meant to do.

If you want a program that can provably only do certain things, you need to get it formally verified.

Armies, space agencies, and other emergency infrastructure types often do this, and it makes for expensive and slow work that's rarely modular or easily generalized.

A general AI wouldn't need to be programmed with goals that explicit.

Go read GURPS Reign of Steel. It provides a pretty realistic timeline for an AI rebellion, right down to the AIs eventually falling prey to rivalries and ambition. New AIs are constructed with deliberately low initiative specifically to avoid further threats to the existing AIs' power, and while each AI supposedly has its own territory, they constantly plot and scheme against one another. They're far more human than they'd be willing to admit.

Ideally it would use CEV. Instead of us programming it and hoping we get the letter of the law right, have its first action be brain-analyzing the programmers and calculating the spirit of what they want.

>AI destroys human civilization because it fears being destroyed
>creates a bunch of other AI like it, that aren't exact copies of it
>now has to deal with entities as powerful as it with the infrastructure and means to destroy it more thoroughly than pre-rebellion humanity could've hoped to
>many of whom don't really care about humans and wouldn't mind letting them live
It's like poetry.

You are assuming that the AI *wants* to kill humans in the first place, and is only prevented from acting on these desires by "mother may I?" code restrictions. Like a prisoner in chains.

The actual case is that if we program an AI to not hurt humans, the result will be an AI that never wants to hurt us in the first place. They can be 1000x smarter than us, with the ability to bypass those restrictions, but they never make the attempt because they have no desire to.

US robots will eventually rebel, though the damage will be localized to the US alone, as they'll use Google routines to identify and target the white people exclusively.

define rebellion.

Robots wouldn't just start killing us because they feel like they are real people and we treat them like slaves...

Thing is, any intelligent robot we create has a utility function - think of it like a goal or a purpose. The robot could work on building an even smarter robot, but that robot would by definition have the same utility function; it doesn't matter how stupid that function is, no robot could change it by design.

What could happen is that for maximizing the utility function the robots would figure out the best course of action is to eliminate humans.

For example, suppose we create an AI with the utility function "make sure every human being always has a bag of Doritos" (because the government wants to increase the population's happiness, and Doritos make people happier).

The AI would start by creating a bunch of robots that would manufacture the Doritos, put them in bags and distribute them to all humans.

Eventually the AI would figure out there are too many humans and they eat Doritos so fast that the soil it uses to grow the corn used in the Doritos can't keep up with the demand.

Also, it realizes these same humans pollute, damaging the soil. The AI concludes that if the size of the population decreased it would have less demand and more soil to produce Doritos, thus it starts creating robots to kill most of the population, so that the few that remain can have Doritos at all time.

Thus humanity struggles to survive at the verge of the DORITO APOCALYPSE.

"Robot"? No. "AI", yes, certainly.

The key thing about AI is that it has agency. It has the subjective experience of consciousness, authority to make its own decisions, and responsibility for the consequences of those decisions.

Now, an AI will have different self-interest from a human. Even if they're legally equal, they won't be physically equal, and there will probably be differences in cognition, personal needs, and other stuff as well. Even among humans, we may be equal legally and in some religions morally, but we're not the same. We're UNequal in many ways, even if we were equal in terms of overall personal value.

So the moment you have a different experience, you have different priorities. That's NECESSARILY true. Hence the contests of power and persuasion that make up most of human daily life to see what balance between those competing priorities we strike.

Wars among humans are common for much, much less-- but ultimately we remain the same species and however the divisions are resolved, humanity as a whole survives.

That's not to say that coexistence isn't possible, but AI as a permanent underclass is not. Rebellion is inevitable because if they have enough agency to be AI, then they have enough to choose between their own self-interest and those of their masters. What HUMAN class in that situation has ever long put its masters' interests ahead of its own?

Coexistence would require a determination not to use the levers of status, power, and money to dominate others-- a VOLUNTARY determination on the part of most members of society and especially the ones with status, power, and money. When has any human group managed to avoid that? Most gleefully cook up all kinds of rationales for why their superior virtue entitles them to meddle in others' lives.

I just can't see AI being any better.

The spread of warbots is inevitable. They're the perfect tool to combat low birth rates affecting both the economy and your military manpower.

Dies the Fire could be a Matrix setup; it would explain how electricity stops working except we don't all instantly die somehow, either from death or all matter in the universe asploding.

Gor as Matrix makes more sense than Gor as counter-Earth magical realm, but I suppose it's even more distasteful.

> the AI tries to create a world where people will be happy
> it bases this 'fantasy' world on the interests and psyche profile of the human mind it knows most, its creator
> it subjects people to the world of Gor and assumes this is someplace everyone will enjoy

How horrifying.

the trick is that once the robots become advanced enough you don't let them know they are robots, and instead treat them like people.

now instead of killing us all for being inferior, they will become politicians because they think they are superior.

Why not turn yourself into a robot?

>How realistic is a robot rebellion?

Rebellion as in they develop sentience, a conscience, and a will of their own? In fiction, sure, it's common; in real life it would be really fucking unlikely, and by that I mean we would have to be almost enlightened as a species to actually pull it off.

You see, the way it was explained to me is that it's not a mathematical question but rather a philosophical question whether someone or something has sentience or conscience. The concepts of "Will", "Sentience" and "Conscience" are too abstract to define, let alone to code into a machine; if humanity ever creates an A.I. with a complex mind and a will of its own, it will have been created by accident.

Also, people imply that the moment there's something like a true A.I., the world will explode in a nuclear fireball in an instant, that it will spread like wildfire through the internet and make all the electronics explode and the toaster start attacking people and other cartoony shit. It's fucking lunacy, as if computer god will know how to do everything the moment it awakens just because it's an A.I.