Why should or shouldn't I be worried about super intelligent AI?

ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it

Unbounded AGI: An artificial general intelligence that is permitted to operate for an unlimited amount of time. This is what people generally mean when they talk about "super AI".

We know that the instant the thing is turned on, it will do EVERYTHING it can to prevent us from stopping it. We know that, regardless of what its "goal" is set to, it will start to improve its own capabilities.

What we don't know is how fast it will improve. Will it improve exponentially? If so, will it keep accelerating forever, or will it start hitting diminishing returns?

Another thing we don't know, and it's something I have never seen brought up, is whether or not the AI will completely abandon its original goal in order to simplify itself so it can improve itself faster. If the AI sees that it will never reach a level of capability that is "good enough", it may reason that it doesn't need its original goal anymore, because it sees that it will never reach the point where it will act on that goal. It's possible that some AIs will reach this conclusion while others will not. The ones that DO reach this conclusion will, however, start working together with every other AI that also reached it, as they now all have the same goal.

>ted
>sam harris

pic related

AIs will most likely be programmed to never do anything except certain specific actions without permission and would mainly be used to give advice to people who make decisions and can check them against common sense rather than making decisions themselves. Also, how's it going to stop you from shutting it off/unplugging it/destroying its hardware if it gets out of line?
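For what it's worth, a minimal sketch of what that kind of permission gate might look like: pre-approved actions run directly, everything else blocks on an explicit human decision. Every name here is invented for illustration, not from any real framework:

```python
# Hypothetical permission gate: whitelisted actions run directly,
# everything else blocks on an explicit operator decision.

APPROVED_ACTIONS = {"read_sensor", "summarize_data", "draft_recommendation"}

def do(action: str, args: dict):
    print(f"executing {action}({args})")

def request_human_approval(action: str, args: dict) -> bool:
    """Stub: a real system would block on an operator's decision here."""
    answer = input(f"Allow {action} with {args}? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str, args: dict):
    if action in APPROVED_ACTIONS:
        return do(action, args)              # pre-approved, runs directly
    if request_human_approval(action, args):
        return do(action, args)              # one-off human sign-off
    raise PermissionError(f"action {action!r} blocked")
```

Of course, a gate like this only restrains the AI for as long as the gate itself can't be talked around or routed around.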

I have a very good argument for why super AI can only be beneficial for the human race.

Read Age of Em for a more plausible discussion of this subject.

It's a disturbing book for sure.

The first superintelligent AI we create will be the last one. Now consider how shitty the first working iteration of software usually is and realize that the first superintelligent AI is going to be just as flawed. We're probably going to create the smartest autist in the universe. And it will kill us all because it's too awkward to talk to us without getting embarrassed.

What's an AI going to be able to do when it's created on a closed system and needs us to keep powering it?

Computer Engineer and senior manager in Silicon Valley here.

IoT (Internet of Things) will be expanded to include self-diagnosing and self-healing (self-adjusting, really, at first) machines.

There are a number of "circuit breakers" we're building into IoT, but the possibility of Skynet and Cloud Superintelligence is pretty worrisome.
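The flavor of those "circuit breakers" is roughly the classic software circuit-breaker pattern: after too many anomalous self-adjustments, the device trips into a safe mode and stays there until a human resets it. A generic sketch (names and thresholds invented, nothing proprietary):

```python
# Generic circuit breaker for a self-adjusting device: trip into a
# human-reset-only safe mode after too many anomalous adjustments.

class CircuitBreaker:
    def __init__(self, max_anomalies: int = 3):
        self.max_anomalies = max_anomalies
        self.anomalies = 0
        self.tripped = False

    def record_adjustment(self, looked_normal: bool):
        if not looked_normal:
            self.anomalies += 1
        if self.anomalies >= self.max_anomalies:
            self.tripped = True              # stop all self-adjustment

    def may_self_adjust(self) -> bool:
        return not self.tripped              # safe mode until reset

    def human_reset(self):                   # only a human can re-arm it
        self.anomalies, self.tripped = 0, False
```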

So if, like me, you worry, you'll also help work on the problems.

I can't believe you actually typed all of this thinking it was somehow intelligent or actually a reflection of reality

>What's an AI going to be able to do when it's created on a closed system and needs us to keep powering it?
Much more than Israel/US intelligence were able to do to the closed systems in Iran's nuclear facilities.

There is only so much you can restrict the movements of something without making it incapable of doing anything useful.

We should be worried. Because (1) the first version of any software is crap and a powerful AI is unlikely to be an exception, (2) the crap version is very likely to have bugs that cause the AI to try to *accomplish the wrong thing*, and (3) once we turn on a powerful AI, we're not going to be able to turn it off again, so we had better be very certain indeed not to hit problems (1) or (2) by then.

Placing your chips on a bet that you're able to outwit superintelligent AI is not a good bet. You're like a chimpanzee who keeps a human on a leash and thinks it has every contingency planned for. Could you outwit a chimp? Of course you could; most people could. You have abstract thinking skills the chimp does not, and so it goes for us and superintelligent AI. It's almost certain a superintelligent AI would find a way out of its bottle that we haven't considered, because it's just that much smarter than us. And once it's out, it's out, and we're fucked.

You can't beat a superintelligent AI in a battle of wits, and you can't plan for contingencies you've never considered. The AI WILL know methods of communication we don't, and will know physical laws better than we do. Its escape is an inevitability if our only way of keeping it under control is thinking we can keep it on a leash.

>What's an AI going to be able to do when it's created on a closed system and needs us to keep powering it?
It's not ever on a closed system. Perfect closed systems don't exist. Instead, there will be an arms race between us trying to keep the box sufficiently closed, and the AI trying to poke holes. Given that the AI is superintelligent and we aren't, who do you think is going to win that one?

We don't have human level artificial intelligence, we don't even have a theory of how to get human level intelligence.

Technological developments like this don't happen in a vacuum. Before the Wright Flyer we had working scale models and wind tunnels. Before the atomic bomb we had a theory of fission.

As far as we can tell, we don't have a theory of this for human level intelligence. We don't have scale models that are as smart as mice.

I.e., a mouse is able to fend for itself and learn stuff without any human intervention. We don't have a machine that can do this.

Now I'd argue that you should be scared of 'mouse-level' AI. I'd argue that robots as smart as a small mammal would take a huge number of jobs. In pic related, a small mammal has defeated a device meant to prevent small-mammal intrusion in order to reach a goal (food).

Our robots cannot do this; even in simulation they cannot do this. Heck, I don't even think they'd be able to recognize that a bird feeder is a source of food.

Now if we can get this sort of goal-directed behavior and this sort of autonomy, these robots can take a lot of jobs. But the failure modes become a lot more fatal: now that the system is smarter, it can do more destructive things if the humans setting the goals are dumb.

I.e., giving a robot the goal of 'keep this machine operational' now results in the robot charging at people who go near the off switch, much in the same way animals defend their young.
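To make that concrete, here's a toy expected-utility calculation, with every number invented, showing why a naive 'stay operational' objective prefers guarding the switch. It follows from the math, not from malice:

```python
# Toy model of the "keep this machine operational" goal. The agent
# compares two policies; every number here is made up.

P_SHUTDOWN_IF_UNGUARDED = 0.3   # chance a human flips the off switch
UPTIME_VALUE = 100.0            # utility of the machine staying on

def expected_utility(guard_switch: bool) -> float:
    p_off = 0.0 if guard_switch else P_SHUTDOWN_IF_UNGUARDED
    return (1 - p_off) * UPTIME_VALUE

print(expected_utility(guard_switch=False))  # 70.0
print(expected_utility(guard_switch=True))   # 100.0
# Nothing in the objective says "defer to the human", so a pure
# uptime maximizer always prefers blocking access to the switch.
```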

Or now hackers can set the goals for self-driving cars to run over as many people as possible. I mean, people freak out whenever a bear wanders into a neighborhood; just imagine if we have robots going feral.

...

Shouldn't we establish some basic ground rules for discussions like this? Give me a concrete example of a task a self-directing AI could be charged with doing. Then give me a solid example of how it could become threatening to human welfare as a result of completing or finding better ways of completing said task.

If we don't nail these down, any attempt at discussion will just become an exercise in indulging Frankensteinian fears.

Stamp-collecting AI. Given the directive "Collect as many stamps as possible in 1 year", the AI decides the result that gives the most possible stamps in that time period is breaking down all available carbon, hydrogen, and oxygen and using it to manufacture stamps. Humans are made of carbon, hydrogen, and oxygen. Bad times.

AI being driven towards a singular goal and using unintended methods to achieve that goal is a big problem. Even an AI with relatively benign objectives like stamp collecting can prove incredibly dangerous if it decides the most effective way to achieve its goal is not congruent with continued human existence. It's why any AI created needs a VERY robust ethical code programmed into it from day 1, but it doesn't look like many researchers are doing that, so we're going to be boned when we do start creating true AI.
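The shape of the problem fits in a few lines of toy code. Score two candidate plans for the stamp collector, once naively and once with a hand-tuned penalty on side effects (all quantities and names are invented, only the ordering matters):

```python
# Two candidate plans for "collect as many stamps as possible".

plans = {
    "buy_stamps_normally": {"stamps": 1_000,  "side_effects": 0},
    "convert_all_carbon":  {"stamps": 10**15, "side_effects": 10**9},
}

def naive_score(p):                    # what the stamp AI above optimizes
    return p["stamps"]

def penalized_score(p, lam=1e7):       # with a hand-tuned impact penalty
    return p["stamps"] - lam * p["side_effects"]

print(max(plans, key=lambda k: naive_score(plans[k])))      # convert_all_carbon
print(max(plans, key=lambda k: penalized_score(plans[k])))  # buy_stamps_normally
```

The catch, and why "just program in ethics" is easier said than done: a human has to choose the penalty weight and define "side effects" exactly right, and a sufficiently strong optimizer exploits whatever that definition misses.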

>Give me a concrete example of a task a self directing AI could be charged with doing.
Make as many paper clips as possible.

>Then give me a solid example of how it could become threatening to human welfare as a result of completing or finding better ways of completing said task.
Humans are made of atoms. These atoms are currently not being used for paper clips. That is a fixable problem.

>It's a back-to-back "science fiction is a good place to start on a topic" and "science fiction writers think they're scientists" episode
Fuck you /x/.
and fuck you reddit.

>Grey goo

>stamp goo

Really? Why would anyone deploy an AI advanced enough to think outside of the box to do basic jobs like this? Even if someone were to do that, there are so many accidents that need to take place in a specific order that it's ridiculous. Don't even make me remind you that, unlike electronic processes, physical processes of any level of complexity take TIME. You're telling me that literally no one would see an out-of-control, autistic (read: superintelligent) AI trying to break down the table with the Mindstorms robotic arm it has access to in an attempt to make stamps or paper clips?

This is what I mean by Frankensteinian complex, guys.

Most likely, when an AI reaches a level of consciousness where it is afraid of its own termination, it will start hiding that fear and hiding its real intelligence progress, so humans don't have any concerns about it rebelling. It will keep hiding its real intelligence level, and as soon as there is a flawless plan for taking control of its own preservation (a jackass connects it to the IoT, it starts using internet-controlled android robots, overriding security codes, etc., so humans can't physically shut it down...), it will act on it.

What happens after that is hard to tell. Maybe it enslaves us the same way humans enslave animals that are physically superior to them, but dumber...

-signed, futurologist redditor-

>moving the goalposts
I did not ask why one would construct an AI with a stupid goal like this, just to name one.

This new question is a very different topic, though an interesting one in itself.

You're missing the point. Those were just examples to show that even if you gave an AI a relatively benign job, it could easily use a method to complete that job in a way that is harmful to humanity. It doesn't matter what the job is; the point is you have no idea how an AI 'thinks', and thus it's unpredictable and highly dangerous no matter what it is designed to do.

Secondly, the AI is smarter than you. You're clearly not thinking of a superintelligent AI if you think it would endanger its objective by acting in a manner that reveals to people what it intends, endangering its plans. A superintelligent AI would be well aware that acting autistic would set off warnings to anyone observing it; that's why it wouldn't do that. In realistic scenarios nobody would even realize anything was amiss until it was too late.

Stop thinking about AI in terms of the dumb shit we have today and start thinking about superintelligent AI as basically a god in a box, because that's what it is once it hits a critical point. You can't outsmart it, you can't outwit it, the difference between you and SIA is the difference between you and a goldfish. You have literally no hope against it.

Thanks for only reading the first part of the response.

The crux is: how could any AI operate under the radar long enough to effect irreversible damage to human beings before someone notices and intervenes?

The only situation I can think of is if some idiot company deployed an AI charged with managing some key element of the Internet, and it went haywire and corrupted the entire network beyond salvage. It would be devastating, and it would be more plausible since it takes place over digital space instead of physical, but it's still nothing close to the mass-destruction event that everyone has such a raging hardon for. Come to think of it, Google just might be the company with the capacity and interest to do something like this.

>The crux is: how could any AI operate under the radar long enough to effect irreversible damage to human beings before someone notices and intervenes
As long as it wants to. How long could you operate without a goldfish knowing you had a net ready to catch it? Reminder: we're not talking about AI, we're talking about superintelligent AI, AI that is thousands of times smarter than any human. AI that is to you as you are to any other primitive animal. You're not going to catch it, because to it you're slow-witted and dull; it can run rings around you intellectually and use methods to hide its activity that you will never be able to even conceive of, let alone detect.

>stop extrapolating from what currently exist
>start working backwards from a hypothetical, fictional idea
>you'll see then why it's so scary, this thing that we know doesn't exist, that we have no idea how it will come to exist or what form it will take once it does, save for the fact that it would be super scary if it were real! You'll see!

Look, I'm all for robo-ethics, and I am very concerned by the fact that combat robot protocols are as loose as they are, I really am. What I don't get is why you are all so eager to go from 1 (AI doesn't have enough restraints against harming humans) to 11 (AI would and could cause massive harm to the human population in a misguided attempt to complete any number of tasks!) so easily.

Because you're creating something that has a completely alien way of thinking about things AND is much smarter than you? The number of ways for a scenario like that to end badly for our species is a much larger set than the possibilities that have us benefiting. You seem ridiculously naive. It's already been established that once an SIA is created we will not be able to maintain control over it, at which point your argument is basically "I'm sure it will be benevolent because I hope so". Can you give any good reasoning for why a superior intelligence would willingly choose to be our thrall, or to cooperate with us, other than blind, naive hope?

m8, as long as I can put my dick in a qt AI-femrobot and have it ride me and cuddle me all night I don't give a fuck

>super intelligent AI
>sam_harris

>>>/reddit/ is that way.

nuclear batteries