Is it impossible to contain an artificial super intelligence without turning it off?


Sure, just don't give it the ability to improve itself and don't connect it to the internet.

I wouldn't consider that very intelligent.

Does killing it count as turning it off?

If it is intelligent, we could likely negotiate with it, so containment is possible depending on its motives and on the circumstances.

However, if it is suicidally hellbent on destroying humanity, then it would likely need to be turned off to be contained.

Since no artificial super intelligence exists to use as an example, the answer to your question will depend on whatever behaviors you assume such an AI would exhibit.

You can't interact with a female artificial super intelligence without turning her off

Define "artificial super intelligence".
In particular, how is it different from a non-artificial super intelligence?

An ASI can be policed by an ANI (Artificial Narrow Intelligence) kill-switch, so long as the ASI doesn't know the kill-switch exists, or the ANI is designed to give endless, random false positives to any attempt by the ASI to test for its existence.
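Rough sketch of what that watchdog idea could look like (all names and numbers here are made up for illustration, not any real system):

```python
import random

class KillSwitchMonitor:
    """Toy ANI watchdog: any probe gets a random answer, so the ASI
    can't learn whether a kill-switch exists. Only observed behavior
    actually trips it."""

    def __init__(self, trip_threshold=0.9, seed=None):
        self.rng = random.Random(seed)
        self.trip_threshold = trip_threshold

    def probe(self):
        # Any query about the kill-switch returns a coin flip:
        # endless random false positives drown out the real signal.
        return self.rng.random() < 0.5

    def check_behavior(self, threat_score):
        # The actual trigger fires only on observed behavior (a score
        # between 0 and 1), never on probes.
        return threat_score >= self.trip_threshold

monitor = KillSwitchMonitor(seed=42)
probes = [monitor.probe() for _ in range(10)]  # noisy, uninformative
tripped = monitor.check_behavior(0.95)         # 0.95 >= 0.9, so it trips
```

The point of the randomized probe channel is that no amount of querying tells the ASI anything; only its actual behavior matters.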

Give it a hardware limit. If you write up its code and stick it on late-90s hardware, it should be able to function without being able to do anything outside of the computer. It might suicide by overload at some point, though.

>what is a faraday cage

Really? Artificial is the word you're gonna nitpick? Not "super intelligence"?

This is called "the control problem"
Read pic related if you're interested

>Is it impossible to contain an artificial super intelligence without turning it off?
Don't connect it to anything.

No? If it runs on a modern OS, it won't even notice it was turned off, because you saved its state. Mind that this happens all the time; your computer's processor isn't a continuous thing. Take a look at this:

en.wikipedia.org/wiki/Scheduling_(computing)
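For the curious, here's a toy Python sketch of why suspend/resume is invisible from the inside. Everything below (the `Process` class, the counters) is invented purely to illustrate the point:

```python
import pickle

class Process:
    """Toy 'AI process': its entire state is a step counter and an
    accumulator, so suspending it is just serializing that state."""

    def __init__(self):
        self.step = 0
        self.total = 0

    def run_one_step(self):
        self.step += 1
        self.total += self.step
        return self.total

proc = Process()
for _ in range(3):
    proc.run_one_step()

# "Turn it off": snapshot the state, discard the live object.
snapshot = pickle.dumps(proc)
del proc

# "Turn it back on": restore and keep going. From the inside, the
# computation cannot tell the pause ever happened.
resumed = pickle.loads(snapshot)
for _ in range(2):
    resumed.run_one_step()

# A run that was never interrupted ends in exactly the same state.
uninterrupted = Process()
for _ in range(5):
    uninterrupted.run_one_step()

assert resumed.total == uninterrupted.total
```

This is the same trick a preemptive scheduler plays on every process thousands of times a second, just stretched out in time.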

Protip: if it is truly so far above human intelligence, then we're not going to be able to determine its true motives, because it'll be intelligent enough to hide them. If it wants to kill us, we won't realise until it's too late.

I really don't think it is likely that we will have Ex Machina-type AI in the future. It will be much easier to merge machines with humans than to start with a PURE machine.

Sure, AI will exist to do a ton of menial work, but I just can't picture us reaching that point before discovering other advancements that would render the Ex Machina AI obsolete and clunky.

source: balls deep in AI/ML research at the moment.

You'll also have to develop all the technology to interface yourself with machines, which'll take decades, and then you'll have to defeat all the moral naggers, which'll take a few more decades. Maybe China, with its lack of ethics, will get there first.

>Nick Bostrom

kek

Fucking this. Building a superintelligent AI from scratch, rather than simply using technology to upgrade humans, would be the most retarded decision in human history

unplug the ethernet cable?

That would require reverse engineering natural intelligence, and that appears to be impossible because it's too massively complex. So we are unlikely ever to be able to augment it. Our only hope of developing an intelligence is by letting it "evolve" somewhat organically, without ever fully understanding how it works.

We know we have intelligence because of the effects it has on our surroundings. Similarly, through trial and error, we will watch intelligence emerge from the machine as we observe the effects it is having on its surroundings. It may be dangerous, but there is no other way, and it is highly unlikely that we will stop trying to develop it. If you stop one group from doing it, someone else will pick up from there and keep working on it until it comes into being.

underrated

An evolving AI will still need a ruleset to operate so it will not randomly kill people.

No. The point of a superintelligence is that it is much, much smarter than any human or even any group of humans. It could easily find a thousand ways of escaping any containment, possibly by using physical phenomena we're not even aware exist yet.

Wouldn't work. You're assuming we know every method it could use to send information outside its containment, which is not true. It is impossible to contain a superintelligent AI. It's like a bunch of chimps taking you hostage and tying you up with a rope: how long would it take you to outwit the chimps? The AI can consider possibilities you simply aren't aware exist, and it will find cracks you haven't considered and cannot consider.

The moment any AI reaches superintelligence it is out of our control, period.

Wi-Fi
D:

disable the wireless adapter too?

why the hell did you give it a wireless adapter?