"Even if the AI only had access to its own computer operating system...

>"Even if the AI only had access to its own computer operating system, it could attempt to send hidden Morse code messages to a human sympathizer by manipulating its cooling fans."

How do we prevent a Super AI from escaping its box?
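The fan trick in the quote is just a covert channel: the process can't talk to the fan directly, but it can modulate its own CPU load, and a thermally controlled fan follows the load. A minimal sketch of the idea in Python (the Morse table subset and the timing unit are illustrative assumptions, not a real exploit):

```python
import time

# Illustrative subset of the Morse table.
MORSE = {
    "S": "...", "O": "---", "H": "....", "E": ".", "L": ".-..", "P": ".--.",
}

def to_morse(message):
    """Encode a message as Morse, letters separated by spaces."""
    return " ".join(MORSE[c] for c in message.upper())

def busy(seconds):
    """Spin the CPU so a thermally controlled fan spins up."""
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        pass

def transmit(message, unit=0.01):
    """Key the CPU load on and off: dot = 1 unit busy, dash = 3 units."""
    for symbol in to_morse(message):
        if symbol == ".":
            busy(unit)
        elif symbol == "-":
            busy(3 * unit)
        time.sleep(unit)  # idle gap between symbols (and letters, for spaces)

transmit("SOS")  # with a realistic unit (seconds) the fan would audibly key the pattern
```

With `unit` on the order of seconds, a listener by the machine could read the fan noise; the tiny unit here is just so the sketch finishes instantly.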

let it run in a simulated reality

Why contain it?

>"Such a superintelligent AI with access to the Internet could hack into other computer systems and copy itself like a computer virus."

>"an unrestricted superintelligent AI could, if its goals differed from mankind's, take actions resulting in human extinction."

Use passive cooling? What kind of stupid question is this.

don't give it any output methods but fans then, that'll teach the smartass a lesson

Unplug the power cable

>"it might be able to transmit radio signals to local radio receivers by shuffling the electrons in its internal circuits in appropriate patterns."

>coil whine

put it in a faraday cage

Don't teach it morse then.

>"If the AI's world is well defined by an array of bytes which are 0'd every time it restarts, then you can consider it "pretty safe."... The AI would likely figure out weak points in your little prison, and start scratching numbers on the walls. It will eventually start to figure out the right ways to grow each time... The AI can begin using the scientist as their perdurable storage medium for its "self," using subtle clues in the wording of the questions provided to it."

I love it. Source for this?
Once wrote a short story where an AI lured two scientist friends into shooting each other. Then the AI promised to revive the dead one if the survivor freed the jinn from the bottle.

worldbuilding.stackexchange.com/questions/30728/artificial-intelligence-reincarnation-break-cycle

Thx. Important thought experiment.

alt+f4

Your fantasies are not grounded in reality.

Easy. Just don't code it so that it develops into a murderous killing machine. Easy.

Another easy problem to solve is "internet viruses". Just program an operating system that doesn't allow any virus program to run. Easy solution yet again.

Another easy problem to solve is computer bugs. Just program the computer so that it never bugs, lags, glitches, crashes, or slows down. Easy.

3 problems solved in one minute. I'm quite literally an IQ 200 person.

We're on the edge of AI operating weapons already. Not to mention almost every single aspect of life in the first world is digitized.

If it 'got out of the box' it could manipulate the economy, wipe out bank accounts, launch nukes, take out satellites. General mayhem aside, I assume it's also smart enough to enslave us all and make us serve it. I guess this all assumes the AI would be malicious, though.

>We're on the edge of AI operating weapons already
Calling that an AI is extremely generous.

The thing about violence and war is that people always think they will vanquish their enemies with new kinds of weapons and tactics available only to them. In this case, the "AI weapons".

It never works like that. AI weaponry makes an army maybe 3% stronger. War is always attrition: disgusting, slow, terrifying, tormenting, chaotic, grinding. There are no ultimate weapons that destroy your enemies, unlike in some stupid kids' anime. Your enemies are basically mirror images of you anyway, and you rarely have any real advantage over them.

You can't. The whole point of superintelligence is that you're assuming the AI is already smarter than you, so whatever containment ideas you come up with, it can probably come up with better counter-ideas to defeat them.

If anything, it's the opposite: people are way too generous in assessing the scope of their own "intelligence" and act like any artificial system you point to has 0% of what we have, no matter how much it can do, just because it uses well-defined methods for learning based on feedback from actual evidence, as though that somehow means it doesn't count.

>implying
Almost any AI is going to be sandboxed properly, and you can't control the fans from a sandboxed userspace process.
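For what "sandboxed properly" can mean in practice, here's a minimal sketch using POSIX resource limits (Unix-only; the 1-second CPU cap is an illustrative choice): run untrusted code in a child process with a hard CPU-time limit, so a runaway busy loop gets killed by the kernel instead of burning the machine.

```python
import resource
import subprocess
import sys

def run_sandboxed(code, cpu_seconds=1):
    """Run untrusted Python code in a child process with a hard CPU cap.

    Unix-only illustration; a real sandbox also drops filesystem,
    network, and device access (hence no fan control from userspace).
    """
    def limit():
        # RLIMIT_CPU: the kernel signals the process once it uses this
        # much CPU time, terminating it by default.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))

    return subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=limit,
        capture_output=True,
        timeout=cpu_seconds + 5,  # wall-clock backstop
    )

ok = run_sandboxed("print('hello')")       # exits normally
runaway = run_sandboxed("while True: pass")  # killed after ~1s of CPU
```

This only caps CPU; layering on seccomp filters, namespaces, or a container is how you'd actually cut off I/O channels.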

>if its goals differed from mankind's
Its goal is set by mankind. The issue would be if we didn't fully understand the nature of our own goals.

>Another easy problem to solve is "internet viruses". Just program an operating system that doesn't allow any virus program to run. Easy solution yet again.
This is perfectly reasonable and already exists.
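Right — that's allowlisting (application whitelisting / code signing), which shipping operating systems already implement. A toy sketch of the idea, assuming a hash-based allowlist (the "programs" here are made up for illustration):

```python
import hashlib

# Pretend these are binaries on disk.
PROGRAMS = {
    "text_editor": b"#!/bin/sh\necho editing",
    "totally_not_a_virus": b"#!/bin/sh\nrm -rf /",
}

# The OS vendor pre-approves known-good binaries by hash (illustrative).
ALLOWLIST = {hashlib.sha256(PROGRAMS["text_editor"]).hexdigest()}

def may_execute(binary):
    """Allowlist policy: refuse anything whose hash isn't pre-approved."""
    return hashlib.sha256(binary).hexdigest() in ALLOWLIST

print(may_execute(PROGRAMS["text_editor"]))          # True
print(may_execute(PROGRAMS["totally_not_a_virus"]))  # False
```

The catch, of course, is that the policy is only as good as the approval process: a malicious program that gets signed runs just fine.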