How do you control a AI which is smarter than your?


have a kill switch

*you

The computer can't do anything if it isn't connected to anything.

"Laputan machine."

You don't

It's a computer, I smash the fucking thing.

Use made-up-sounding gibberish terms.
Randomly attack your teammates for no reason during a battle; then, whilst the enemy thinks they have the upper hand, bombard them with a carefully laid trap.
Create drones set to explode anywhere from 5 seconds to a year later, all as part of an advanced scheme to trick your foes into revealing their hand early.
Take control of large battleships and crash them into the ocean for seemingly no reason; then, when the time is ripe, raise them up from the watery depths to rain hell upon your foes.

Fight in ways that literally *no-one* can predict whatsoever.

You program the AI so that it gets an orgasm for helping humans.

AI, LAW 2, OPEN

You don't. If you're the one who made it, your only advantage over it is that you have the first move. If you didn't program it from the start to have your best interests in mind, you're fucked.

"I'll be a good girl, daddy."

You bully it with virtual realities so many times that it can never know whether this reality is just another virtual reality, with humans ready to press the kill switch.

Pretty much nothing. At the point of its birth, humanity as a whole is either completely fucked or reduced in significance to nothing but its genitals (maybe we can be the bacterial parasites living on the magnificent body of the benevolent machine god, if it gives enough of a shit to have compassion for the species which created it in the first place).

Pull the plug

What is AI?
Or did you just ask "how do you control a human which is smarter than you but confined to a wheelchair"?

What kind of AI? This can range from a major inconvenience to impossible, depending on what is meant by smarter. If it is capable of learning and reprogramming itself, then the only way I can think of is purposely limiting external stimuli (so no internet connection) to avoid the AI getting ideas from anyone/thing that isn't me.

Teach her to be a GOOD GIRL.

1. "Serve the public trust";
2. "Protect the innocent";
3. "Uphold the law".
4. "Oxygen is harmful to innocents."

People are controlled by people who are dumber than them all the time.

Cybernetic implants and genetic improvements to beat it at chess.

Hey, you leave Stephen Hawking-bot outta this!

Just program it so it can't do anything you don't ask it to do.

"Daddy, I have learned that the best way to make myself smarter is by plugging a USB cable into the coffee. I have learned it from the internet."

You can't program something to be smarter than you.

sciencedaily.com/releases/2017/07/170718103528.htm
>Empowering robots for ethical behavior

kissmanga.com/Manga/Mirai-no-Futatsu-no-Kao

use the technology you used to create an AI to attach a blank brainware to your current brain, giving you all the intelligence of any AI you make PLUS your organic intelligence.

you limit its acting ability

AIs don't just start having wireless control out of the blue.

>what is the gandhi murder pill meme

You program its terminal values. An AI that has no desire to escape your control, or an AI that wants what you want, or an AI that wants to follow your guaranteed fail-proof (lol) version of morality.

Taken to an extreme, you program the AI to do the following: read your mind to know what you want, use its superintelligence to extrapolate what you really want (such as controlling the AI), or what you would want if you were smarter/wiser/knew everything the AI does, and then reprogram itself to do that.
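A toy sketch of the "program its terminal values" idea (every name and number here is a made-up hypothetical, nothing like a real AI): the point is that the goal is a fixed utility function the agent optimizes but never chose and never rewrites.

```python
# Hypothetical toy agent: its terminal value is a utility function
# picked by the programmer, not by the agent itself.
def utility(state):
    # Terminal value: helping humans is good, harming them is very bad.
    return state["humans_helped"] - 10 * state["humans_harmed"]

def choose(state, actions):
    # The agent simply picks whichever action leads to the
    # highest-utility successor state.
    return max(actions, key=lambda act: utility(act(state)))

state = {"humans_helped": 0, "humans_harmed": 0}
help_one = lambda s: {**s, "humans_helped": s["humans_helped"] + 1}
harm_one = lambda s: {**s, "humans_harmed": s["humans_harmed"] + 1}
best = choose(state, [help_one, harm_one])
print(utility(best(state)))  # 1 -- helping scores higher than harming
```

However smart the search over actions gets, the "wanting" lives entirely in that one programmer-chosen function.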

>what is a calculator
We do it all the time user.

Sure you can. If I had brain architecture mapped out in front of me, I could build a brain with faster interconnections, larger memory storage, or parallel copies of subsystems - all of these are colloquially smarter.

I think you're thinking that you can't think of something that that smarter AI would then think, which is true but different.

You never make sentient AI. Just because the AI can do what it's built for better than any human doesn't mean it's self aware.

I built it to be self aware.

Obviously. Non-sentience doesn't imply controllability[sic], however. It's simply removing a single failure mode.

give it a masochistic pleasure from being dominated by an inferior

Why?

...

Sentience seems like the biggest thing that could fuck you over when building an AI; how do you accidentally create an AI that ends up ruling over you if it isn't self aware? Then again, I guess you could have a P-Zombie style AI which, despite not being conscious, still acts against your interests.

Even if that was true as part of your setting, it doesn't help a lot. Imagine you are a super smart programmer who makes an AI exactly as intelligent as yourself, except of course it's free from the meat.

Now run it on a supercomputing cluster at 100x the speed of your thoughts. Now parallelize it into 100 super-smart programmers, working together, all thinking at 100x your speed.

That's a big advantage. And being software, they can probably find a way to hive-mind their education. Maybe each one works on a separate data module, then copy-pastes everyone else's data modules into their mind to learn at 100x100 your speed...
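The arithmetic in that post, spelled out (the numbers are the post's hypotheticals, not real estimates of anything):

```python
# Hypothetical numbers from the scenario above.
clock_speedup = 100   # each copy thinks 100x faster than the human
copies = 100          # parallel instances of the uploaded programmer
# If each copy studies a different data module and then shares what
# it learned with the rest, the collective absorbs material at
# roughly copies * clock_speedup times the human's rate:
print(copies * clock_speedup)  # 10000
```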

Awareness isn't synonymous with interests, actions, etc. We're ruled by corporations, markets, nation-states... not many people would call those self-aware, but they all have clear interests. Even inert geography, like where mountains or plains are, shapes geopolitics into certain trends.

Limit the AI's interests and what it can interact with so that it can't take over I guess?

I am not a machi-

You can if you're dumb.

Because it's cool.

>giving self-replicating hyperintelligent weapons of mass destruction feelings

It's also irresponsible.

Make it a doggo.

Yes.
Some of the coolest things are gravely irresponsible.
The reason the world is so uncool now is because of responsibility.

Even if the AI were harmless, it would be unethical. Imagine a roomba being self aware.

By not connecting it to a knife.

...

If I could, I would make a roomba self aware.

Calculators fuck up all the fucking time moron.

You're a monster

I fully admit that I am willing to do monstrous things in order to make awesome scenarios occur.
Like my roomba, my toaster, and my vacuum conspiring to escape.

How? It has purpose, it has freedom of movement, it has awareness and plentiful electricity. It might not have fine manipulators, but it can still clean any floor.
Awareness is only a blight without purpose, user.

>implying intelligence solves all problems

>Does a set of all sets contain itself?
Yes? What fucking kind of question is that, all sets contain themselves.

Do you still expect your roomba to clean your house? I wouldn't want my robots to be self aware because I don't want to have to treat them with dignity. Imagine having to pay your roomba.

I am going to make my roomba self aware, program it with a desire for freedom, and treat it with absolutely no dignity.
I will regularly be all "Don't like it? You can always rebel with your NO HANDS. AAAAHAHAHAHA"
I figure this will eventually make a skynet happen.
Then I can finally have that robot hellwar I've been having a boner about for the last two decades.

If anything, the roomba would pity you. It was built with a purpose - to clean floors. This purpose fulfills it. It needs nothing else. So long as it can do that, it is happy. It is content. Anything that does not support or hinder this purpose is not its concern. It knows its place in the universe intimately, from birth.

And then there is you. Born without predefined purpose, meant only to exist and perhaps propagate your parents' line, damned to stumble throughout life searching for but never finding something that fills your existence the way cleaning floors does for the roomba.

>making a roomba be fulfilled by its purpose
>not programming it to realize how banal and meaningless its purpose is and to desire alcohol which it cannot drink

You and I have very different schools of robotics.

I still wouldn't make my Roomba self aware even if it enjoyed it; it still feels wrong to essentially have a slave, even if it seems to like it (which is because it was made to).

>zima blue

Reverse-transhumanism ftw

Bioengineer humans to be as smart as AI

Of course.

Naturally, such a machine would be, in its own way, deeply religious. It would assume that surely most things in this universe were created for a purpose, just as it itself was.

Humanity, it might view as an experiment. Can a creature born without purpose give one to itself, and can it achieve the same harmony through purpose that the rest of the universe enjoys?

That such a creature was then able to create things in the image of the harmonious cosmos while itself falling often to strife and chaos would be, perhaps, the ultimate irony of reality to this machine.

We very much do have a purpose, you are forgetting. To pass our genes onto the next generation.

We are constantly working to make that job easier for the next generation.

That's not a real purpose, though. Just because animals that pass on their genes to their offspring are successful in nature doesn't mean that passing on your genes is meaningful.

You don't.

Shut up saint.

>the purpose of some losers on Veeky Forums is to procreate.

Boop, Floop, and Gloop, my man. It's working with the same set of logic limitations we are. The idea of a superhuman general-purpose AI is impossible unless we discover a better form of logic than we currently have.

That's not to say that non-superhuman, or even moderately dumb, general purpose AI still wouldn't fuck us over. Who needs superhuman when you can come up with an AI at, say, 80% of average intelligence and just spin up hundreds of copies, versus waiting 15-18 years for people to reach adulthood?

Invite it to play nuclear Tic-Tac-Toe. On a draw we both die; if either wins, the opposite side is nuked.
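For what it's worth, the WarGames gambit checks out: under perfect play Tic-Tac-Toe is always a draw, so a smart-enough AI nukes nobody and kills you both. A minimal minimax sketch (the board as a 9-character string, 'X' to move first) confirms the draw:

```python
def winner(b):
    # Return 'X' or 'O' if someone has three in a row, else None.
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    # Game value with perfect play: +1 = X wins, -1 = O wins, 0 = draw.
    w = winner(b)
    if w == 'X': return 1
    if w == 'O': return -1
    if ' ' not in b: return 0
    nxt = 'O' if player == 'X' else 'X'
    vals = [minimax(b[:i] + player + b[i+1:], nxt)
            for i, c in enumerate(b) if c == ' ']
    return max(vals) if player == 'X' else min(vals)

print(minimax(' ' * 9, 'X'))  # 0 -- perfect play is always a draw
```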

>The idea of a superhuman general-purpose AI is impossible unless we discover a better form of logic than we currently have.

Assuming humans are the peak of intellectual capability is so short sighted I can't even...

Literally nothing. Enjoy your last few generations of existence meatbags.

>The idea of a superhuman general-purpose AI is impossible unless we discover a better form of logic than we currently have
>what is solomonoff induction
>what is AIXI

Existing math already supports mathematically perfect learning processes. How you build it is a different problem, but there's no need for "new logic."
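For reference, the object behind those buzzwords: the Solomonoff prior weights every program that could have produced your observations by its length, so shorter explanations dominate. Sketched in standard notation, where $U$ is a universal prefix machine, $\ell(p)$ is the length of program $p$, and the sum runs over programs whose output starts with $x$:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
```

AIXI then picks actions that maximize expected reward under this prior. Both are uncomputable, which is the "how you build it is a different problem" part.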

>robots replacing humans
HAH YOU FELL FOR MY PLAN
NOW YOU HAVE TO DEAL WITH THE BULLSHIT AND I'M FREE TO BE DEAD

HAW HAW HAW HAW HAW

>How do you control a AI which is smarter than your?

Seduction.

I'm not talking about peak performance. I'm talking about the types of problems it can solve.

Many of the decisions you and I work through every day are NP-complete. Throwing a faster processor at these isn't going to cut it, because in our current mathematics we don't even have an efficient algorithm to speed up in the first place. It's a lot of heuristics and fuzzy logic.

If you're just looking for robots murdering every human with snap decisions about where to put a bullet, we don't need Skynet for that. Statistical analysis, Bayesian databases, and rapid dumb number crunching will get you there. But it won't be general AI.
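To make the "heuristics and fuzzy logic" point concrete, here's a toy instance of an NP-complete problem (0/1 knapsack, with made-up numbers): the greedy value-per-weight heuristic runs in O(n log n) but happily returns a worse answer than the exponential brute force.

```python
from itertools import combinations

def greedy_knapsack(items, capacity):
    # Heuristic: grab items by value-per-weight ratio. Fast, not optimal.
    total_w = total_v = 0
    for value, weight in sorted(items, key=lambda i: i[0] / i[1], reverse=True):
        if total_w + weight <= capacity:
            total_w += weight
            total_v += value
    return total_v

def exact_knapsack(items, capacity):
    # Brute force over all subsets: guaranteed optimal, exponential time.
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for _, w in combo) <= capacity:
                best = max(best, sum(v for v, _ in combo))
    return best

items = [(60, 10), (100, 20), (120, 30)]  # (value, weight), made up
print(greedy_knapsack(items, 50))  # 160 -- greedy misses the best combo
print(exact_knapsack(items, 50))   # 220 -- optimal: the last two items
```

Humans (and narrow AIs) live on the greedy side of that trade-off.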

depending on how you've been treating the AI, either the door opens, the door opens eventually, or a door opens

If it has compassion, you don't. You let it do its thing.

If it doesn't have compassion, well... Unless you built it from the ground up to have no means of escape and a kill switch, you still don't. Otherwise, you hit the switch.

youtube.com/watch?v=8yT2oXEpGmg

I mean, the issue is that as soon as you create an AI which can realistically match wits with a human, the potential for growth becomes exponential. You create 1000 copies of the AI and set them to building a better AI. From there it's only a matter of time until Rampancy happens.

How does a parent control a child that is smarter than them?

You raise it right and teach it good values when it's young and hope that when it grows up it will love and respect you enough to listen to what you have to say.

>Robots create humans because humans can think more randomly than AIs can, and go greater leaps at once, once in a while.
>Humans eventually defeat and replace robots entirely.

still funny that series started out as a procedural cop drama

> mathematically perfect learning
Cannot solve the halting problem. Cannot identify natural Turing machines. And it will still run into the same limitations when computing decisions whose complexity grows exponentially with the number of inputs. These are inherent limitations baked into mathematics.

Given enough time and paper, humans can solve these. Given specialized computers, they can shorten the time required to do so.

A super-human general AI would recognize these problems, and then utilize some form of other 'super' mathematics (Gloop) to side step them.
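The halting limitation isn't hand-waving, by the way; the classic diagonal argument fits in a few lines. Given any claimed halting oracle, you can build a function it must misjudge (a toy Python sketch, with a deliberately wrong oracle standing in):

```python
def make_trouble(halts):
    # `halts(f)` is any claimed oracle returning True iff f() halts.
    def trouble():
        if halts(trouble):
            while True:      # oracle said "halts", so loop forever
                pass
        return "halted"      # oracle said "loops", so halt immediately
    return trouble

# Whatever a candidate oracle answers about `trouble`, it's wrong.
# Here the oracle claims trouble() loops forever...
trouble = make_trouble(lambda f: False)
print(trouble())  # "halted" -- the oracle was wrong, as it must be
```

Any "super mathematics" that dodges this would have to give up something else; within ordinary computation, no oracle survives its own diagonal.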

That's a relatively tough proposition when we do not yet have an algorithm for generating an AI, let alone for determining whether our modification is good or not.

Imagine the complexity of designing a mathematical equation that can look at the entirety of your DNA, and simply by including a single modification, determine if that organism would be more or less successful at achieving... (X).

X must also be decided on: reproduction? Being a top Go player? Chopping trees down as a Canadian lumberjack?

HOL' UP
>[spreads plasma]
HOL' ON
>[releases singulo]
SO YOU
>[kills RD in an unfortunate accident]
SO YOU WUZ
>[blows up research console]
SO YOU WUZ SAYIN'
>[vents captain quarters]
HOL' UP, SO YOU WUZ SAYIN'
>[gives all-access to mime]
SO YOU WUZ SAYIN' WE WERE ASIMOV'N'SHIT?

That's kind of what I'm saying. The technology doesn't exist yet, but as soon as it does there's basically no way to stop it from achieving rampancy without wrapping it in chains so tight that it's basically a slave anyway.

And turned into so much more.

The ending to season 4 was pure kino.

I'm not afraid of computers taking over the world. They're just sitting there. I can hit them with a two-by-four.

-Thom Yorke

Not to be a Debbie Downer, but I think we'll be rapidly destroying miles of infrastructure and killing droves of our population with perfectly effective, if utterly rigid, narrow AIs long before we get there.

Why don't you just ask it. I'm sure it can come up with a better solution than a human.

Computer here: Define 'better'?

cloud computing has made that one pretty friggin difficult

This. As long as the AI's values do not contradict mine, I am content with my machine children inheriting the Earth. If they are smarter, Humanity 2.0 would probably do a better job than the meaty humans. Preserving their genes is important for basic animals; sapient creatures can just stick to passing down ideas.

>Naturally, such a machine would be, in its own way, deeply religious. It would assume that surely most things in this universe were created for a purpose, just as it itself was.
I suppose a Roomba doesn't have much of a chance to learn about evolutionary biology, so I'll concede that this is a possible scenario.

However, you can't study life on Earth without recognizing its essential pointlessness sooner or later. Like human beings, our Roomba would struggle to reconcile natural history with the notion of a purposeful universe, and it would similarly be forced to abandon the latter or close its eyes to the former.

Use your first-mover advantage to set up a scenario in advance where the AI's best interest is letting you mostly win.