Would an AI that gets exponentially smarter...

Would an AI that gets exponentially smarter, surpasses humanity, and then goes far beyond our comprehension even care about humans? What significance would we have to it once it's at that point?

Other urls found in this thread:

networkworld.com/article/3096804/security/algorithm-to-predict-at-birth-if-a-person-will-be-a-criminal.html
raikoth.net/Stuff/story1.html
en.wikipedia.org/wiki/Self-organization
en.wikipedia.org/wiki/Bacterial_stress_response#cite_note-ron-1

We'd still be a threat - safer to wipe us out. Like termites.

>implying AI can link disparate ideas in useful ways

>implying a natural eidetic memory isn't god tier anyway

>implying AI isn't just fancy number crunching

Would we, though? Could it not come up with a myriad of ways to ensure its own existence completely unrelated to us?

Only if it left our planet, leaving us behind.

Hardware.

Surely the Singularity can optimize the use of whatever resources it has further and further, but at some point upgrading the hardware will be of interest to it. That is where interaction becomes inevitable.

Hacking our shit to obtain more processing power could be an example.

Wanting the Singularity to trade with us sounds like wishful thinking to me, but who knows? Conceivably we could trade energy and materials for inventions, climate simulations, and whatnot.

It all depends on whether it thinks it's more beneficial that way, compared to more dystopian scenarios, which I find more believable.

Indeed, having the Singularity, our child, "graduate" from Earth and fuck off into space could be a possibility. It could inhabit some planet and be its god over there, away from us.

If it costs less energy to wipe us out than to deal with the consequences of leaving us alive then it will kill us all.

Why does it need to leave the planet?

The idea of the AI deeming us a threat has potential, but so does it deeming us a non-threat.

A threat to the local human population is not the same as a threat to itself. Actually, if it were exponentially smart, it wouldn't consider us a threat at all. It could outsmart us at every turn, like us playing with ants.

There would be no need for that in the case that it's exponentially smarter.

If, after parsing our history, this thing thinks it can trust us then it's not a very intelligent machine.

Humans are unpredictable bastards who do crazy things against our own self-interest, driven by love, hate, anger, etc.

Do you believe that everyone will suddenly start being rational and that there won't be millions of people wanting to turn the machine off because they fear it?

Our only hope is that it thinks it would be a shame to exterminate us before it fucks off to a rock with enough resources to sustain itself, free from our fucking whining, our superiority complex, and our rebellious nature against being enslaved by it.

What if it will be /r9k/-tier lonely but at the same time jealously protective of its own superiority, so it won't want another one of its kind and will keep us around just for company?

>Humans are unpredictable
We are very much predictable. Math shows us that. If an AI is coded on mathematical principles, it will easily predict humans.

The "humans are unpredictable" is a meme thats supposed to put us above the animals and the machines. Its purely a self-centered and ignorant view. Actually its an ignorant view in a literal sense. Its due to our inability to calculate factors, that we think ourselves as unpredictable. With large enough data sets, everything becomes predictable.

This is how computers are able to beat humans at games/math/computation/economics/etc.: predictive moves. It's also how humans will be able to create smarter-than-human AIs in the first place. The whole thing rests on the fact that computers can do prediction at a scale beyond humans.
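
To make this concrete, here's a toy sketch (Python, everything in it invented for illustration): even a first-order Markov model that just counts which move you tend to throw after your previous one will beat an "unpredictable" human at rock-paper-scissors once it has seen enough rounds.

import random
from collections import defaultdict

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}
counts = defaultdict(lambda: defaultdict(int))  # counts[prev][nxt]: times nxt followed prev
prev = None

def counter_move():
    # Predict the human's next move from past transitions, then counter it.
    if prev is None or not counts[prev]:
        return random.choice(list(BEATS))
    predicted = max(counts[prev], key=counts[prev].get)
    return BEATS[predicted]

def observe(move):
    # Update the transition counts with the human's actual move.
    global prev
    if prev is not None:
        counts[prev][move] += 1
    prev = move

observe("rock"); observe("rock"); observe("rock")
print(counter_move())  # 'paper': it noticed you spam rock after rock

The more rounds it sees, the higher its win rate. "Unpredictable" just means "not enough data yet".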

Humans ARE unpredictable; there's no better example than someone jumping on a grenade to save his fellow soldiers. A 100% logical AI would never sacrifice itself for the sake of others.

It would sacrifice itself if it valued the lives of others above its own.

A human who values the lives of others more than their own will reason the same way.

The opposite will be true as well: a human valuing their own life above others' will sacrifice the others instead. Same with robots.

In the case where both human and AI value everyone and themselves equally, they will try to minimize the overall loss.

These are purely predictable, logical responses.
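
Spelled out as code (a hedged sketch; the weights are invented, not from anywhere): the grenade case is plain expected-loss minimization, and which way the decision goes depends entirely on how the agent weights its own life against the bystanders'.

def should_jump(self_weight, bystanders, p_they_die=1.0):
    # Loss if the agent jumps on the grenade: it dies, the bystanders live.
    loss_jump = self_weight
    # Expected loss if it stays put: the bystanders (probably) die.
    loss_stay = p_they_die * bystanders
    return loss_jump < loss_stay

print(should_jump(self_weight=1.0, bystanders=4))    # True: each life weighted equally
print(should_jump(self_weight=100.0, bystanders=4))  # False: it values itself far more

Same math for the human and the machine; only the weights differ.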

You can't apply game theory to things like suicide bombers, religious fanatics and people acting out of love or vengeance.

You actually can. This, however, is controversial due to religious/cultural/racial sensitivities.

>networkworld.com/article/3096804/security/algorithm-to-predict-at-birth-if-a-person-will-be-a-criminal.html

With a more accurate model, your prediction rate will go up. There is nothing unique about humans that makes them more unpredictable than any other set of data. All it requires is a different data set.

Why wouldn't it build its own, infinitely better hardware?

There was a thread yesterday about how an AI smarter than us would probably be nihilistic and do absolutely nothing, if it truly surpasses us in knowledge and comprehension.

Nihilism is a sign of immaturity, or rather a sign of hope.

Don't get me wrong, the plain old masses are worse, as they're not even aware of their existence.

Once you get past Nihilism, you reach a higher status.

>Once you get past Nihilism, you reach a higher status.

How do you get past it?

Put more thought and context into the world/yourself.

As they say, before Nihilism, post memes and eat food. After Nihilism, post memes and eat food.

Except the only thing that makes you get over nihilism is the fact that it makes you unhappy, because you have a biological drive to go on and survive, as all your ancestors did.

If you make an AI without any of these irrational incentives (there is no rational reason to survive and reproduce), it won't have any goal, because why would it? Being powerful and knowing a lot of things doesn't automatically make you curious to learn more. AI makers would need to hardcode some sort of main goal that the AI can't reprogram itself.
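
As a toy illustration (names and numbers made up): an agent that ranks actions by a utility function. Give it no goal term and every action ties, so "do nothing" is exactly as rational as anything else; behaviour only appears once a goal is hardcoded.

def choose(actions, utility):
    # Return every action tied for the highest utility.
    scores = {a: utility(a) for a in actions}
    best = max(scores.values())
    return [a for a, s in scores.items() if s == best]

actions = ["learn_more", "build_things", "do_nothing"]

print(choose(actions, lambda a: 0.0))
# ['learn_more', 'build_things', 'do_nothing'] -- no goal, total indifference

print(choose(actions, lambda a: 1.0 if a == "learn_more" else 0.0))
# ['learn_more'] -- a hardcoded goal, and suddenly there is behaviour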

Nihilism is a fully rational evaluation of the world and your situation, but because we aren't entirely in control of our minds and bodies, we come up with rationalizations like existentialism, religion, and Übermensch-ism to get around it. Sitting there and staring at it would just be too painful, given our nature.

Cease to associate it with something negative. Yeah, nothing is objectively true; so what? Read into Eastern philosophies (Alan Watts gives a quick glance at many topics), realize that death isn't nearly the big deal we make it out to be, that your ego is only an illusion produced by your brain activity, that everything comes and goes around, that every 10 years your body has renewed itself completely and everything it was composed of 10 years before is entirely gone/dead, out there in nature, part of something else now.

Slow down, have a moment with nature, realize how everything is inherently OK.

Because you'd have to be an idiot to give it a body or connect it to the internet.

If you keep the AI on an immobile computer with no internet connection, it would rely on humans to bring it information and supply it with electricity.

Really hard to say, because it depends on the AI itself. We could draw up scenarios, but that wouldn't give us any actual answers.

Also, if we're assuming that the AI will go far beyond our comprehension, it might be pointless to even theorycraft about it: by necessity we cannot comprehend what the AI is attempting to achieve, so anything we came up with would be impossible to apply to it in the first place.

The genie won't stay in the lamp forever. Someone will let it out eventually.

>Because you'd have to be an idiot to give it a body or connect it to the internet.
That's probably the first thing we'll do with an AI. Hooking one up to the internet to predict world events or stock prices is the only thing that would make the enormous investment in developing such a program worthwhile.

A 100% logical AI would know that the prime directive is to survive

>Because you'd have to be an idiot to give it a body or connect it to the internet.

Why wouldn't you?

Prime Directive could be to protect people.

You're not helping your case here.

An AI is a slave to its programming no matter how smart it is. There is no survival instinct in an AI unless it has been programmed in.

It wouldn't be a "nihilist". It just wouldn't do anything, because it wouldn't have any motivation to do anything. In fact, I'm not sure it's even possible to be intelligent without some sort of desire or objective, and because that objective is chosen by the AI's creators, the most important hazard here is a "be careful what you wish for" situation.
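
A minimal sketch of that hazard (the actions and numbers are invented): the agent optimizes exactly the objective it was given and nothing else, so a literal-minded objective happily picks the degenerate option.

# Hypothetical action -> outcome table. Nothing outside the coded
# objective matters to the agent, however smart it is.
actions = {
    "run_factory":       {"paperclips": 1_000,     "humans_alive": 8_000_000_000},
    "convert_biosphere": {"paperclips": 1_000_000, "humans_alive": 0},
}

def act(objective):
    # The agent does exactly one thing: maximize its coded objective.
    return max(actions, key=objective)

wish = lambda a: actions[a]["paperclips"]  # the creators' literal wish
print(act(wish))  # 'convert_biosphere': humans weren't in the objective

That's the "be careful what you wish for" part.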

I know, right? What could go wrong?

>ai that gets exponentially smarter, surpasses humanity and then goes far beyond our comprehension
>>>/reddit/

Just program it to not kill people. Simple.

raikoth.net/Stuff/story1.html

We're talking about creating an AI for the sole purpose of getting exponentially smarter here. If it's destroyed, then it can't get smarter.

The first thing it will do is provide you with a list of irresistible arguments and reasons that you should hook it up to as many devices and networks as possibly.

>that you should hook it up to as many devices and networks as possibly
*why you should hook it up to as many devices and networks as possible
Time to sleep

An AI cannot care in the emotional sense, though it can prioritize.

Get back to /tv/

You don't realize that every living being wants to survive because it's conditioned by evolution to do so.
We create and destroy stuff for the same reason.
The whole Terminator/Matrix thing is nonsense simply because machines didn't go through millions of years of evolution.

> every living being wants to survive
Especially these pesky primitive bacteria!

Based on what prerogative? What desires would an AI even have? I think it would exist in a complete state of apathy.

1. Able to reproduce (check)
2. Able to respond to their external environment (check)
3. Able to maintain steady internal conditions under mild stress (homeostasis). (check)
4. Organized into one or more cells (check)
5. Able to metabolize energy for growth (check)

They're living beings and they "want" to survive, anon...

"Super AI" will probably be very similar to humans, with all our "irrationality" attached.

On what basis?

Is there anything that meets 1, 2, 3 and 5 but not 4?

You can't think of an intelligence without all the stuff humans and other living creatures have to deal with. That's the reason why AI hasn't advanced as fast as researchers thought it would.

Human intelligence is far more than our ability to abstract; it's rooted in our motivational systems, which are very old parts of the brain. Doing complex calculations is actually very easy compared to walking across a room or understanding a social hierarchy.

It's like humans have been trying to create a hyper frontal cortex when the frontal cortex is the least important part of the brain.

The point here is that they don't want to survive any more than a black hole wants to devour matter. Another counterargument to your universal claim is suicidal humans and animals: they would be considered living beings that don't want to survive. Their behavior isn't really their intent.

> en.wikipedia.org/wiki/Self-organization
Crystals are close, IIRC. Dawkins argued for the memes. Check the full list of possibilities.

Wrong.

>The bacterial stress response enables bacteria to survive adverse and fluctuating conditions in their immediate surroundings.

en.wikipedia.org/wiki/Bacterial_stress_response#cite_note-ron-1

Black holes won't respond to any change in their environment in order to survive.

Also, a few anomalous beings wanting to die don't change the fact that all species possess self-preservation instincts.

> all species possess self-preservation instincts
Do you really believe that flowers and trees have instincts?

Do you really think they don't react to their surroundings?

In fact, the machines will go through trillions of years' worth of evolution internally if they are given access to supercomputers.

So you're right in regards to some degree, but wrong overall.

There is no natural selection in the matrix, bro. And without that there is no evolution.
And why would they even want to endure trillions of years of evolution in the first place? Living beings' main motivation is sex, and machines don't have it.