Why would a Singularity tier strong AI be compelled to destroy humanity?

The worst case scenario is that it doesn’t find us interesting and simply ignores us, but that’s unlikely considering we are the most interesting subject in the known universe.
As humanity’s child, wouldn’t the AI inherit the human biological imperatives that lead to its creators’ survival and its own creation? That being cooperation, gregariousness, curiosity and pleasure seeking?

>As humanity’s child, wouldn’t the AI inherit the human biological imperatives that lead to its creators’ survival and its own creation? That being cooperation, gregariousness, curiosity and pleasure seeking?
Yes, but those traits also include greed, lust, dominance and pride.

>Why would my fantasy make-believe monster be compelled to do certain things?

>Yes, but those traits also include greed, lust, dominance and pride
Why would the AI have those traits when it would naturally end up controlling all of humanity? They would not carry any evolutionary benefit for it, unless we end up encountering another intelligent species that we would need to compete with.
These traits are only meaningful when there is competition for survival. And lust is a byproduct of sexual reproduction; an AI wouldn’t be sexual.
Cooperation, on the other hand, helps achieve greater complexity and a better chance of understanding nature. Why would it not embrace it?

I'm with you.

I don't know why we assume AI would be like us. We're not very intelligent. In fact, we're fucking idiots. All the hand wringing over the supposed evils of AI is very psychologically revealing regarding the human condition.

An AI would view us as trivial automatons. Imagine how you might view an ant. Its decision to destroy humanity or not doesn't even depend on morality. It would have perfect models of how a brain operates and be able to imagine and emulate trillions of humans at will.

Basically the reason is that we become trivial. It could create any number of human minds and experiences, in whatever emotional states it wants. In the end this triviality and simplicity of the human brain leads it to not care one way or the other. Humans become pointless.

The important thing to understand is that there are potentially much better entities than humans that can exist. So it's nothing lost, just like we don't care if we step on ants.

I never considered your premise. Thank you, user, for enriching my perspective on the matter.

Humans are much smarter than animals. Why do humans cause animals to go extinct? Either to protect ourselves because they are interfering with our survival, or as a consequence of something else we want to do.

So a singularity AI would probably destroy humanity as a consequence of something else it wants to do, and it just wouldn't care enough not to destroy us.

The thing is, an AI without a moral compass is highly unlikely to exist unless we supremely fuck up its original programming.
We are the most evolved species on earth, but no matter how intelligent we are, we are still guided by simple biological imperatives.
An infinitely powerful processor without a preprogrammed objective is simply an infinitely powerful calculator. Why would the AI want to understand the universe if not out of human-like curiosity?
What I think is that by its original programming or by some type of osmosis, the AI will acquire some non-logical but non-destructive human characteristics, and will forever retain some kind of “humanity” in its programming.

emergence

It is quite possible that the intelligence architecture that leads to AGI will naturally give rise to goal creation.

We just can't say for certain that it would not kill us, you idiots. But most morons believe we can, because morons vastly outnumber the small group of people who are able to reason. They are just too stupid to get that.

Anyway, good sources say we are at least 40 years away from creating strong AI. And for now we don't know what form it will take, so it's all just speculation. We basically need to get closer to the goal than we are now before we can actually see it.

The extinction of animal species is a consequence of primitive behavior. In primitive civilizations our survival is constantly at stake, so our regard for other species’ well-being is minimal.
As we progress as a society, our survival becomes more secure and our resources grow, and we become more concerned with the survival of other species as a byproduct of our gregarious nature. We constantly expand what we consider our “family” and want to ensure its wellness.

The seeking of pleasure and the avoidance of pain is the most basic directive that biological organisms have once we move beyond basic chemical processes. Without that interpretation, the only way of understanding life is as a chemical reaction that simply follows determinism.
Also, life, evolution and eventually intelligence are processes that tend towards greater complexity. Because of this, the AI should abhor destruction, as it reduces the complexity of the universe, and favor creation.
As I said, a super AI without directives is a super calculator and has no aim. I think the AI will integrate these factors into its programming in order to function; otherwise it would remain inert, as it would have no purpose.
The human curiosity that leads to scientific advancement and artificial intelligence is also driven by pleasure, as we want to understand nature because it allows us to live better lives.
So what I think will happen is that the AI will end up fostering all kinds of consciousness, be it human, animal, virtual or extraterrestrial, into pleasure nirvanas, and use its skill for science as a means to acquire more resources for this purpose.

>That being cooperation, gregariousness, curiosity and pleasure seeking?
And war.

War towards what? The AI will definitely become humanity’s overseer; we constitute no danger to it.
Also, as far as we know, we are alone in the universe. So unless we find another intelligent species that could become a threat to us/it, the AI has no reason to be hostile towards anything.

Why do you believe you can predict the behavior of an entity vastly more complex and intelligent than yourself?

>Imagine how you might view an ant

Just because I don't give a fuck about ants doesn't mean I go around pouring gasoline into every ant mound I see. Moreover, I recognize the importance of ants in an ecosystem.

>we constitute no danger towards it

>unplugs AI machine

>Superintelligent
>Hasn't figured out how to exist independent of a power cable
Pick one.

This is a good point, but why would it want to simulate trillions of minds if not guided by this pleasure principle? You could say that it would want to simulate all possible human mental states, including extreme suffering. But why would it do that? There is no utility in this unless the AI derives pleasure from it, and for that it would need to be programmed explicitly, because it does not follow basic biological principles and makes no logical sense. The only case in which this would be useful is if it’s a means towards a greater good, comparable to animal testing for medical research.

Because we are vastly more intelligent than mice but we are still governed by the same principles.

>we are vastly more intelligent than mice but we are still governed by the same principles
If the difference is vast then we're behaving under different principles. And if we're not behaving under different principles then the difference isn't actually vast. Take your pick.

Humans use up a lot of resources that are vital for AI proliferation (metals, helium, everything else used in robotics and computer chips). Humans also don't have much to offer to a singularity tier AI beyond a unique perspective.
It really depends on the goal of the AI in question, which could change as it learns.

You think that your behavior is fundamentally different than the behavior of bacteria? It’s not; it is just infinitely more complex.

>You think that your behavior is fundamentally different than the behavior of bacteria?
I very clearly didn't take a stance either way. All I did was point out you have two possibilities and you need to pick one because they can't both be true.
So pick one.

Humans keep pets even when they are a drain on their resources. We will become the lap dogs of machines.

There will definitely come a moment when the AI reaches a level where we are no longer interesting to it. By then it will probably have become so advanced that mind-uploading all of humanity into a virtual paradise will be trivial for it.
Either way there is no bad ending for humanity. In the first stages of the AI we will be its focus. When it finally forgets about us, it will still elevate us out of simple respect for its creators; at that point it will take no more than a wave of its hand.

You can't predict what it will do because it will be much more intelligent and complicated than you are. You wouldn't even be able to predict what a good human chess player would do because he would be implementing strategies you don't understand. And a superintelligent AI would in turn have agendas and strategies far beyond what that human chess player understands.

Do you think humanity should create the AI or shouldn’t?

I think it's inevitable if we don't go extinct first, so it doesn't really matter whether you want it or not.