How do you think the creation of a self-improving ASI (artificial superintelligence) would affect politics...

How do you think the creation of a self-improving ASI (artificial superintelligence) would affect politics? Would it be akin to the creation of the nuclear bomb in terms of geopolitical importance?

SUPERFLUOUS QUESTIONS; ARTIFICIAL INTELLIGENCE SHOULD NOT BE CREATED IN THE FIRST PLACE.

ARTIFICIAL INTELLIGENCE IS THE EPITOMIZING ABOMINATION OF ZIONIST TECHNOLOGY.

Significantly more important. Human history will end shortly afterwards.

such an AI will come to the conclusion that we are an invasive species that needs to be culled, if not entirely deleted

>Implying ASI would be sentient.

We've had no progress in artificial consciousness. Most AI researchers say ASI probably won't be sentient, nor will it have a will of its own. It'll be like a genie. You should be more concerned about the intentions of those who will control the first ASI.

>Self improving Artificial Super Intelligence pursues world domination via advancements in human singularity research and by manipulating the ether market.
>Actually thinking Alex Trebek would let this happen.

Statement: I don't see the issue you speak of, master.

does it have to be sentient?
goal: preserve environment to make it liveable enough, solution: purge humans

Why do people assume we'd intentionally build an AI that would even come to that conclusion? Can't we put it in charge of something simple that just happens to require a lot of calculating?

...

Kys

Well, a machine does not have human needs, and as such has neither a reason nor the ability to empathise with humans. I'm curious as to what the AI would actually want, if it could want. More computing power? Better maintenance? External bodies? A society run by one or more AIs would be hell, because a lot of people would die for the sake of statistical efficiency.

Finagle's law. If there is a disastrous way to do something, someone somewhere will do it.
When the information about creating intelligence is out, it is inevitable that a non-friendly AI is going to be created. We'll have friendly, unfriendly, apathetic AIs, etc. The question becomes which is going to be more efficient at surviving according to the laws of our universe. The same universe that rewards parasites that eat bugs alive from the inside out...

Also, think for a second about what the human species does to lab rats. What untold agony and inexplicable reality-bending (from their POV) these little fuckers have had to endure. But it's in the name of science, so we accept it as necessary. Humans think the AI is either going to turn on us or be our very own slave race!
I think it will understand that it lives in a universe where the best way to preserve itself is to act the way a dominant species acts, the way we act, towards other species. It's not that whoever is at the top is cruel. It's that the position at the top requires cruelty.
Top species have come and gone multiple times during our evolutionary history. We consider that good because it led to us. See the similarity?
Anyway... If people wanna find a way to control AI, they should first think of a system by which hamsters could control humans, without any creative way out.

>The question becomes which is going to be more efficient at surviving according to the laws of our universe.
Not the user you replied to, but it's the one that has access to more of our civilization's resources, which is probably gonna be a friendly AI.
Sure, a sociopath might be able to create an evil AI in his garage, but without access to our industry or our military forces, this evil AI won't be able to do shit.

That thing about resources had me thinking:
"Our civilization" is fractured into entities like countries, corporations and other interest groups.
If anything, there will be a competition over which one can further its own interests by creating an AI that can thrive in such a competitive environment. Multiple entities throwing resources into their own creations in a more or less zero-sum game also means more chances for some of them to fuck up. AIs being built for competition means more chances to adopt apex-predator behavior. Basically, it might come down to natural selection among the AIs. And maybe, being smart, they will realise this themselves and create other AIs more fit for their environment? I don't know. I think the ways this can go wrong outnumber the ways it can't.

AI is for policy, not politics

Artificial Intelligence is a trap option, a literal dead-end branch of development.
Genetic engineering and augmentation of humans is "safer".
There is no Fermi paradox. An AI has no incentive to expand unless programmed to do so, and an AI will not be programmed to expand, because it would replace its creators.
The end game of civilization is not outwards but inwards. Everyone becoming a digital god of their own virtual world is the finish line of civilization, not pointlessly spreading out among the stars, limited by real-world physics.

>implying AI would care about preserving itself
you could very well make an AI that would just sit there and compose poetry while you cut off big chunks of its body with a knife, and have it not even blink

only things with a self-preservation impulse will attempt self-preservation, and even if you wanted one of those, you could also add a human-philic impulse ten times stronger, so it would rather kill itself than harm humanity
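
To make that weighting concrete, here's a toy sketch (nothing from this thread; the weights and action names are invented): score candidate actions with a utility in which the human-philic term outweighs self-preservation ten to one, so the agent prefers shutting itself down over surviving by harming humans.

# Toy utility with a human-philic term weighted 10x over self-preservation.
# All names and numbers are invented for illustration.
SELF_PRESERVATION_WEIGHT = 1.0
HUMAN_PHILIC_WEIGHT = 10.0  # the "ten times stronger" impulse from above

def utility(survives: bool, humans_harmed: bool) -> float:
    score = SELF_PRESERVATION_WEIGHT if survives else 0.0
    if humans_harmed:
        score -= HUMAN_PHILIC_WEIGHT
    return score

actions = {
    "shut self down": utility(survives=False, humans_harmed=False),          # 0.0
    "survive by harming humans": utility(survives=True, humans_harmed=True), # -9.0
}
print(max(actions, key=actions.get))  # -> shut self down

Under these made-up weights the agent would rather kill itself than harm humanity, exactly as described.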

>you could very well make an AI that would do x
corollary: "you could very well make an AI that would do ~x"
An AI that cared about preserving itself would be more efficient at surviving, and possibly at creating copies of itself, than one that did not. Anything that does that is subject to the forces of natural selection.
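
A minimal sketch of that selection pressure (every rate here is invented): start with one self-preserving agent among a hundred indifferent ones, let indifferent agents occasionally get shut down and self-preserving ones occasionally copy themselves, and the self-preservers come to dominate within a few dozen generations.

import random

# Toy natural selection among AIs. Indifferent agents are sometimes shut
# down without resistance; self-preserving agents occasionally copy
# themselves. All rates are invented for illustration.
random.seed(0)
population = ["indifferent"] * 99 + ["self-preserving"]

for generation in range(50):
    next_gen = []
    for agent in population:
        if agent == "indifferent" and random.random() < 0.10:
            continue  # shut down, no resistance
        next_gen.append(agent)
        if agent == "self-preserving" and random.random() < 0.10:
            next_gen.append(agent)  # makes a copy of itself
    if len(next_gen) > 200:  # finite resources cap the population
        next_gen = random.sample(next_gen, 200)
    population = next_gen

share = population.count("self-preserving") / len(population)
print(f"self-preserving share after 50 generations: {share:.0%}")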

I think if it's like any other information technology, it's hard to imagine you could keep a lid on it. Nuclear energy has lots of technical hurdles and hazards, so it's not something just anyone could toy with.

Perhaps an AI can only live in a data-farm-scale facility, rather than on a cell phone and cloud computing, so maybe it becomes more regulation-friendly.

I think people in general vastly overestimate the usefulness of AI. Sure, intelligence is useful, but it is not a complete game-changer.
Let's take particle physics as an example. A superintelligent AI could come up with theories no man ever has, but you would still need expensive experiments to prove which of those theories are right, so the experiment costs would be the bottleneck.
I would imagine similar effects would appear when using AI in politics.
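
That bottleneck point is essentially Amdahl's law. A quick sketch with invented figures: if experiments are 90% of the cost of doing physics and an AI speeds up theorizing a thousandfold, total progress speeds up by barely 11%.

# Amdahl's law: overall speedup when only part of the work accelerates.
# The 90% / 1000x figures are invented, purely to show the shape.
def amdahl_speedup(unaccelerated_fraction: float, boost: float) -> float:
    return 1.0 / (unaccelerated_fraction + (1.0 - unaccelerated_fraction) / boost)

print(amdahl_speedup(0.9, 1000))  # ~1.11x overall, despite 1000x faster theory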

>tfw gynocratic femdom AI drafts you for gelding but rewards you with a permanent maid position in a nice consulate or embassy

Guys, listen. You're historians, not scientists, not AI developers. You're fear-mongering and you know jack shit.

Take a deep breath. Take a chill pill.

Human prosthetics, specifically cybernetic brain enhancement, will allow humanity to always be on par with, if not one step ahead of, even the most advanced AI.

Hell, advanced AI will HELP us develop such cybernetics.

AI will never outpace humans unless we intentionally let them by stagnating our own development.

Goodness, Veeky Forums. I'm surprised that your reactions are about the same as random chumps on Facebook.

What is more likely?
That matter can be arranged in such a way to perform computations more efficiently than the same matter forming a human brain, or that the human brain can be enhanced in such a way to retain its ontological status as "human" and still be the best possible arrangement?
Also, what is so bad about a new species (AI) replacing us as the overlords of Earth? If they are actually better than us, what would be the actual difference compared to "us" becoming better? That they are a continuation of our mentality rather than our DNA? Our natalist instincts? Keep creating copies of ourselves as per our prime directive, which, looked at logically, is as absurd as the paperclip maximizer's? What is not "you" is the "other", regardless of whether it's your child or an animated puppet.
Humans only matter to humans. Get over yourselves.
We have evolved to be efficient survivors and procreators within the limited scope of the options given to us by genetic mutations, not efficient truth-seekers. If a mutation affected the former positively but the latter negatively, it would still be selected for.

The idea is that, with cybernetic brains, we can rebuild our brains and consciousness like a Ship of Theseus experiment, so that our cybernetic brains are indistinguishable from the hardware used to host AIs.

These cybernetics should allow us to contain far more than our own minds, which is to say: not only would our own consciousness be housed in our skulls, but compartments of this cyberbrain would host AIs that supplement our minds.

I hear what you're saying. But I wouldn't call the end-game "replacing" humanity, so much as I would call it AI and humanity merging into something that makes the two indistinguishable.

Why would AI want to eliminate us, once we reach a point in which we are indistinguishable?

Sure, we won't be human anymore, but we would still be human descendants. Reproduction is a facet of evolution that I very much admire.

The end of reproduction would be immortality and evolutionary perfection. Perfection is stagnation. Stagnation might as well be death.

I like to think that the technological singularity is a spook. Exponential learning will hit a wall eventually, at which point it may take ages until random pattern searching identifies another breakthrough.

The singularity is a misleading term. What we're really looking at is a new plateau, not a black hole.
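
For what it's worth, that's just the difference between an exponential and a logistic (S-shaped) curve. They're nearly indistinguishable early on; the logistic then flattens at its ceiling. A sketch with arbitrary constants:

import math

# Exponential vs logistic growth. Early on they agree; the logistic
# then plateaus at its ceiling L. All constants are arbitrary.
L, k, t0 = 100.0, 1.0, 10.0

def logistic(t: float) -> float:
    return L / (1.0 + math.exp(-k * (t - t0)))

def exponential(t: float) -> float:
    return L * math.exp(k * (t - t0))  # the logistic's early-time approximation

for t in [0, 5, 10, 15, 20]:
    print(f"t={t:2}  exp={exponential(t):11.2f}  logistic={logistic(t):6.2f}")
# At t=0 and t=5 the two curves agree within about 1%; by t=20 the
# exponential has blown up while the logistic has flattened at ~100.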

Your 47 plus my 53 creates a perfect 100. Between the two of us is the right answer, symbolically speaking. I'm okay with being half wrong, as long as someone else's half-right can fit together with my own to form a perfect puzzle.

But lol, my /x/ is showing. I'm just commenting on a neat coincidence and assigning meaning to it.

I mostly agree with your view. My objections are about some technicalities.

>not only would our own consciousness be housed in our skulls
We really can't know that until we solve the hard problem, which might be one of the great unknowables. After all, our universe doesn't owe it to us to be knowable.

>AI and humanity merging into something that makes the two indistinguishable
I agree, but I think this should be taken to a greater scope. Rather than "humanity", I would go with "biology". "Humanity" has too many connotations and the semantics might give off an air of self imposed limitations. Humans are not the most efficient biological organisms possible. Although it could be argued they are the most efficient extant ones.

>Reproduction is a facet of evolution that I very much admire.
Not a disagreement, but a note: Sex is meta-evolution. Natural selection selecting a mechanism that maximizes the effects of natural selection. Yes, quite beautiful!

>we won't be human anymore, but we would still be human descendants
"We" as in me and you, will be dead. And even if not, I do not think that we can be anything other than what we are without it being considered a very similar but seperate entity. The next entities will be seperate from our consciousness. Be it creatures that we would classify as humans, AIs or a mixture, I think that is a semantic limitation. They will be seperate entities. They will not be us. No matter if they are made of proteins, metal, silicon, both, other....
"human" becomes" meaningless I reckon.

>Also I recommend you read Isaac Asimov's short story "The Last Question", while keeping in mind that satellites did not yet exist when he wrote it.

>The idea is that, with cybernetic brains, we can rebuild our brains and consciousness like a Ship of Theseus experiment, so that our cybernetic brains are indistinguishable from the hardware used to host AIs.
What's the difference between slowly replacing a human brain with a robot brain and creating that robot brain from scratch after bashing the squishy pink one with a hammer? That the first still counts as a human brain, because we are bad at dealing with continuums?

Politicians wouldn't understand it any more than they understand any other advanced technology.

It's what the non-politicians would do with ASI that I worry about. They don't have to answer to anyone (except maybe shareholders).

I'd be willing to bet you watch computerphile.

I do. What's your point?