Is AI superintelligence safety actually an issue or just something people on LessWrong jerk themselves off to?

No. A* isn't magically going to become self-aware and kill you.

The problem is that it's the kind of thing where if we fuck up once, we're screwed.

How come bacteria evolved into self-aware people, then?

Can I mention the fact that it took billions of years to do so?

but A* can kill you without being self-aware

Yes.

The Roko's Basilisk shit is sci-fi tier, but unemployment is a real issue.

In a simulation you can speed things up.

Was unemployment an issue in slave-using civilizations?

have you seen Terminator?

because that's how you get terminators.

Yes. For example, the OpenAI bot running right now actually has the ability to hurt people. It might take some time, but eventually the genetic algorithm will randomly try a series of inputs that involves alt-tabbing and attempting to hack its opponent.

What's the next best thing after that? Well, Stuxnet has already proved that it's possible to cause real damage with just a virus.

Again, this may sound far-fetched, but it happening is actually mathematically inevitable.
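
To make the "mathematically inevitable" part concrete, here's a toy sketch (the action names and the target sequence are made up; this is nothing like the actual bot's setup): random search over a finite action space hits any fixed sequence with probability approaching 1 as trials grow.

import random

# Toy model: random search over a hypothetical finite action space.
ACTIONS = ["move", "attack", "alt_tab", "type_key"]  # hypothetical action set
TARGET = ["alt_tab", "type_key", "type_key"]         # hypothetical "bad" sequence

trials = 0
found = False
while not found:
    trials += 1
    sequence = [random.choice(ACTIONS) for _ in TARGET]
    found = (sequence == TARGET)

print(f"hit the target sequence after {trials} random trials")

# Each trial hits with probability (1/4)**3 = 1/64, so the chance of never
# hitting it in n trials is (63/64)**n, which goes to 0 as n grows.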

>the OpenAI bot running right now actually has the ability to hurt people
U fuckin wot

Yes, it's very dangerous. An AI developer could create an algorithm such as:

while True:        # hypothetical kill-bot main loop
    shoot(humans)

And it could have devastating consequences.

No, but the slaves were human. When the slaves become machines, what use is there for humans anymore? Our rulers will just let us starve to death.

Lol

>Fearing that matrix operations will steal your job

L M A O

Eliezer Yudkowsky is a crank.

Just read some of his earlier posts to realize the level of his delusion. E.g., claims that he is psychologically without self-interest; claims that he sees a path to creating an AI superintelligence beginning with his new programming language (which never materialized, despite almost two decades passing); claims that human biological immortality would not only be achieved in his lifetime, but before his grandparents were deceased; the projection (from the year 2000) that it would take him about 5 years of research to build a real AI. And of course, the "subtle" self-praise he laces into every single LW post. All of this in combination with the fact that he has never produced any public code whatsoever, nor published a technical article, despite being a "Senior Research Fellow" at his institute for his entire adult life, and that he has no formal education whatsoever.

The real answer to this question is that no amount of intelligence will make an NP-hard problem no longer NP-hard: superintelligence will not be able to find efficient solutions to a large class of problems. There will be no intelligence explosion.
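
For scale, here's a back-of-envelope sketch (the checks-per-second figure is an assumed round number): brute-forcing an n-variable SAT instance means 2**n assignment checks, and raw compute doesn't keep up with that.

# Assumed: a machine checking a trillion assignments per second.
CHECKS_PER_SECOND = 1e12
SECONDS_PER_YEAR = 3.15e7

for n in (30, 50, 100):
    years = 2**n / CHECKS_PER_SECOND / SECONDS_PER_YEAR
    print(f"n={n:3d}: ~{years:.1e} years of brute force")

# n=100 comes out around 4e10 years. (This bounds brute force only, not
# heuristics that happen to do fine on typical instances.)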

Re: That image.

Owls are actually fairly dimwitted: so much of their skull is taken up by large eyes and elaborate ear structures that there's little room left for brain.

t. falconer

>Our rulers will just let us starve to death
This is how you incite revolutions. No, if you live in a developed nation, you will have a basic income package available, on top of the safety nets already in place.

Can I mention the fact that biological evolution is driven by random mutations accumulating over many lifetimes, while machine evolution will be driven by intelligent improvements to previous designs iterating over the span of days? If a strong AI is made that can outperform humans at designing AI, we are going to hit the limit of what's physically possible, technology-wise, pretty fucking quick. A toy model of the compounding is sketched below.
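
Here's that sketch (every number in it is made up, purely to show the shape of the argument): each generation designs a slightly better successor, and the compounding hits a ceiling fast.

# Toy model of recursive self-improvement; all numbers are assumptions.
capability = 1.0           # start at human-designer level
PHYSICAL_LIMIT = 1e6       # assumed hard ceiling set by physics
GAIN_PER_GENERATION = 1.5  # assumed: each AI designs a 50%-better successor

generations = 0
while capability < PHYSICAL_LIMIT:
    capability *= GAIN_PER_GENERATION
    generations += 1

print(f"ceiling reached after {generations} generations")

# ~35 generations. At days per generation, that's months of wall-clock time,
# versus the billions of years that mutation and selection needed.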