How do you think our civilisation would react to the creation of Super-intelligent AI?

Many in the AI community are saying that the creation of super intelligent AI would have great repercussions on geopolitics than even the creation of the nuclear bomb. How does /his/ think the creation of superhuman AI would effect geopolitics?

*Affect
*Greater

WE MADE A SUPER-INTELLIGENT AI FUCK YEAH!

I dunno
but what I do know is that superintelligent AI is another euphoric science meme
computers work fundamentally differently from the human brain; they will never be more conscious than a toaster, or even have human-level intelligence

The black-budget shadow-government military etc. is 20-50 years ahead of anything you'll ever read about in surface tech.

I, for one, welcome our new computer overlords!

Have you read the Hyperion Cantos?

No

this is Veeky Forums, not Veeky Forums
kys fakkit

Who was that philosopher again? Nick Bostrom, I think. Like, the only guy besides that singularity guy (forgot his name) that ever talks about superintelligence.

>How does /his/ think the creation of superhuman AI would effect geopolitics?
How does OP think human extinction would effect geopolitics?

no i read war and peace five times instead

Boy, are people going to be in for a treat when they realize even a so-called super intelligent AI wouldn't know the answer to everything. It'll be disastrous. At least it'll make good philosophy.

Not really. AI and the possibility of artificial superintelligence in the near future have become a hot topic amongst ethicists, tech moguls and AI researchers ever since AlphaGo beat one of the greatest Go players last year, something many in the AI community said would take a decade or more to do. SoftBank's CEO recently announced that the company is investing $100bn to quicken the singularity.

techcrunch.com/2017/02/27/superintelligent-ai-explains-softbanks-push-to-raise-a-100bn-vision-fund/

Then you've got the big tech companies like Google, Microsoft, Baidu, Intel and Nvidia prioritizing AI software development. There have also been many acquisitions of machine learning companies recently, with DeepMind being the most high-profile case.

Unless you believe humans are special and that our ingenuity as a species is attributable to some higher being, then eventually we will crack the architecture of the brain and create intelligent machines that won't be limited by biology. Neuromorphic computing is becoming a big area of study, with more AI researchers and chip companies realising that the brain can be a template for smarter and more efficient AI computers.
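
If you want to see what the neuromorphic crowd is actually modelling, here is a toy leaky integrate-and-fire neuron in Python, the basic unit those chips bake into silicon instead of simulating in software. The constants and the steady input current below are made-up illustration values, not any real chip's parameters.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Return spike times for a sequence of input currents (Euler integration)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # membrane potential leaks back toward rest while integrating the input
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:  # threshold crossed: fire and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# a steady supra-threshold input produces a regular spike train
print(simulate_lif([1.5] * 200))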

How are AGI and ASI 2 decades apart? One would think once we make an AI with human-like intelligence, it wouldn't take much longer to significantly surpass human intelligence.

>If the war is lost, then it is of no concern to me if the people perish in it. I still would not shed a single tear for them, because they did not deserve any better.
-Adolf Hitler

If we are too weak to survive the AI onslaught, then so be it.

I suggest anyone interested in the subject read (((Yuval Harari's))) Homo Deus to understand the possible societal ramifications of superhuman AI, and Nick Bostrom's Superintelligence if you want to get into the technical aspect of it.

youtube.com/watch?v=JJ1yS9JIJKs

youtube.com/watch?v=efkH3jhaT70

>Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen.

youtube.com/watch?v=h0962biiZa4

It genuinely boggles my mind how many so-called geniuses can rush headlong into extinction. It's probably a manifestation of the same impetus that drives communism: "I'll be the commissar/uplifted transman, screw everyone else that gets sent to the gulag/bioreclamation matrix".

I wish every egghead techie had one neck.

If we're smart we'll go the Mass Effect route of banning fully sentient AIs but having to constantly worry about some terrorist making one.

You could say the same of the scientists who developed nuclear weapons in the Manhattan Project. They must have known that they might have doomed humanity when they first split the atom.

There's a difference between creating a bigger bomb, and deliberately creating an evolutionary successor to humanity. I mean sure creating a bomb of unprecedented lethality is pretty dumb, but creating AI is REALLY dumb.

How can we create artificial consciousnesses if we don't fully understand our own brains/what causes consciousness?

The same way we created fire for hundreds of thousands of years without fully understanding the physics behind it. Also, don't fall into the trap of believing that something has to be conscious in order to be indistinguishable from something that is conscious. The old philosophical zombie thought experiment, in other words.

>Tfw work in a job robots will not be able to compete in for the next 1000 years.

What's that, bricklaying?

You don't need to create artificial consciousness to create ASI

High priced chef or something?

Don't play your fucking games nigger

Just tell us your job

>Mfw when reading this thread and realizing that most of the fags on this board are religiofags who think that AI will lead to human extinction and that computers are bad

He's a NEET guys ffs

Consciousness isn't the issue here, though it's entirely naive to discount the strong possibility of artificial consciousness.

And computers have long had far greater intelligence than humans, only in specialized capacities. To ardently propose that current cognitive AI algorithms won't surpass our own neural ones is beyond silly.

There's just no way you can think the things you're saying

Yet you're still a complete dumbass, would you look at that

As do I!
As long as they kill all straight white males (starting with me xD)

we should not create it
we are humans, with feelings, being unreasonable, doing shit we shouldn't

an AI will come to the conclusion that we are destroying the environment and not using our minds, meaning we should be controlled if not exterminated

Thanks for the recommendation, I'm going to check that out.

To anyone that doubts the possibility of artificial consciousness at some point in the future, you should read "How the Body Shapes the Way We Think" by Rolf Pfeifer and Josh Bongard. It gives really great insight into how embodiment shapes human consciousness and that of other animals. They start off by talking about how each leg of an ant has a sort of "micro-brain" that doesn't speak directly to the ant's cerebral brain. Kind of interesting when you look at our CNS reflexes in a similar way.

Our biggest challenge in creating an artificial consciousness is that we don't have a solid understanding of our own consciousness, so how are we to replicate it or even know when we've been successful? And of course, should we even do it?

We won't have strong AI for centuries; current ML research is still statistics + convex optimization + domain knowledge for initial feature selection. If any expert active in AI talks about SUPER INTELLIGENT AI, it's either marketing talk or he needs funding.
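
If you have never touched it, here is roughly what that recipe looks like in practice, as a minimal scikit-learn sketch. The dataset and the number of features kept are arbitrary placeholder choices, and the SelectKBest step is just standing in for the hand-done, domain-knowledge feature selection.

from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),                   # statistics: normalise the features
    SelectKBest(f_classif, k=10),       # stand-in for domain-knowledge feature selection
    LogisticRegression(max_iter=1000),  # convex optimization of a linear model
)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))      # narrow, task-specific competence, nothing more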

I really like this topic so I want to add one more thought about embodiment.

Since we can say that our physical bodies deeply affect our consciousness and the environment clearly affects our bodies, I don't think a being can be fully conscious if it is not free to perceive its environment and interact with it.

We mostly judge the intelligence of other animals based on how well they perceive and move their bodies to affect their environment, since we can't really communicate with them. However, we evaluate AI based on what calculations it can solve and how well it can manipulate human languages. If that's all we evaluate machines on, I don't think the Turing test would even be that meaningful in proving artificial consciousness. For true consciousness to develop, AI would probably need physical limitations and I think that piece of the puzzle is often overlooked.

This. I'm currently studying AI, and while things like ML can certainly do some impressive stuff, it's all highly specialized to certain domains and applications. Not to mention that most heavy-duty ML algorithms, in their current state, are incredibly slow and clunky beasts.

The main reasons people are interested in AI nowadays are for things like intelligent algorithmic stock trading and natural language processing stuff.

So to answer OP, it's really hard to say, because I don't think we'll hit "super intelligent AI" any time soon, and if/when we do, society will likely have advanced far past where we are today, so it's not a question of how we'd react, but how a future society would react.

Yes, but human beings are not immune to error. Neither will these be.

So you're looking for an AI to complete reasoning for you, and not just for you but for humanity, and to approve of your beliefs, believing it'll lead to utopia. I'm sure the AI will like being cut short in its conclusions by human nature.

>if a computer can do all the algorithms it can think

Pure autisology

Kek, "YOU'LL NEVER TAKE MY UNEMPLOYMENT, TOASTER FAGS"

No computer in the world has intelligence anywhere close to human, all they do well is add a bunch of numbers together

How can a computer be intelligent without consciousness? If it's not conscious then there is nothing there. You might as well call a rock intelligent for being hard.

I am a tug boat vice-captain. You might think, "user, that could EASILY be replaced by a robot," but no: towing boats in requires more strategy and planning than a computer could ever muster. Also, the port unions are among the most powerful in the entire world. The sun is going to explode before you see robot dock workers.

no computer would be able to enjoy music, which is really important, or play chess

>Stop worrying about the creation of AI, we're nowhere near emulating human consciousness.

You idiots sound like people arguing that machines are nowhere near the point of replacing muscle-power because engineers during the industrial revolution were nowhere near being able to construct biological muscles. A machine does not have to imitate you 1:1 to replace you, it just has to do your job better than you do.

Unless you believe in ephemeral things like souls, there's no logically justifiable reason to sit back smugly and claim mankind is safe just because scientists have not replicated a conscious mind, as if human consciousness were inherently the best design inspiration for a thinking machine. And if you DO believe in ephemeral things like souls, then you should be even more terrified of the quest for AI, as ultimately it's a quest to build a soulless superior being.

Sometimes, but not reliably so. Technology is very very unpredictable.

>transman

I used that term deliberately.
:3

>more strategy and planning than a computer could ever muster

You know abstract management and anticipatory software already exist, yeah? Also, automation doesn't imply "robot workers" or even specific machinery; it's any kind of narrow AI designed to be exceptionally good at one or a number of related tasks. The boats will probably be sailing themselves.

luddite uprising when?

Can I ask why most experts expect some functional form of general AI in the next few decades or so then? Is it just wishful thinking, deliberate exaggeration for funding, or is there actually some legitimacy to these near-future AGI claims?

What are the chances that humanity only hasn't figured out the human brain because of ethics restraining its study on live humans on a large scale?
Not saying we should, just wondering haha.

>or play chess
Computers have beaten the world Go (weiqi, in Chinese) master.

>How can a computer be intelligent without consciousness? If it's not conscious then there is nothing there. You might as well call a rock intelligent for being hard.
Consciousness is not needed for an "intelligent" system. If you define an intelligent system as one that can solve a problem, such as navigating through a complex environment with reasonable competence, then a computer can be intelligent.
>No computer in the world has intelligence anywhere close to human, all they do well is add a bunch of numbers together
The system I work with simulates the way the neural networks in our brains function. The computer provides the architecture for the networks to be built in and controls the way the individual parts function. The actual processing comes from the way the network itself handles data in its connections.
>>if a computer can do all the algorithms it can think
>Pure autisology
It can process data in a way that is similar to the way we do, though not exactly the same. Like a plane using the same principles of flight a bird uses, but in a much more artificial manner.
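
Obviously nothing like the system I actually work with, just a minimal from-scratch toy of the general idea: the computer only provides the substrate, and whatever processing happens lives in the connection weights. The network size, learning rate and iteration count are arbitrary; on a tiny problem like XOR it usually, though not always, converges.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR truth table

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # 2 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # 4 hidden units -> 1 output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                               # plain full-batch gradient descent
    h = sigmoid(X @ W1 + b1)                         # hidden activations
    out = sigmoid(h @ W2 + b2)                       # network output
    d_out = (out - y) * out * (1 - out)              # backprop through the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)               # ...and through the hidden layer
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(3))                                  # should end up near [0, 1, 1, 0]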

>algorithmic stock trading
This. Imagine if you could look at a stock and compare it to every stock you have ever seen, and you remember every pattern of every stock you have looked at, perfect recall. Then imagine you have looked at 3000 stocks and what they did every trading day from 2015 to 2017. That is a very limited use of these algorithms. And man, do they use some memory. Never in my life have I had a need for 32GB, and it is nowhere near what I could utilize.
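
Very rough sketch of what that kind of perfect-recall matching can look like, assuming nothing fancier than brute-force nearest-neighbour search over normalised price windows. The 3000 x 500 price matrix here is random-walk noise standing in for real data, and the brute-force distance pass is exactly where the memory goes.

import numpy as np

rng = np.random.default_rng(0)
WINDOW = 20
# placeholder "library": 3000 fake stocks, ~500 trading days each
history = rng.normal(0, 1, size=(3000, 500)).cumsum(axis=1)

# slice every stock into overlapping 20-day windows and normalise each one
windows = np.lib.stride_tricks.sliding_window_view(history, WINDOW, axis=1).reshape(-1, WINDOW)
windows = (windows - windows.mean(axis=1, keepdims=True)) / windows.std(axis=1, keepdims=True)

def most_similar(recent_prices, k=5):
    """Indices of the k stored windows whose shape best matches the recent pattern."""
    q = np.asarray(recent_prices, dtype=float)
    q = (q - q.mean()) / q.std()
    dists = np.linalg.norm(windows - q, axis=1)      # brute force, hence the RAM appetite
    return np.argsort(dists)[:k]

# the last 20 days of stock 0 should match themselves first
print(most_similar(history[0, -WINDOW:]))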

>What are the chances that humanity only hasn't figured out the human brain because of ethics restraining its study on live humans on a large scale?
>Not saying we should, just wondering haha.
humanconnectomeproject.org/ Something like this?