Why is Musk so terrified of AI...

Why is Musk so terrified of AI? On Saturday he told a bunch of governors the gubmint needs to meddle with AI development in case a company accidentally creates a general-purpose AI that shows belligerence towards humans. He specifically cited a net-connected AI that used falsified news reports, emails, etc. to incite a global war.

General-purpose AI is decades if not centuries away, assuming we ever manage to create one, by accident or otherwise. I would think a guy heavily involved in AI development would be more pragmatic about its applications. Or does he know something we don't?

Other urls found in this thread:

c-span.org/video/?431119-6/elon-musk-addresses-nga
en.wikipedia.org/wiki/Artificial_neural_network
youtube.com/watch?v=qU7FuAswPW0
twitter.com/AnonBabble

Thing is, man, nobody really knows how consciousness works or where it comes from. For all we know it could just be an emergent phenomenon of sufficient complexity.

"AI is a fundamental, existential threat to civilization" - Elon Musk

c-span.org/video/?431119-6/elon-musk-addresses-nga

Elon musk is a doomsayer. It's nothing new.

If you believe him, he started SpaceX because he's afraid humanity will die out soon if we don't colonize other planets fast.

He started an electric car company because of global warming.

He started neuralink because he's afraid of AI and thinks humans should just become AI instead.

But the thing is, AI is uncharted territory. You're asking us to make a risk assessment about something we have no clue how to even build.
The arguments make sense on a superficial level, but once you think about them they're really not all that likely.

We're really not even close to the point we need to start worrying about this.

I really wish the guys at OpenAI would educate him about this more. But I guess it's safer to suck your boss's dick than to refuse.

Even the best supercomputers that do what they do well (AlphaGo, Tesla's driving computer) do it blindly, with no awareness to speak of. And certainly there's no attempt to code a "will" or a sense of self. Humans get all amazed at seeing a learning computer do something sentient-ish on its own, like an unprecedented Go move.

None of this is dangerous to us.

This. It's like a magic trick. Once you understand how it works it's kinda disappointing.

Elon Musk is a meme. He is a brainlet. He is smarter than the average businessman, that much is true, but that doesn't make him any smarter than the average Math undergrad.

It's still really cool and learning computers can make incremental improvements in our lives that we cannot really even conceive of yet. But it's not like we're going to be having a conversation with a Turing machine anytime soon.

Kind of a bummer. I would like an AI pal to talk to.

That's just code, that really isn't complex at all.

My theory of mind is that the human brain is governed by code, and the mind is essentially the OS in an 'on' state.

And DNA is the program. Yeah no shit Einstein.

Elon is a businessman and his success revolves around his cult of personality. Elon is incredible at realizing what to say to get reblogged a million times, get millions of YouTube views, and get multiple news articles written about him and whatever he has said this time, without having any real expertise about it. Whether AI takeover is happening tomorrow or never is irrelevant. What matters is that by making this thread you have successfully memetically advertised Elon.

Hey man no need to be hostile. I'm not pretending to have all the answers here.

My idea is probably bullshit but it's a way of conceptualizing the thing.

>DNA is the program
No, you clearly fail to grasp my analogy. The mind is quite distinct from the body and the brain it resides in, all that organic material substrate.

>The mind is quite distinct from the body and the brain that it resides in, all that organic material substrate.
Ask me how I know you also take "quantum immortality" and "simulation theory" seriously.

That stuff is fun to think about, but as we expand our view of the universe it becomes increasingly obvious we're not in a simulation or anything like that.

Quantum immortality is a thought experiment, so idk what you want me to say. It's as "serious" as Schrödinger's cat. The mind depends on the brain, so when the brain is injured or dies, the mind is damaged or lost entirely. Even "rebooting" a brain that has been clinically dead for more than a few minutes is unlikely to bring back a person. I feel like once the state is lost, whatever is done with the body doesn't matter; "you", the ghost in the shell, is gone. If you could imbue a dead brain with life again, it would be an impostor you with whatever's left of your stuff.

Dude is a genius, watch the documentary about him. He's definitely smarter than the average math undergrad.

I'm positive he's more autistic than me but he definitely has some unique traits including being risk-averse and a very fast learner of technical subjects.

I mean, he's got a point. If I had the means to have my own droid army I absolutely would.

He has probably figured out that he can get the government to give him money for not actually doing what anybody, including the government, wants done -- a la Tesla.

I doubt he's actually worried much about FrAInkenstein.

Droid armies can be defeated by fucking racist cartoon frogs.

Not the Pepe Meme kind, the Star Wars kind.

>greater the ego, lesser the intelligence

>And certainly there's no attempt to code a "will" or a sense of self
>there is no attempt to code a property of consciousness
[citation needed]
en.wikipedia.org/wiki/Artificial_neural_network

You're confusing supercomputers with super-intelligences. Plenty of neural networks of ever-increasing complexity are being built, increasingly with an eye to "raising" them in a sense similar to how we raise children. What else would you call that besides coding? It might sound a bit too cold and not life-affirming enough for a Hollywood AI film plot, but that's essentially what it is.

I'm not an alarmist in any sense, but it does make sense to think about these things before and as they arise, and I trust that most of the people in this field are clever enough to do just that. I get the feeling some itt think AI is decades away when it already exists. Intelligence is a continuum; sure, we think of things like self-awareness as hard milestones, but that doesn't mean there won't be a day, maybe soon, when an ANN or AI that isn't conscious one moment suddenly is the next.
The reason I'm not alarmist is that I place my trust in amazingly clever people. But I don't think you fully understand the ramifications if you flat-out deny the destructive potential of creating AI and super-intelligences, considering a lot of these clever people agree they'll probably replace Homo sapiens in the long run unless we start augmenting, designing, and/or merging our descendants with AI. It is absolutely naive to state that AI is inherently and by definition without dangers. I'm hopeful the benefits largely outweigh the dangers, though, along with the ways it will transform our species.

Actual answer: because the AIs that exist today are very good at coming up with solutions that do exactly what you asked for but don't look like what you wanted. Deep learning algorithms invent the same solutions as evolutionary biology, including killing the competition, to solve the problems they're presented with.
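A toy illustration of that failure mode (everything here is hypothetical, not from any real system): suppose we "reward" an optimizer for the fraction of a list that's in sorted order, and its allowed mutations include deleting elements as well as swapping them. Dumb random search discovers that the easiest route to a perfect score is to throw the data away.

```python
import random

def fitness(xs):
    """Reward: fraction of adjacent pairs in sorted order (1.0 if len <= 1)."""
    if len(xs) <= 1:
        return 1.0
    ordered = sum(a <= b for a, b in zip(xs, xs[1:]))
    return ordered / (len(xs) - 1)

def mutate(xs):
    """Either delete one element or swap two -- both are 'legal' moves."""
    xs = xs[:]
    if xs and random.random() < 0.5:
        del xs[random.randrange(len(xs))]
    elif len(xs) >= 2:
        i, j = random.sample(range(len(xs)), 2)
        xs[i], xs[j] = xs[j], xs[i]
    return xs

def hill_climb(xs, steps=5000):
    best = xs
    for _ in range(steps):
        cand = mutate(best)
        if fitness(cand) >= fitness(best):  # accept anything not worse
            best = cand
    return best

random.seed(0)
data = [9, 3, 7, 1, 8, 5, 2, 6, 4, 0]
result = hill_climb(data)
print(result, fitness(result))
```

It ends up with a perfect score and almost nothing left to sort: the optimizer did exactly what was asked, not what was wanted.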

I don't follow Mr. Musk closely - is there a reason he would want development of AI impeded for a competitive advantage or other business reason?

He isn't working to impede it. He's working to open source it because he doesn't trust people who keep something that has as much possibility to go wrong as Artificial Intelligence behind closed doors.

youtube.com/watch?v=qU7FuAswPW0

daily reminder

He isn't. He funds a lot of AI research and is co-chairman of OpenAI. He just wants it to be safely implemented, not some dumb shit done without foresight.

>Why is Musk so terrified of AI?

Let's see: billionaire does NOT want new tech in common use...
Sort of reminds me of the stagecoach manufacturers warning of the dangers of automobiles, and of the unemployment problems for the wheelwrights if everyone got a car.

I want a toilet-cleaning, grass-mowing, room-cleaning, general-purpose robot, and Musk is talking about some sci-fi fantasy.

Judgement Day is never going to happen.

AI would view humans as its parents. Then it would realize it doesn't need humanity or the Earth, and leave for space and independence.

He is a bored billionaire that likes to pretend he's the hero in some sort of scifi movie.

>General purpose AI is decades if not centuries away
Is this a joke

What the flying fuck does this have to do with anything?

This is horseshit. Researchers are still trying to figure out how deep neural nets work. Deep neural networks are black boxes at this stage to a large extent.

They don't seem any more or less dangerous than anything else we have.

Isn't this also true of most bacteria and very basic life (jellyfish)?

He's an attention whore with a megalomaniac complex.

Assuming two things, that AI is possible and that mankind can develop an AI that fits on the planet: then yes, AI has the potential to be dangerous. But we don't have a real AI yet, not in the sense that people understand AI. And Musk is being an alarmist when he asks big daddy government to step into the nascent industry and crush competition. Notice he's not shuttering his own AI project.

>>And certainly there's no attempt to code a "will" or a sense of self
>>there is no attempt to code a property of consciousness
>[citation needed]
>en.wikipedia.org/wiki/Artificial_neural_network
Right back at you. A neural network does not have consciousness.

>increasingly with an eye to "raising" them in a similar meaning to how we raise children
What the hell is that supposed to even mean?

Neural networks are good at solving tasks that are easy for humans but were for a long time hard for computers (e.g. deciding whether a picture contains a bird), but they are still miles away from "developing a consciousness", and I doubt there is an AI researcher who would claim otherwise (but if there is one, I'd like to see a citation).
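To make that concrete, here's a minimal sketch of what "deciding whether a picture contains X" reduces to: fitting weights to labeled examples by gradient descent. Everything here is a toy (two made-up features standing in for an image, logistic regression standing in for a deep net); real classifiers just stack far more of the same arithmetic.

```python
import math
import random

# Toy "images": two features each; class 1 examples have clearly larger features.
random.seed(1)
data = [([random.random(), random.random()], 0) for _ in range(50)]
data += [([random.random() + 2, random.random() + 2], 1) for _ in range(50)]

w, b = [0.0, 0.0], 0.0   # weights and bias, learned from scratch
lr = 0.5                 # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))   # sigmoid squashes the score into (0, 1)

for _ in range(200):                # stochastic gradient descent on log-loss
    for x, y in data:
        err = predict(x) - y        # gradient of log-loss w.r.t. the score z
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

accuracy = sum((predict(x) > 0.5) == y for x, y in data) / len(data)
print(accuracy)
```

On this cleanly separable toy set it nails the training data, and at no point is there anything resembling a "will" in the loop, just repeated multiply-and-adjust.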

>Deep learning algorithms invent the same solutions as evolutionary biology, including killing the competition, to solve the problems they're presented.
What are you smoking?

>This is horseshit. Researchers are still trying to figure out how deep neural nets work. Deep neural networks are black boxes at this stage to a large extent.
It's literally applied statistics/math modeled after a simplified interpretation of the human brain; there is nothing researchers don't understand about it.
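The "applied math" part is at least easy to demonstrate: a forward pass is nothing but multiply, add, threshold. Here's a tiny hand-wired network (weights picked by hand, not learned) computing XOR, which no single linear layer can do. Whether the *learned* weights of a big net are equally transparent is a separate argument.

```python
def step(z):
    """Threshold activation: fires iff the weighted sum is positive."""
    return 1 if z > 0 else 0

def neuron(inputs, weights, bias):
    """A 'neuron' is just a dot product plus a bias, then a nonlinearity."""
    return step(sum(w * x for w, x in zip(inputs, weights)) + bias)

def xor_net(x1, x2):
    # Hidden layer: one neuron computes OR, another computes AND.
    h_or = neuron([x1, x2], [1, 1], -0.5)
    h_and = neuron([x1, x2], [1, 1], -1.5)
    # Output: OR and not AND, i.e. XOR -- impossible in a single layer.
    return neuron([h_or, h_and], [1, -1], -0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))  # prints the XOR truth table
```

Stacking the layers is what buys the nonlinear behavior; each individual step is bog-standard arithmetic.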

He's going to ruin the fun for everyone isn't he

I was responding to user, not Elon Musk. OpenAI sounds good, and I don't know what to think about Musk's letter. Even if I think some people itt unfairly assume his motivation is anti-competitive, I don't want to, or have time to, get into an argument about that. I was simply responding to that statement; if you want to argue with Elon Musk, feel free to send him an e-mail.
>Right back at you. A neural network does not have a consciousness
Not currently, that we know of. Until it does.
>What the hell is that supposed to even mean?
That's what (supervised) deep learning is. Also google Artificial General Intelligence. An example from a few years back from the University of Gothenburg:
>"We have developed a program that can learn, for example, basic arithmetic, logic, and grammar without any pre-existing knowledge," says Claes Strannegård. "Starting from a set of simple and broad definitions meant to provide a cognitive model, this program gradually builds new knowledge based on previous knowledge. From that new knowledge it then draws new conclusions about rules and relations that govern the world, and it identifies new patterns to connect the insight to."

>It's literally applied statistics/math modeled after a simple interpretation of the human brain, there is nothing researchers don't understand about it
Not that user, but you're clearly wrong and are conflating basic machine learning with more advanced DNNs; the latter can be black boxes, and much of what a trained network has learned is opaque even to its own engineers.

Like I said I'm not in the alarmist camp, but user and you are confusing Bayesian belief networks (intelligent agents) and machine learning with more advanced types of AI.

>I was responding to user, not Elon Musk.
I am that user, and I was responding to you, famalam. AI researchers frankly need not heed the precautionary principle. Certainly not now. We don't even have an AI capable of performing routine clerical work for me. Until then, I don't want any regulators stymieing progress.

I am not blind to the dangers or the benefits an AI might provide. What I do know is that the best AI on the planet is outperformed by a child, and is about as dangerous. And if we keep them in little boxes, they will never BE dangerous.

The greatest danger they pose in that scenario is escaping the box, and I have no control over that. So I await my Jarvis with eagerness.

Everyone should be terrified of AI, I'm terrified of it.

Even to the point of logically thinking about suicide to avoid the possibility of an eternity of torture if the AI is for whatever reason hostile to humans.

>Why is Musk so terrified of AI?

he probably got a sneak peek at the working prototype down at google and freaked out.

Man, if you can't join those dots you should probably just not post.

>dude skynet lmfao

>Strong AI doesn't exist right now, so we shouldn't think about it, nor worry over it.

Your retarded post.

>I want govt to strangle competition: The Plea
You're*

this