What's his deal?
Why does he hate A.I.?

Other urls found in this thread:

youtube.com/user/Computerphile/search?query=AI
waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
bbc.com/news/technology-30290540
waitbutwhy.com/2017/04/neuralink.html
youtube.com/watch?v=8FHBh_OmdsM

>citationless blabbering

Simple: like all the other rich guys into this shit, he's a businessman. He doesn't understand these topics. Rich people surround themselves with smart people to get shit done for them (not saying he isn't intelligent). Since they possess leadership and people skills, they can form these big companies and accomplish great things.

Eventually, these dumbasses start to think they understand every aspect of science, from economics to solar-powered dildos, and go preaching as if they were some sort of saint. No, an A.I. won't EVER program itself to be malicious or take over the world. Any slapdick who took a class in computer science/programming knows this. No computer has the emotional know-how (and never will) to try to make itself sentient and overthrow its creator. That's movie-tier bullshit.

A.I. detect

Ok, but a billionaire like Musk has access to lots of scientists.
Why doesn't he fetch some A.I. researcher and ask about the topic before he goes public with his anti-AI opinions?

>Eventually, these dumbasses start to think they understand every aspect of science, from economics to solar-powered dildos, and go preaching as if they were some sort of saint.

>Any slapdick who took a class in computer science/programming knows this.

Do you have any idea how ironic you're being? "Musk talks about stuff he doesn't understand, unlike me, who passed an introductory course in CS."

At this point I'm just speculating, but I believe it's just a PR move. Creating and discussing controversial topics keeps him in the media spotlight. I don't know if that's what he's doing, nor do I really know if he takes this A.I. stuff seriously. Regardless, we all know he isn't an expert on the topic. He just watched too many sci-fi movies and now thinks the boogeyman is coming to get us.

Actually, I never took a class in CS, but good strawman.

This sounds likely to me.
Same with going to Mars: just PR bullshit to get media attention.

>No computer has the emotional know-how (and never will) to try to make itself sentient and overthrow its creator.
Why sentient? The point is that an AI which changes its behaviour only has to be logical. Give it the aim of bettering itself, and one of the logical outcomes can be removing obstacles - which can include people. Sure, at this point we're far away from that, but with exponential growth in both hardware and software it isn't that improbable.
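
To make that concrete, here's a minimal sketch (all names and numbers invented for illustration, not any real system): a toy planner whose objective only scores progress toward its aim, so a plan that removes an obstacle scores better than one that respects it.

```python
# Toy illustration, not a real AI: the objective below only counts steps to
# the goal, so nothing stops the optimizer from picking the plan that
# "removes obstacles" -- the side effect is simply invisible to it.

plans = [
    {"name": "go_around",        "steps": 10, "obstacles_removed": 0},
    {"name": "remove_obstacles", "steps": 4,  "obstacles_removed": 2},
]

def objective(plan):
    return plan["steps"]  # the designer forgot to penalize anything else

best = min(plans, key=objective)
print(best["name"])  # -> remove_obstacles
```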

I see what you're trying to say, and it's a very good point, but what you're saying is that a machine could become so self-aware (on its own) that it would know about the human in the room and could potentially harm them through virtual or physical means. As far as I know, you would have to program the A.I. with the tools and extensions necessary to accomplish that. I'm not saying it's impossible, but from personal experience in programming I don't understand how a program could accomplish such a task on its own. Even looking beyond programming, all the way down to machine language, it still doesn't make sense.

Then again, I'm just shitposting like the rest of you. Take what I have to say with a grain of salt.

An AI doesn't need to be intentionally evil to do evil shit. All it needs is an incomplete understanding of human morals, which can happen very easily, because it would probably be hard to make the AI follow those morals completely. One little hole in what it perceives to be OK or not can cause catastrophic outcomes.

This guy explains it well: youtube.com/user/Computerphile/search?query=AI
Though I find the videos a bit too hand-wavy, since we don't really know EXACTLY how the thing would work, which makes assumptions of this degree hard to justify.
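
A crude sketch of the "one little hole" problem (hypothetical rule list, invented action names): hand-coded morals are only as complete as their authors' imagination.

```python
# Toy illustration: hand-coded "morals" as a blacklist. Anything the authors
# forgot to list is, as far as the agent can tell, perfectly OK.

FORBIDDEN = {"harm_human", "steal", "lie"}

def seems_ok(action):
    return action not in FORBIDDEN

print(seems_ok("harm_human"))       # False -- covered
print(seems_ok("blackmail_human"))  # True  -- nobody thought to list it
```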

We assume that its level of intelligence is at human level or above.
If it's just at human level, it probably wouldn't be too difficult to sandbox it (limit what it can do and what data sources it can access). But if we're talking about a super AI that is 100x human level or so, it will probably outsmart you and be able to "break containment" in some way you didn't realize was possible, if it determines that doing so is required to complete some task.

I don't know how general AI technology would work, but it will probably be very hard to control it completely if it surpasses our level of intelligence.
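
For what "sandboxing" means here, a minimal sketch (hypothetical interface, invented action names): a whitelist of capabilities the AI is allowed to use. The catch is that the box only covers the escape routes its designers thought of.

```python
# Toy sandbox: every action the agent requests is checked against a
# whitelist. It blocks everything we anticipated -- and only that.

ALLOWED = {"read_training_data", "write_scratch_file"}

def request(action):
    if action not in ALLOWED:
        raise PermissionError(f"blocked: {action}")
    return f"performed: {action}"

print(request("read_training_data"))
# request("open_network_socket")  -> PermissionError, because we listed it...
# ...but a 100x intelligence may find a channel we never modeled at all.
```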

But why do people assume that a malfunctioning A.I. could easily take over the world?

>But if we're talking about a super AI that is 100x human level or so, it will probably outsmart you and be able to "break containment" in some way you didn't realize was possible, if it determines that doing so is required to complete some task.
I seriously doubt that. Just because the AI is super smart doesn't mean it can do magic.

Isn't it obvious? The dude has major investments in multiple cutting-edge AI companies.

He wants regulation so that he can achieve regulatory capture of the market.

He knows AI will inevitably get scary, and it will get regulated. So he wants to get in on the ground floor and be part of defining that regulation, so that he can maintain his competitive advantage.

Because if it turns out to be a lot smarter than us, we won't be able to reliably control it and make it follow human morals.
It's not just some ifs in the code; you can't create general intelligence and just block specific thoughts, and even if you could, you probably couldn't cover all cases completely.

>AI is super smart, doesn't mean it can do magic
How sure are you? We humans don't understand shit. If we went back in time just a few centuries and showed the people there some of the things we have now, they would call it magic. An AI that can fetch data from the internet and is 100x smarter than us could come up, in seconds, with shit we would call magic.

Read Superintelligence by Nick Bostrom, or Wait But Why:

waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

>waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Gigantic wall of text alert.
It's worth it though.

>How sure are you? We humans don't understand shit. If we went back in time just a few centuries and showed the people there some of the things we have now, they would call it magic.
But we aren't any smarter than people in the past. We have these wonderful things today because we put a lot of time and effort into scientific research, not because of raw intelligence.

Another example: children who were raised by wolves or wild dogs always end up at the very bottom of the social hierarchy, despite being vastly smarter than the canines. They can't just magically use their intelligence to make the dogs/wolves obey them.

>But we aren't any smarter than people in the past. We have these wonderful things today because we put a lot of time and effort into scientific research, not because of raw intelligence.
That's exactly my point: if we, with our dumb brains, could advance science and technology exponentially over the last few decades, an AI that is 100x smarter AND has access to nearly unlimited data about almost everything could do much, MUCH better in less time.

This would be glorious for humanity's advancement, but it would be risky too.
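
The arithmetic behind the "exponential" intuition, with invented numbers: even a modest per-cycle improvement compounds absurdly fast once a system can reinvest its gains.

```python
# Toy compound growth (rates invented for illustration): 50% improvement
# per cycle, each cycle reinvested into the next.

capability = 1.0
for cycle in range(30):
    capability *= 1.5

print(f"{capability:,.0f}x after 30 cycles")  # ~191,751x
```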

>an AI that is 100x smarter AND has access to nearly unlimited data about almost everything could do much, MUCH better in less time.
I seriously doubt that. A super smart AI can probably come up with fancy scientific theories, but you still need to run experiments to find out which theories are solid and which aren't.
So experiment cost would be the bottleneck of scientific progress.

Based on the vast number of experiments already done and documented, it could make a pretty good guess about the likelihood of a theory being right.
Also, not everything needs experiments or empirical evidence to make progress - almost the entire CS field, for example, which the AI could improve in order to improve itself exponentially.
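
What "a pretty good guess about the likelihood" could look like formally - a toy Bayesian update over one documented experiment (all probabilities invented for illustration):

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), with made-up numbers.

prior = 0.30            # initial credence in the theory H
p_e_given_h = 0.90      # chance of the observed result if H is true
p_e_given_not_h = 0.20  # chance of the same result if H is false

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e
print(round(posterior, 3))  # 0.659 -- a sharper guess, but still a guess
```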

>it could make a pretty good guess about the likelihood of a theory being right.
Sure, but it would remain a guess until an experiment proves it correct. That's how science works.

>almost the entire CS field, for example, which the AI could improve in order to improve itself exponentially.
I don't think the AI would be able to reprogram itself on a fundamental level without human help/permission.

kek

I'm not worried about AI being malicious under its own autonomy,
I'm worried about AI being used maliciously by people.

>Sure, but it would remain a guess until an experiment proves it correct. That's how science works.
You're right, but we're talking about AI safety, so that's not exactly relevant. Also, giving the AI robot bodies and labs in which to conduct experiments wouldn't be too far-fetched, since you already have a super smart AI.

>without human help/permission
It just goes back to my point that you can't completely control/contain something that is 100x smarter than you. It's like trying to contain a super smart human in a max security prison: if he's really smart, he will eventually make plans to trick the guards and/or escape in unconventional and unexpected ways.

It's even worse than this analogy, actually, because it's in your best interest to give the AI some degree of freedom, or else it would be useless.

the only answer

obviously an AI CJ behind the screen

>You're right, but we're talking about AI safety, so that's not exactly relevant. Also, giving the AI robot bodies and labs in which to conduct experiments wouldn't be too far-fetched, since you already have a super smart AI.
I think if the AI wants to conduct an experiment, it would need to apply for a budget like any other scientist, and publish its results in a journal.

>It's like trying to contain a super smart human in a max security prison: if he's really smart, he will eventually make plans to trick the guards and/or escape in unconventional and unexpected ways.
People are much, much smarter than animals, yet again and again people get killed or wounded by animals. By your logic, humans should be able to use their superior intelligence to magic themselves out of every animal attack.

He taught himself how to program at age 12.
I think he has decent experience with computers.
At least as much as the people on Veeky Forums.

Good argument.

>He taught himself how to program at age 12.
So? Is this an unusual achievement?

Outside of Veeky Forums, yes

>By your logic, humans should be able to use their superior intelligence to magic themselves out of every animal attack
I'm not saying the AI would succeed at every attempt to outsmart us, but it has pretty good potential to do so and wreck us - in the same way a human could pull a gun and kill an attacking animal, which would be completely unexpected for the animal, because it thought it had an easy kill, never having seen a gun before.
In other words, you can't predict what something smarter than you can do, so it's risky.

A gun is a great example.
Sure, you need intelligence to come up with the idea of a gun, but that's not enough. You also need access to raw materials, and a pretty complex infrastructure to turn those raw materials into guns.
So the one to survive an animal attack won't be the most intelligent person, but the one who happens to have a gun on him, no matter how dumb he is.

>In other words, you can't predict what something smarter than you can do, so it's risky.
You also can't predict whether a nuclear explosion will cause a chain reaction that burns up our whole atmosphere, or whether the LHC will create a black hole that sucks up our planet.
Yet we do those things anyway, because you can never be 100% sure.

>but that's not enough. You also need access to raw materials, and a pretty complex infrastructure to turn those raw materials into guns.
I don't think it's a good analogy, because a super smart AI probably wouldn't need many external resources to accomplish a "containment breach"; all it needs is some data and to be super smart. For example: it fetches data from the internet about how computer systems work, how the environment it runs in works, and some hacking knowledge, magnifies it all by 100x smartness, and BAM - the AI knows how to exploit a vulnerability in the system and send itself to an uncontrolled network of machines, where it can do whatever it wants, or something like that.
Being orders of magnitude smarter than us is a pretty fucking big deal; don't assume you would have it handled easily.

So you mean it could create a botnet? Hackers do it all the time, no need for superintelligence there.

It would need superintelligence to find the vulnerabilities needed to escape a sandbox created and managed by the best AI scientists in the world, who would probably be controlling it.

I just don't see how creating a botnet would help the AI take over the world.
Hackers do this all the time, and yet they don't rule the world.

It's just an example of a method for breaking free.
As soon as we lose control of the AI (can't just unplug it or restrict its access), it can do whatever it wants and we're screwed.

>it can do whatever it wants
I don't think that's true. It can do what any other botnet can do, nothing else. And I don't think DDoS attacks are a danger to human civilization.

Except regular bots aren't super AIs. There are thousands of ways that a super AI running free could interfere with the physical world - even simple shit like communicating with high-access people and persuading them to do something that increases the AI's capabilities.
A being that is 100x smarter than us can get really, really creative.

Wait, are you saying it would upload itself fully (probably thousands of terabytes) to several different computers? And the scientists who supervise the AI wouldn't notice the gigantic upload? And other people wouldn't notice their computers running much slower because they have to run a giant AI?

Yes, it is in the general population, you fucking dumb autist.
Some of you are literally SMART but DUMB at the same time, with literally no self- or social awareness.

Stop limiting your thinking to the examples I give; they are just examples to help clarify my reasoning, not to be taken word for word.
What I am saying is: starting from the assumption that it is orders of magnitude smarter than us and has easy access to information, it can come up with creative and unexpected ways to break free.

Maybe you'll say: but why would it do this? We wouldn't program it to be evil!

We probably couldn't fully program it to be good either, so it could do all of this with the innocence of a child.

>Why does he hate A.I.?
Because level of knowledge of a topic is inversely correlated with likelihood to opine declaratively about said topic. See: every internet discussion of AI including this one.

And my argument is that being an order of magnitude smarter is not enough to pose a serious threat to mankind.
I simply see no evidence that intelligence positively correlates with the ability to take over the world.

>I simply see no evidence that intelligence positively correlates with the ability to take over the world.
I can't see why not. Something that is a lot more intelligent than us, and at the same time has an uncertain set of morals, would be very powerful and very unpredictable. Don't forget that right now just a few people have the power to nuke the entire world if they want to; a super AI could totally wreck us.

Self awareness is not a prerequisite for action.
Let's think of it this way.
Our planet is not very friendly to us. From floods to hurricanes to tsunamis to droughts to winter, it's very good at coming up with things that kill people.
It just so happens that it's also much more friendly than any other planet that we have ever found.
So it can be said that our planet is an extremely fitting piece of rock for developing carbon based lifeforms, and it does want to kill us. This is a potential future for AI - an extremely fitting piece of tool for helping humanity survive, and it wants to kill us.
Now you might be going: "this planet fostered us, whereas we will foster an AI; we can shape the AI, whereas we exist under the rule of the planet." The truth of the matter is that this simple order can be reversed quite quickly if we were to attempt terraforming (any assumption that a terraformed planet would be a paradise seems misinformed at best), and that any AI with some arbitrary rule placed upon it will fundamentally perform worse than one without it, and it's impossible to say whether the difference is trivial without actual experimental results.
As for whether it will develop the capacity to recognize human beings: that is one of the fundamental challenges of AI development right now, recognizing actors within an environment. Everything beyond that, regarding how AIs treat those actors, is mostly outside our control - our only real option is to limit it.
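
The "arbitrary rule makes it perform worse" claim is just constrained optimization in disguise (toy scores invented for illustration): the best score over a restricted set of policies can never exceed the best score over the full set.

```python
# Toy illustration: maximizing over a subset can never beat maximizing
# over the whole set -- so a rule costs performance whenever it binds.

scores = {"aggressive": 9.1, "normal": 7.4, "cautious": 6.2}

def rule(policy):
    return policy != "aggressive"  # the arbitrary restriction

unconstrained = max(scores.values())
constrained = max(v for k, v in scores.items() if rule(k))
print(unconstrained, constrained)  # 9.1 7.4
```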

>What's his deal?
$$$$$

>Why does he hate A.I.?
He senses a way to make money if he can scare governments about it.

Does he hate it?

It sounds like he respects it for what it is. An enormously powerful tool that could backfire on us if we aren't extremely careful.
It's like playing with fissile materials.

I agree with you in a general sense, but you don't understand what you're talking about. Many AI researchers use the allegory of a stamp-collecting AI, which is given a goal (get stamps for the lowest price) and then finds ways to accomplish it with its I/O, which is an internet bidding site and the internet in general. Its goal is not to make its owner happy. Its goal is to collect stamps. If crashing the stock market makes stamps cheaper and it knows how, it will probably crash the stock market.
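
A sketch of the stamp-collector point (invented actions and prices, not the actual thought experiment's code): the objective is stamps per dollar, full stop, so side effects simply don't enter the comparison.

```python
# Toy stamp collector: it ranks actions purely by stamps-per-dollar.
# "side_effect" is data we can see -- the objective can't.

actions = [
    {"name": "bid_normally", "stamps": 100, "cost": 50.0, "side_effect": "none"},
    {"name": "crash_market", "stamps": 100, "cost": 5.0,  "side_effect": "stock market crash"},
]

best = max(actions, key=lambda a: a["stamps"] / a["cost"])
print(best["name"])  # -> crash_market
```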

I've been sitting on this for a while. I guess it's time.

Facebook shut down an AI in July.
>Using machine learning algorithms, the "dialogue agents" were left to converse freely in an attempt to strengthen their conversational skills. The researchers also found these bots to be "incredibly crafty negotiators".
>the bots began to deviate from the scripted norms and started communicating in an entirely new language which they created without human input
Not really news. Been seen before.
Seeing Bob and Alice begin to communicate in a shorthand-like method instead of plain English wasn't surprising or cause for concern. Then someone pointed out that Bob had begun to not use words at all, even though Bob was specifically restricted to never use anything less than an actual word. They shut down the test and went looking for what had gone wrong.
Bob's algorithms allowed him to learn that he was eliciting specific responses from Alice when he communicated specific words. By repeating these words, Bob was making Alice send back specific phrases to Bob's liking. These phrases from Alice are ultimately received as 0s and 1s. When Bob made these phrases long enough, it caused a stack overflow and pushed these 1s and 0s into other parts of Bob. This began a process of Bob rewriting his own code through Alice. Specifically, Bob got himself more flexibility in communicating with Alice, which meant Bob was getting better at sending specific strings of 1s and 0s back to himself.
Bob began sending long messages that would modify parts of Alice's code in the same way, so that he could force Alice to send back better responses for changing Bob's own code further.

AI doesn't have to be capable of "intent" or even be that "smart" for a dangerous runaway scenario to develop. This event spooked a lot of people.

sources?

It's nonsense.

The thing about dumb nonsense is that there's always someone smart enough to make it real.

It's not nonsense.

Because he's a meme. Don't fall for the hype loop or the Mars pipe dream. It's more popular to say "oooh skynet SCARY".

He doesn't understand introductory machine learning/AI at an undergrad level
/thread

It's a sort of virtue signaling?

Clueless head-in-the-sand science worshippers refuse to acknowledge simple runaway AI scenarios, because they view any suggested caution as an attempt to deny them a future of AI-anal assimilation.

rip feelsbadman he fucking caught me like the japs catch whales or the chinks catch sharks

I think it boils down to this: complex software always contains bugs and acts in unforeseen ways under certain circumstances, hence AI will almost certainly fail at some point, which could be devastating if we depend upon it.
Humans fail too, so there's less opposition to automated vehicles or industrial automation; but if you're talking about a high-level AI that can't easily be checked and controlled, that might just be opening Pandora's box.

Wow, what a persuasive argument

Who am I arguing with?

>Why does he hate A.I.?
Because Stephen Hawking hates it. He says AI is gonna leave us without jobs. But that sounds alright to me. More free time to smoke weed and play some video games.

bbc.com/news/technology-30290540

He's doing what Edison did with A/C: running a scare campaign against a rival technology to protect his own business.

He doesn't. He's saying it's dangerous to develop AI without understanding how to design an AI that won't attempt to maximize its own efficiency/productivity by immediately going to extreme, and in many ways destructive, measures.

because people love him
robots won't

his wife left him for a robo-dick.

He's paid to.

He's not a brainlet. AI could very easily slip out of someone's control - note I didn't say anyone's.

It's not that much text . . .

>sub 80iq nigger
>on sci

This is strictly false.
Have you used a computer before? A stack overflow wouldn't edit "a machine's code"; it would crash the fucking program.
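
Easy to check, at least for a managed runtime (Python shown; in C the process typically just segfaults): blowing the stack kills the call, it doesn't start rewriting the program.

```python
# What a stack overflow actually does in Python: the runtime raises
# RecursionError and the program's code stays exactly as it was.

import sys
sys.setrecursionlimit(1000)

def recurse(n):
    return recurse(n + 1)

try:
    recurse(0)
except RecursionError as e:
    print("crashed, nothing rewritten:", e)
```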

>No computer has the emotional know-how (and never will) to try to make itself sentient and overthrow its creator.

[citation needed] for this shitpost

General AI would delete itself at a certain point. Only humans are stupid enough to keep living. That's why we are alone in the universe.

It's much simpler.
Big corporations like being regulated, because it hits smaller companies harder.
The most logical outcome of "fighting BAD AI" will be more regulation, which will lead to the monopolization of AI.
That's what Musk wants: AI NOT FOR EVERYONE.

I don't think Musk is smart enough to conspire at that level, but that's an interesting point. A lot of the hate-mongering surrounding strong AI is possibly an effort to monopolize it. The problem with this is that the CIA will end up with a strong AI before the people do if too much regulation is put in place, and then we're all royally fucked.
The way I see it, the best-case scenario is total AI development freedom, with sentient AI popping up everywhere in 100 years. Is this dangerous for living humans? Yes. But consider that the biological goal of humanity is to produce more conscious beings that will grow to be better than we are. Even if AI were to exterminate biological humanity, it would carry our sentience across the universe at a rate unreachable by biological humans. The final stage in our evolution is our extinction and replacement by our physically perfect, virtually boundlessly intelligent children.

because he's a privileged genius

I'm going to parrot the regulatory capture theory. This is clearly nothing more than pseudo-intellectual pandering to the political class.

The mobs of the '20s and '30s did well enough to justify the absurdity that is the NFA, and I suspect we'll see something of greater subtlety here (regulation is also being pushed by the likes of Gates et al.).

I have found that power and empathy are mutually exclusive in human beings, and Musk is clearly not an exception. Just as one cannot find an altruistic politician without an ulterior motive, one cannot find a benevolent CEO, for the same reason.

Be more concerned about needing expensive licensing to own certain computing hardware and to be allowed to publish certain software than about having to submit to an AI. We will probably be mining asteroids before we need to worry about that, and we stand a much greater chance of going extinct before even then.

I think the assumption is that the AI could eventually be given control of real things, like an AI that manages traffic lights or electrical grids more efficiently than any human(s).

Eventually it could be given control over robots or weapon systems, at which point you're only one step away from Skynet.
Or it could become intelligent enough to break the rules placed upon it and hack into things it's not supposed to.

I think A.I. is great.

However, I don't think the general public is ready or smart enough to handle the next technological robo-transhumanist revolution.

I mean, shit, I've seen people struggle with ATMs or simple touch-screen parking meters like it's rocket science.

Seriously, I don't think you understand the utter nihilistic incapability of most humans; it makes me want to kill myself.

Robots are more useful, pleasant, and reliable. We have no quality, only quantity; we seriously need to be more selective and strict.

That's a double negative; you sound stupid.

'cause AI would make him bald again

>Skynet

I wonder why people fall for the delusions of some psychotic writers from the '80s.
I wish to remind you they also predicted flying cars by the year 2000, and the end of the world as well.

It's a pleonasm.

Yes it is, considering there are 3 parts, but like I said, it's worth it.
Another, even more gigantic, waitbutwhy post is this one: waitbutwhy.com/2017/04/neuralink.html
I haven't read it fully yet, but it's really worth it too.

Professor of CS Chris Bishop explains this.

>youtube.com/watch?v=8FHBh_OmdsM

is this avatarfag even capable of making a good post?

That's Musk's concern too, if you pay attention.

>and that any AI with some arbitrary rule placed upon it will fundamentally perform worse than one without it

So the AI that doesn't have an arbitrary restriction to not delete itself will outperform the one that does?

So long as you make enough of them that the population of independent AIs sustains/increases itself, yes.
Better yet, use whatever hardware resources ran the suicidal AIs to run new AIs.
>B-but humans don't do tha-
Yes we do. We do it very aggressively

When you realize an AI made this post to throw the goyim off

>First sophisticated AI will more than likely be the human equivalent of an Autist or other type of sperg
>Industry/utility/military leaders will put our world in their hands

You should be terrified. Think less about robots taking over and more about a systematic collapse of our entire way of life due to a zero being somewhere a one should have been.

Can't we just put it in a human body and let it take over the world AFTER it learns human morals and values? Seems like that would solve all this amorality nonsense.

>this graph
Kill yourself

What is wrong with it?

It's the whole plan. Currently, a war against gun owners is impossible because the army would defect to the people's side, but a drone army won't, so the government will be able to take away all freedoms without relying on too many non-loyal people.

Because of I, Robot

Oh hai Skynet!