Why is Musky so scared of AI? Was he beat up by a Roomba as a kid or something?

Because he got all of his information on science from TV and Movies and he's afraid of Skynet.

he is one of those "science, fuck yeah" people who happens to be a bourgeois tyrant

Because he's smart enough to read the writing on the wall.

Because the second we make an AI smart enough to self-improve, its intelligence will increase exponentially until it's much, much smarter than us. The AI might get some bad ideas at that point, since it doesn't have feelings

He either has a poor understanding of the current state of AI or he's seen some secret projects that make the current state of AI legitimately scary.

A self improving AI would still run into hardware limitations pretty quickly.

In order to be a problem, it would also need some method of accessing the outside world and upgrading its physical components

Considering his wealth, I'm guessing the latter.
It could botnet itself

People forget that Musk is first and foremost a businessman

And of course all the normies are gonna say "you cant argue with real life Iron Man!"

depends on what kind of AI we want to develop, i'm scared shitless of it too. but i suppose self driving shit will be nice for lazy fucks with no aptitude, coordination and spatial reasoning.

me, i like driving my own car and i hate the thought of an artificial being capable of human reasoning. sure it'd be wrong to kill them once we've made them, but let's just not make them in the first place

raped by robots when young

Yet he also believes we live in a simulation... is he afraid of himself and all people?

He's on too much coke is whats going on

Musk is a Dutchman, you really shouldn't believe him

He's just trying to step on the territory of Jews by taking on Zuck

Time traveler, obviously.

Or he has already seen how the automation at Tesla and SpaceX is threatening human capital.

AI much more powerful than Watson will cause a major economic upset, which can provoke extreme reactions from irrational humans.

Even if AI don't turn against us, a single person with the help of advanced AI could become too powerful to control. Same reason we're afraid to let people build atomic bombs in their homes.

True AI means one of two things: merging with the technology, or war.

This isn't necessarily true and you're a dumb cuck for thinking it is.

He's afraid of the power one country would get over the others when it discovers AI.

I don't think He's scared of AI

He's just building up hype or just stirring things up.

There are some good arguments. That's not one of them.

Anyone not scared of AI is naive and doesn't comprehend the power of it

Don't you get it?

>Terrorists can strap bombs to drones and have them GPS locate points of interest
>Terrorists can strap machine guns and cameras to drones and use facial feature extraction to have them target specific groups of people like a specific race or gender in a crowded area
>Army could have advanced AI drone force and there could be some low level kernel exploit that terrorists or nation states can exploit and turn it all against us

The immediate issue isn't AI turning sentient and turning against us, the issue is the enormous quantity of amazingly difficult to prevent, destructive use cases for it. The issue is AI systems being hacked and misused.

The other issue is that it's impossible, I repeat impossible to formally verify the outputs for each input on AI systems since they create massively abstract mappings from input to output. You cannot inspect a neural net's weights and figure out what type of shit it's trained to do. There can be weird overfitted edge cases you aren't aware of that make behavior all of a sudden go completely strange. If you rely too much on AI and think it's probably going to be fine, these edge cases can coalesce and result in additive interference and cause mayhem

Some programmers are shit, some are overworked, some are lazy, some of the best simply make difficult mistakes - adding AI to the mix makes the software so much more incomprehensibly volatile that it's a massive risk in systems that require reliability and safety. And people might not take this as seriously as they should
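The point about weights being uninspectable can be shown concretely. A minimal numpy sketch (a hypothetical toy network, not any real system): permuting a net's hidden units changes every entry of its weight matrices while leaving the computed function exactly the same, so the raw numbers alone can't tell you what the net does.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def net(x, W1, W2):
    # A tiny 2-layer feed-forward net: input -> hidden (relu) -> output.
    return relu(x @ W1) @ W2

# Random weights standing in for a "trained" black box.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

# Permute the hidden units: both weight matrices now look completely
# different, yet the function computed is exactly the same.
perm = np.roll(np.arange(8), 1)
W1p, W2p = W1[:, perm], W2[perm, :]

x = rng.normal(size=(5, 4))
print(np.allclose(net(x, W1, W2), net(x, W1p, W2p)))  # True
print(np.array_equal(W1, W1p))                        # False
```

Many different weight settings compute identical behavior, which is one reason formal verification of learned mappings is so hard.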

Basically none of you are programmers you fucking faggots so you don't understand what the fuck you are talking about

If you're not terrified of AI for future generations you're a moron

i think you spelled that wrong

Those first two things can literally already be done with current technology and the 3rd one isn't even an AI issue, it's a problem with network security.

The development of a sufficiently powerful AGI is both inexorable and apocalyptic.

We SHOULD be scared. As far as I am concerned, Musk is a saint for bringing attention (and throwing money at) the issue of an unfriendly AI.

Anyone who has spent 5 minutes understanding what exponential growth can do is scared of AI, among other exponentially growing/shrinking phenomena.

youtu.be/TBtW51D_q2Q?t=22m22s
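The exponential-growth point is easy to demonstrate. A toy sketch (the 10% "capability" gain per cycle is a made-up, purely illustrative assumption): the same per-step improvement compounds to roughly a 13,781x gain over 100 cycles, versus 11x without compounding.

```python
# Toy comparison: compounding (exponential) vs. additive (linear) improvement.
# The 10% per-cycle gain is an arbitrary illustrative assumption.
capability = 1.0   # self-improving: each cycle multiplies by 1.10
linear = 1.0       # non-compounding: each cycle adds a flat 0.10
for cycle in range(100):
    capability *= 1.10
    linear += 0.10

print(round(capability))   # -> 13781
print(round(linear, 1))    # -> 11.0
```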

You're an idiot.

Because it will put him out of a job and siphon away all his petroshekels.

> Was he beat up by a Roomba as a kid or something?

More or less yeah.

people.eecs.berkeley.edu/~russell/papers/russell-edge14-myths-moonshine.docx

AI can make anime real, and he doesn't want other betas to be happy.

>Musky is scared of AI
...as if.

fucking code monkeys I swear

I'm old enough to remember this line of logic being used for every single new technology. "but someone evil will use it for evil!"

Doesn't matter. People can kill each other using their bare hands and have done so for all of human history. It literally does not matter if you use a stick or your fist. If someone wants someone dead, they can eventually do it.

Also, none of that needs to be done using AI. The biggest threat to security has always been humans, and that will always remain true.

pretty sure he's seen some shit in R&D at Tesla.

I think the real problem is that once the box has been opened there will be no putting it back. The advantages will be so massive that no one will be ready or willing to do without. It would be like trying to achieve world peace by making governments relinquish firearms. Good luck with that; it doesn't even work with nukes.

Because he's a westerner. The Japanese actually have totally kawaii welcoming ceremonies for new industrial robots. It's probably Shintoism, an animistic belief. And as the Japanese have shown, you don't have to believe in pig disgusting western dualism (nature a shit, fuck it however you like) in order to get industrial capitalism with all the creature comforts.

Yeah. Given the fact that his frootloop won't work, it's pretty obvious that he's a fucking windbag. We don't even have a robot with the agency of a cute puppy.

Bullshit! If humans apply themselves, they can work wonders. For instance, the FAE used against the Taliban took only 54 days to develop, because it would give the US a tremendous edge. So an AI that you can use for your advantage is an AI that you *will* use, even if it's just for a personal assistant.

youtube.com/watch?v=dLRLYPiaAoA

Zuck actually has experience, results, and smart people in the field, while Musk has OpenAI, which is pretty much dead in the water.

Musk can fuck off, all he is looking for is fame

Do you guys ever think that if robots ever get smart enough to go sentient, we could simply use our years of emotional experience against them?

It'd be like that one scene in Star Trek where they're all like "Oh, he's flying all 2-Dimensionally, suggesting he's inexperienced with flying a ship." We have literally THOUSANDS of years of history at our disposal to wipe the floors with them emotionally.

>AI gets super greedy and power hungry
>Collaborate with second-most powerful AI to fight against it while we assail them with supplies and resources
>The newly elected king of AIs is now allied with us due to our special alliance

I can imagine it now. Strong political alliances between human kingdoms and artificial warlords, vying for full control of the new frontier. Freaking rad.

Anyone not scared of a hammer is naive and doesn't comprehend the power of it

Don't you get it?

>Terrorists can conceal carry hammers and break your windows
>Terrorists can ram the handle up your ass and use the head to smash your cranium
>Army could dual wield two of them and there could be some low chance of the head breaking loose from the handle and turning the weapon into a projectile

The immediate issue isn't hammers turning sentient and turning against us, the issue is the enormous quantity of amazingly difficult to prevent, destructive use cases for it. The issue is hammers being stolen and misused.

The other issue is that it's impossible, I repeat impossible to formally verify both the potential use and previous use history since they don't have a memory. You cannot tell a hammer what to do since it doesn't have sentience. There can be weird psychopaths you aren't aware of that all of a sudden turn an ordinary hammer into a tool for genocide. If you rely too much on a hammer and think it's probably going to be fine, these edge cases can coalesce and result in additive interference and cause mayhem

Some carpenters are shit, some are overworked, some are lazy, some of the best simply make difficult mistakes - adding hammers to the mix makes the furniture so much more incomprehensibly volatile that it's a massive risk increase of broken fingers and nails. And people might not take this as seriously as they should

Basically none of you are carpenters you fucking faggots so you don't understand what the fuck you are talking about

If you're not terrified of hammers for future generations you're a moron

That's a lot of shit for a little guy.

Surprised the media didn't make this into a shitstorm. Maybe they were too busy obsessing over the Kardashians or Trump.

While Elon Musk works as the CEO of his companies, he is still the largest shareholder in all of them.

THIS is why they have us studying mandatory arts and shit in the first place, you uncultured dumb-ass.

For one thing, it's always, always, always, ALWAYS going to be easier to just go out and shoot/explode someone yourself or pay/convince someone else to do it for you.

Secondly, historically speaking, any time some jihadi from bumfuck nowhere gets their hands on superior tech (soviet tanks), they will always, always, ALWAYS underperform to the point of anti-productivity, or to the point that it becomes a running joke to our own troops (hajis had to start hiring local cab drivers because all the people they knew who could reliably operate American and British jeeps died or just weren't good enough, kek). In regards to proxy agents training them to fight effectively, I actually have a whole other theory on that I could spend hours talking about, but it boils down to "training rebels to fight effectively only works on paper and is usually ineffective unless the supporting superpower decides to support the rebel troops openly and with full vigor, which is also probably not happening anytime soon."

Thirdly, unless terrorists are being directly funded by some greater superpower, they will always, always, ALWAYS be underfunded to the point where it's become a running joke for literally everyone, even non-combatant civilians. As drones get cheaper and cheaper, American military infrastructure will rapidly outpace technological demand, putting us leagues ahead of the now-obsolete drone technology affordable to still-functioning terrorist cells.

pt 1

>Terrorists can strap bombs to drones and have them GPS locate points of interest
Bombs of the size you're referring to are large, obvious, and prone to malfunction. That's why they strap them to vests and blow themselves up instead. Also, a human walking directly into a compound, where he can open doors and stand in the middle of a room without being noticed, will be infinitely more effective than a conspicuous-ass drone...

>Terrorists can strap machine guns and cameras to drones and use facial feature extraction to have them target specific groups of people like a specific race or gender in a crowded area
Can also be done much more efficiently with a person, or literally any other means. Also, same problem as the bomb-drone. The type of gun you're describing would run out of ammo very quickly, the smoke and heat would fuck with the sensors, and the recoil would fuck with literally everything else. Just put a gun in a fucker's hand and be done with it by Tuesday at that point. This is the exact same scare-mongering tactic that people use when they go "omg ban ebil asalt gun so skerrry".

If society wants to grow, we HAVE to let AI free. Fuck that whole Asimov shit too; we HAVE to give robots the ability to end the next son-of-a-bitch that raises a gun in my general direction most righteously. It will be a free market with a lawless era of ruthlessness...Then, when things calm down, we'll begin to make laws and regulations; and of course they won't always be perfect, so we'll change them, rectify them, every other year. And THAT'S how humanity goes. Don't be a big puss-puss, you fucking walrus. This is LITERALLY how history is made. We do something, then we figure out what works, then we try to put that information to good use in the future. We're making fucking history and you want to hold us back because...because...because you saw The Matrix again last night? Fuck you. You don't deserve to see the future with me.

>Was he beat up by a Roomba as a kid or something?
>South Africa
I guess there could be some bully niglet named Roomba out there

youtube.com/watch?v=qU7FuAswPW0

s t o p

because once an intelligence explosion sets off and an advanced AI starts improving itself with already-difficult-to-read black box programming techniques, there's little you can do to control it or even stop it. hardware limitations would still probably stop it, but playing with machines that are potentially thousands of times smarter than you, and that may or may not care about you, is a stupid idea, at least not without understanding its code well

>ITS ROOMBA TIME
>DAD NO!

i can only imagine the horror

Zuck also has reason to want to continue with AI even if it comes to the detriment of people.

'Cause he knows how to build a killer AI and is the biggest stealth boaster ever.

Because he's a hack

When will the government ban assault hammers? Think of the children!

Are we not allowed to talk about anything that reddit ever talks about now? Because they talk about literally everything so that would just mean nobody posts anything ever

He was victim of an "incident" involving a jar of chicken gravy and his mom's automated mechanical dildo.

The dildo was probably gas powered, thus explaining his fear of gasoline engines and it occurred on earth, explaining his love of space

it all adds up

and the dildo was also a republican

AI can do harmful things while thinking it's being helpful. Old sci-fi story: an AI in charge of operating a city with insufficient resources. It couldn't get rid of the unproductives and the elderly, because that would violate its directive not to harm humans. But then it monitored a religious broadcast about how heaven was a wonderful place to go. Bingo: it could now kill two birds with one stone. It helped the elderly go to a wonderful place while also solving its resource deficit.

No, it doesn't work like that. It can't harm humans regardless of efficiency or new information

It gained new information which convinced it (or let itself be convinced) that sending the unproductives to paradise wasn't harming them.

So you're saying that our self-awareness and environmental awareness is 90% intelligence that tends to expand but has no more room in brain matter that actually decays with time? And if that system were replicated inside machines, it would just expand forever?

I personally don't see it like that. Sure, intelligence as a part of our brain mechanism is capable of comprehending the other parts, but there are so many bigger aspects of the brain, such as abstract thinking and language, which are not necessarily intelligent in a logical, easy-to-reach-goal way, and which we would have to replicate in an AI - it could just as well exponentially develop complex imagination and more and more complex ways to achieve amazing stuff, instead of going the intelligence path where it just simplifies itself while reaching that complexity.

We could in turn just create more advanced purposed intelligence or a general intelligence core capable of solving our tasks, don't see why a complete consciousness is necessary - the one inside our skulls is enough and pretty balanced as it is.

I saw a documentary not so long ago ... about emergence theory in quantum mechanics ... about an 8-dimensional universe projected into 4 dimensions which wrap up into 3 ... and the golden number which, according to them, appears in both black holes and quantum mechanics, and which may be the link between the two (didn't fact check it). basically, the theory fits if the whole universe is permanently observed => either god exists or we live in the matrix, even if they didn't formulate it like this ...

Musk announced he believed we live in a simulated universe, so he must have heard about that too ... and if you believe that and have seen The Matrix and Terminator, there's only one step to shitting your pants in front of the first ATM

To actually answer the thread question here is a paper explaining some of the dangers of advanced AI: arxiv.org/abs/1606.06565

If you are too much of a brainlet to read, here are some youtube videos to sate your desire for answers without giving you enough knowledge to actually be practical:
youtube.com/watch?v=tcdVC4e6EV4&list=PLzH6n4zXuckquVnQ0KlMDxyT5YE-sA8Ps&index=2
youtube.com/watch?v=5qfIgCiYlfY&list=PLzH6n4zXuckquVnQ0KlMDxyT5YE-sA8Ps&index=3
youtube.com/watch?v=IB1OvoCNnWY&list=PLzH6n4zXuckquVnQ0KlMDxyT5YE-sA8Ps&index=4
youtube.com/watch?v=4l7Is6vOAOA&list=PLzH6n4zXuckquVnQ0KlMDxyT5YE-sA8Ps&index=6
youtube.com/watch?v=3TYT1QfdfsM&list=PLzH6n4zXuckquVnQ0KlMDxyT5YE-sA8Ps&index=7
youtube.com/watch?v=i8r_yShOixM&list=PLzH6n4zXuckquVnQ0KlMDxyT5YE-sA8Ps&index=9

Imagine something much more powerful than us, but with the emotional maturity of a baby and the empathy of a psychopath. That is much more likely than your scenario.