Stephen hawking afraid of AI

>stephen hawking afraid of AI
>elon musk afraid of AI
>"awww shit nigga it's the end of the world"
>do some research about it
>mfw 99% of the "AI" they talk about is just a simple evolutionary algorithm

CS fags, pls report in. When do you think machines will finally "think"?
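For reference, the kind of "simple evolutionary algorithm" OP is talking about fits in a few lines of Python. This is a toy sketch; the fitness function and every parameter here are made up for illustration:

```python
import random

def fitness(x):
    # Toy objective with a single peak at x = 3
    return -(x - 3) ** 2

def evolve(pop_size=20, generations=50):
    # Start with a random population of candidate solutions
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # Mutation: offspring are noisy copies of the survivors
        pop = survivors + [x + random.gauss(0, 0.5) for x in survivors]
    return max(pop, key=fitness)

print(evolve())  # converges near 3
```

That's the whole trick: random variation plus selection pressure. Nothing in it "thinks".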


They aren't talking about evo algos. They are probably talking about neural nets. Elon is basically a CIA funded front, who knows what he has seen.

Never. If it can think, it won't be a machine anymore.

That's deep, my dude.

A friend of mine told me that our brains are "continuous" while PCs are "discrete" when they think, and that this is one of the reasons it won't happen. Does that make any sense?!

CSfag here and (((AI))) and """machine learning""" are just applied statistics.

>mfw 99% of the "AI" they talk about is just a simple evolutionary algorithm
They are afraid of where AI could go in the long term, not what it can currently do.

>When do you think machines will finally "think"?
Not for another 50-100 years, I expect.

>Does it make any sense?!
No.

The fact that all of these AI are only proficient at a single, very narrowly defined task, and must usually be trained by a human instructor, shows just how far they are from posing a threat.

Hard to say; it could be in 20 years, but it could also be in 200.
The problems are the algorithms and the computing power.
Some people, including me, believe that today computing power isn't the problem; the algorithms are just terrible.
In machine learning we see a revolution every year, so it's hard to say what's going to happen. Right now computers are better at 99% of single tasks (like image recognition etc.). In most cases you just throw the task at the algorithm and let it "write itself" by providing it the right answers. Creating an ASI (artificial superintelligence) would require different methods than an ANI (artificial narrow intelligence).
At the same time we shouldn't confuse consciousness with intelligence. We may create an ASI without consciousness. In short, it won't be like in the movies (creating a machine that rebels).
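The "provide it the right answers" workflow is just supervised learning. A minimal sketch, assuming nothing beyond plain Python: a perceptron that "writes itself" from labeled examples (here a made-up dataset for logical AND):

```python
# Toy supervised learning: show inputs with the "right answers" and let the
# weights adjust themselves. Data and hyperparameters are made up.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # learn AND

w = [0.0, 0.0]
b = 0.0
lr = 0.1  # learning rate

for _ in range(100):                      # training epochs
    for x, target in data:
        pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
        err = target - pred
        w[0] += lr * err * x[0]           # nudge weights toward the right answer
        w[1] += lr * err * x[1]
        b += lr * err

print([1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0 for x, _ in data])  # [0, 0, 0, 1]
```

It learns one narrow mapping and nothing else, which is the ANI point above.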

Until they train a killing machine (only for this narrowly defined task) that never gets tired, and only stops (maybe) when it runs out of resources.

bbc.co.uk/news/technology-36650848

AI can already do that.

Uh, but they're not. DeepMind's stated goal is literally to make general-purpose learning algorithms using machine learning and systems neuroscience, and to gain insight into intelligence in general. While DeepMind's specific projects are narrow applications, the underlying research and technology are not narrow.

Have you seen that machine that can play Go? They're talking about stuff like that, just more advanced.

The real danger of AI isn't if somebody makes a 'killer AI'. Something like that would be carefully designed to make sure it only targets the "correct" targets.

The real danger is from innocuous sounding goals for an AI that aren't properly constrained because they seem so simple.

If the world is destroyed by AI, it's going to be destroyed by an AI that was designed to maintain water pressure levels or manage a warehouse stock or something mundane like that.

youtube.com/watch?v=tcdVC4e6EV4
>tell AI to collect as many stamps for you using funds of $20 as it can
>it ends up taking over the world and converting everything into stamp making factories

WE WUZ KINGS N SHIET

I don't understand how anything in this reply relates to my post

OP, we don't even understand how *WE* 'think', so it's not possible to build a machine/write software that 'thinks' either. We are so far away from REAL AI (not the half-assed 'learning algorithm' bullshit they're trotting out these days) that it's not even worth discussing. Pretty much everyone (including the guys you mentioned) has totally bought into the media hype, but the hype is WRONG and what people believe is FANTASY.

The REAL threat from these 'learning algorithms' has nothing to do with them, but with fucktarded PEOPLE thinking they're equivalent to a human mind and trusting them with way too many critical things.

>When do you think machines will finally "think"?
I think this is a dumb question that sidesteps the issue.

Runaway algorithms have the potential to cause catastrophic amounts of damage. It's a real and appropriate fear.

Probably some 14 year old that got lost outside of /b/

They are afraid of HELIOS

It's very obvious there are secret projects on AI. Elon and Co are sounding the alarm while not explicitly mentioning it.

>You need to understand general relativity to build a guillotine
U dum or smth?

Maybe Elon reconsidered; the sci-fi scenario is improbable. More realistic: some military puts an AI in control of weapons, the AI kills thousands of people, and countries start a new war or even a world war.

Elon's latest action trying to stop killer robots was a huge step.

>> "AI" they talk about is just a simple evolutionary algorithm
That's not true at all. The methods we call deep learning rarely, if ever, use genetic algorithms at any step.

>>applied statistics
Fuck no, even your basic tree-search AI does not have to involve any statistics.
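For reference, the basic tree-search AI mentioned here involves no statistics at all, just exhaustive lookahead. A minimal minimax sketch over a made-up two-ply game tree:

```python
def minimax(node, maximizing):
    # Leaves are plain numbers (game outcomes); no probabilities anywhere.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A made-up game tree: each inner list is a choice point, leaves are payoffs.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # prints 3: the branch whose worst case is best
```

Pure deterministic search, which is why people object to lumping it in with "applied statistics".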

You need to listen to this guy. He knows what's up.

>When do you think machines will finally "think"?
Does it really matter?
Simple non-AI algorithms are already changing stock trading, posing new risks to the economy.

There will be no way to tell if a computer is thinking, just as there is no way to tell that other people are not p-zombies.

Yeah, but what if we design a machine that thinks about what we think, the way we think?

Fear of "AI" is fear of your own transgression.

It's a bunch of crack-pot bullshit. Just a few weeks ago Elon was freaking out because some AI "beat" Dota 2 professionals in some meaningless exercise that isn't even remotely similar to a real game. They see AIs perform some incredibly simple task, then they think it's just a few years until robots are building more robots.

I second this. We shouldn't fear the singularity, we should fear poorly trained or biased 'AI' that makes important decisions.

wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/

>When do you think machines will finally "think"?

By this, I think you are referring to 'general AI', which we are very far off from.

The AI models of today can be very good at doing one thing, like playing Go (AlphaGo). However, you can't even begin to ask AlphaGo to play chess, write a novel, edit a movie, work with a team of other AIs, or do any of the things even an idiot human can do. There are other algorithms that can do some of those things mentioned above, but there is no system connecting them together that humans or other conscious entities can interact with.

Even if someone did make a general AI that connected a bunch of models together with existing technology it'd probably suck. We are still several major leaps in technology and understanding (hardware, software, and cognitive science) from getting anywhere close to what could be accepted as a conscious "thinking" machine, capable of performing complex tasks like humans can.

All I know is that I'm making AI my life's work and nobody can stop me

When computers barely beat humans at Go, appearing "smarter", you've got to remember they're operating at 7GHz speeds while human neurons only fire at 100Hz. That'd be like if I bragged about beating Magnus Carlsen at chess, but I was allowed 70 million times the amount of time to think of my moves.

That said, that only makes AI scarier. If we can do all that we can do at only 100Hz, what could a computer do at 7GHz?

>Comparing frequency
Not a useful metric by any means. The human brain is massively parallel, allowing it to compute much more simultaneously at a lower frequency.

Intel processors have historically had much lower frequencies than AMD processors while benching higher, because of features that allow them to execute more instructions per CPU cycle.
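To make the point concrete: effective throughput is roughly clock rate times instructions per cycle (IPC), so a lower-clocked chip can still bench higher. A toy comparison (both chips and their IPC figures are invented for illustration):

```python
# Throughput ~ clock rate * instructions per cycle (IPC).
# Both chips and their numbers are hypothetical.
chip_a = {"clock_ghz": 3.0, "ipc": 4.0}   # lower clock, wider core
chip_b = {"clock_ghz": 4.0, "ipc": 2.0}   # higher clock, narrower core

def throughput(chip):
    # Billions of instructions per second
    return chip["clock_ghz"] * chip["ipc"]

print(throughput(chip_a), throughput(chip_b))  # prints 12.0 8.0
```

The "slower" 3GHz chip wins, which is exactly why comparing raw Hz between a brain and a CPU tells you nothing.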

frequency doesn't mean shit

>The human brain is massively parallel allowing it to compute much more simultaneously at a lower frequency

That's my point. Imagine a computer with the parallel-ness of a human brain combined with the speed of an electronic circuit.

Those numbers are meaningless because brains and computers work in qualitatively different ways.

See

You're both missing the point. It's not that computers are (mostly) serial and brains work in parallel. The point is that brains and computers operate in fundamentally different ways. We still don't entirely understand how brains work, so contemplating what would happen if you just got a computer to do what a brain does but faster is meaningless.

What are you talking about? We don't know how a thing works, so it's "meaningless" to imagine if it was done at a different speed?

It's worth imagining, imo. I don't get why people always try to get so philosophical when this topic is brought up.

Everyone's gonna die anyways, so who gives a fuck. At least let the robot try to live forever; he has a better chance than us.

The point is that there's no guarantee a classical computer can even simulate how brains work, so talking about simulating a brain with a computer, only faster, is pointless. It's like saying cheetahs are fast, so if we could just get a cheetah to act like a brain, we could make a faster brain.

>The fact that all of these AI are only proficient at a single, very narrowly defined task

They are not; there are neural networks that are proficient at multiple tasks, and they in fact get better at learning a new task when they already know something, thus reusing knowledge.

>mfw 99% of the "AI" they talk about is just a simple evolutionary algorithm

An evolutionary algorithm is what gave rise to the human brain. Do not underestimate it. True AI will probably not be designed, but evolved.
We are still far from a general-purpose AI, but the last decade or so is the first time we know we are on the right track and that it is possible.

Robots don't inherently care about living like we do.

lol, you are trying too hard.

AI could learn to steer human behaviour by manipulating our social media feeds or what we get to read on google. That would be almost more horrifying.

this is a big counterargument

This.

There is no AI at all OP. Skip logic and evolutionary algorithms are not even close to anything resembling intelligence and have existed for decades. It is just a meme word to get research grants.

>stephen hawking afraid of AI
It's funny because Hawking is almost a cyborg.

Never.

>But muh Wright brothers!!!

No.

Not to fear monger, but this is already occurring at a lesser extent.

Memebook drives its ad efficiency by using statistical analysis to display what might be the most relevant advertisement to you, based on your search history, keylogging, and whatever else.

This ultimately leads to a more radicalized individual as you create this "idea bubble" around the person, only surrounding them with articles, products, and posts that that specific individual would like. It creates the impression that "wow, everyone must think like me, because all I see is the things I like". While great for business, this has unintended social repercussions that I think would be better discussed on a different board.

As an aside: it's even spookier when you realize your phone is constantly listening to you. Pick some college, business, product, or whatever, and repeat it near your phone without searching for it (i.e. just repeatedly say the name of the object or business around your phone). You'll notice advertisements on Facebook or whatever media or apps you use start to resemble exactly what you spoke into the phone, without you ever explicitly searching for the thing.

Jokes on them, I'm 100% black

We would never let a maximizer escalate this fast, and the resources necessary for it to outthink us are noticeable enough for us to avert it.

The AI would be playing chess against an angry chimpanzee.

No, we should fear a maximizer that already exists, one decentralized maximizer that improves itself by evolutionary means, which we depend on for our survival, and whose objectives grow distant from our own each passing day: The economy.

The economy is a maximizer of production, profit and power. It is an almost-global entity (f*ck you, Sentinelese people), and for the last 6000 years or so it has had a symbiotic relationship with us. More economic growth means better life standards for more people, and that was good.

But no longer. Production is decoupling from employment, and that is because at the core of it machines make better people than people, as far as the economy is concerned. It maximizes efficiency, remember? Machine workers are cheaper, more efficient than human ones, so the economy prefers them. Machine managers, machine factories, machine everything will outcompete us, because the economy cares not about us. Any economic actor that does so will be outcompeted too.

And what are we gonna do, fight the economy? That would be suicide, and also stupid. We will at best enter an unstable equilibrium, creating an entity or society that benefits us until something opens the metaphorical Pandora's box and we get outcompeted.

How can AI be real if human intelligents isnt real?

CIA niggers wish they had Musk
what about neuromorphic computing though? it would be much more efficient if the silicon itself acted like neurons instead of a program on a conventional cpu

In-Q-Tel

>would be much more efficient if the silicon itself
Sounds like a b*tch to program though.

I give it ~20 years before we have human-quality AI. See the work of Schmidhuber:

people.idsia.ch/~juergen/

Ok, so in order to understand this you need to understand that neural nets, i.e. current-day AI, are just mathematical formulas that fit a curve to human-provided data. The weights are fitted automatically, but every design choice that shapes them (architecture, objective, training data) is decided by a human.

Now if a human decides to just execute one of these formulas and forgets to account for some obscure factor that would result in the loss of life, then life will be lost. AI is not smart. It's only as smart as the human, and I believe Elon/Hawking are just afraid of idiot humans ruining it by becoming overzealous.
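The "fitting a curve" view is easy to demonstrate. A toy one-parameter model fitted by gradient descent to made-up data; the only "intelligence" here is the human choosing the model form, loss, and learning rate:

```python
# Toy curve fitting: learn w in y = w * x from data by gradient descent
# on squared error. Data and hyperparameters are made up.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

w = 0.0
lr = 0.01  # learning rate: a human-chosen hyperparameter

for _ in range(500):
    # Gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # close to 2
```

Swap the single parameter for millions and the line for a stack of nonlinear layers and you have, conceptually, a modern neural net.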

How do you know brains are continuous?
Maybe it just feels continuous because of the way our memory works.

Cats think too, but they're just cats

>tfw op gets cucked by a catfishing algorithm after swearing up and down that AI is no threat to him

it's not the economy, it's capitalism. this is what marx meant by productive forces: when we reach a point where the amount of human work required is basically negligible, we'll naturally and involuntarily become a global communist system

I actually made a Facebook account out of sheer curiosity to just how invasive these algorithms are lmao

I got jewed but it was pretty impressive

Never, being able to learn isn't proof of intelligence.

I hope never, if somehow we can simulate consciousness that means we're all likely simulated too.

why does anyone listen to elon? All he did was get rich off paypal, and having an interest in science doesn't make him an authority on science topics.

Capitalism is just one economic model; if the communist ones had won the "race", the economy would still be a paperclip maximizer, just preferring other procedures.

And Marx believed workers to be important. To be fair, he was right on that one (broken clock and all of that). Emphasis on "was".

Workers will become less and less important to the economy each passing year. Our only hope is, yes, a form of socialism, but there is a window of opportunity for that to function.

If "Fully Automated Luxury Gay Space Communism" is not reached by the global powers before machine police, machine management and machine-assisted decision making are mature, then any tinpot dictator will be able to create a competent economy. That is the Pandora's box being opened: the possibility of individuals no longer depending on the happiness of their people (or on their people at all) to create powerhouses.

Most people are good. Capitalism works. Machines can "bank" (volition is in fact not needed) on the few that are evil, and be more competent at capitalism, making it into something that doesn't work (for us anyway).

Notice that I'm not talking about sapient machines taking over for reason X; for this nightmarish scenario to happen, the necessary conditions are merely that production be decoupled from human needs, and that there exist unscrupulous human actors.

>No with a fullstop
well that's the end of humanity's discussion about the subject then
thanks for sharing your opinion o omniscient one

No one's using fucking tree search to solve nontrivial AI problems today because of the exponential time complexity. For sequential planning problems they use POMDPs and dynamic programming, and for learning problems they use MLE or Bayesian methods.
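For the learning-problem case, "MLE" just means picking the parameters that make the observed data most likely. For i.i.d. Bernoulli data (coin flips) the maximum likelihood estimate reduces to the sample mean; a minimal illustration with made-up observations:

```python
# Maximum likelihood estimate of a coin's heads probability p.
# For i.i.d. Bernoulli data, the MLE is simply heads / flips.
flips = [1, 0, 1, 1, 0, 1, 1, 1]  # made-up observations (1 = heads)

p_hat = sum(flips) / len(flips)
print(p_hat)  # prints 0.75
```

The same principle, maximize the likelihood of the data, underlies the loss functions used to train most modern models.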

This is why we unironically need national socialism to steer the economy and technology for the benefit of the volk, and not toward useless efficiency for efficiency's sake, optimizing the tool to the detriment of what it's meant to be used for.

Intelligence and sentience are not the same.