Artificial intelligence

>artificial intelligence
>deep learning
>machine learning

Are these as groundbreaking as popsci articles are making them out to be or is it all one big meme?

Isn't it all just execution of fancy linear algorithms with the 'value' attributed to certain inputs 'tweaked' randomly until certain values match an outcome determined by a human operator? How is this 'intelligent'?

How can it 'learn' when there is no physical rewiring of neurons (like our brain) to encode new understandings?

I have no doubt it's useful in the same vein that using a calculator to calculate 138181 x 8389393 is useful. But I don't see it replacing human involvement because it's not truly learning or understanding. It's executing human-made algorithms to match a human-made outcome.

I'm also a dumb dumb so tell me I'm wrong Veeky Forums

Other urls found in this thread:

medium.com/intuitionmachine/pathnet-a-modular-deep-learning-architecture-for-agi-5302fcf53273
wired.com/2017/05/revamped-alphago-wins-first-game-chinese-go-grandmaster/
usgo.org/news/2017/05/new-version-of-alphago-self-trained-and-much-more-efficient/
techcrunch.com/2017/05/26/googles-alphago-ai-defeats-team-of-five-leading-go-players/
youtube.com/watch?v=i6K3GI2_EgU

>muh strong AI

y meme frogger

Hand waving 'strong AI' doesn't dismiss what OP said

>Hand waving 'strong AI' doesn't dismiss what OP said

Yes, absolutely. Deep learning enabled computers to beat humans at Go.

>> how can it 'learn' when there is no physical rewiring of neurons
Some approaches allow this:
medium.com/intuitionmachine/pathnet-a-modular-deep-learning-architecture-for-agi-5302fcf53273
A better question is why humans are able to learn without being able to do backpropagation.
>>like our brain
We don't know how our brain works, but we do know our brain doesn't work anything like deep learning or 'neural networks'

Generally they use fancy math (gradient-based optimization) to tweak the weights rather than random adjustments
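A rough sketch of what that fancy math does, in the dumbest possible case (a toy example I made up: one weight, squared error, nothing from any real system):

import numpy as np

# toy data: learn y = 3x from examples
xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 3.0 * xs

w = np.random.randn()  # random starting weight
lr = 0.01              # learning rate

for step in range(200):
    pred = w * xs
    err = pred - ys
    grad = 2 * np.mean(err * xs)  # derivative of mean squared error w.r.t. w
    w -= lr * grad                # step downhill along the error surface, not a random tweak

print(w)  # ends up near 3.0

The adjustment is pointed in the direction that reduces the error, which is the whole difference from tweaking at random.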

They do have some level of learning/understanding, given that they attempt to mimic human predictions, often with data that would be less useful for human-based prediction

It's still very much a human guided process. I mean, the AI isn't re-writing algorithms or rules for itself to follow. It's following pre-set rules to match a human directed outcome.

For it to be truly 'intelligent', it will need to create its own algorithms, rules and outcomes. But I don't see this as possible for something that is essentially linear math equations compounded on top of each other. It will make mechanical tasks easier but that's about it.

oh you're just memeing. my bad

It isn't rewriting its own code, but it is a flexible hard-coded algorithm in that the weights and inputs form their own internal algorithm?

Yes, that's how I understand it. The inputs it assesses are human decided (ie only look for edges, or feature X etc) and it 'adjusts' the weight attributed to the inputs via trial and error until the inputs provide the output (designed and intended by a human).

I don't see what's groundbreaking about this process. I definitely don't see how it will 'unlock our understanding of intelligence and consciousness' as Demis Hassabis constantly memes in every popsci article

It's a fucking meme like most of the shit peddled on this board and r/Futurology.

> and it 'adjusts' the weight attributed to the inputs via trial and error until the inputs provide the output

lol. It performs a trial of its weights with some input and adjusts the weights according to the error

I read a few books on AI and went to the r/machinelearning to see what the community was like around it.

I have never met so many condescending, thinks-they're-a-genius cucks in a community in a long time. Even worse than here.

>Even worse than here.
any highlights?

how is that different?

it's a measure of the error in nearby weight-space, so it usually won't trial anything worse than what it currently has, whereas blind trial and error might
They can get stuck in less-than-optimal solutions (local minima) for the same reason
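Toy illustration of that (function picked arbitrarily just to have two valleys): where gradient descent ends up depends entirely on where it starts.

def f(x):
    return x**4 - 3*x**2 + x   # non-convex: one deep valley, one shallow valley

def df(x):
    return 4*x**3 - 6*x + 1    # its derivative

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * df(x)        # always steps locally downhill
    return x

print(descend(-2.0))  # ~ -1.30, the deeper valley
print(descend(+2.0))  # ~ +1.13, stuck in the shallower valley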

Thanks. Is the initial trial of weights random?

>How can it 'learn' when there is no physical rewiring of neurons (like our brain) to encode new understandings?

just wait till the programs can rewrite their own code based on new information.

That's not possible with our current understanding and use of neural networks

>not possible with our current understanding and use of neural networks
don't tell me they couldn't pull together all the world's programmer geniuses and throw an unlimited budget at the project and come up with sentient AI within 10 years. We know it's just a matter of time and effort. Sure, it would be complicated as hell, with billions of lines of code, but a lot of the work could be done with programs that write programs.

>> truly 'intelligent'
And what pray tell is true intelligence then?
>>it's following preset rules to match a human directed outcome
So did you. Now if we knew how to arrive at that outcome we wouldn't need such techniques
It enabled deepmind to beat humans at Go. That's a pretty big fucking deal, because the Go game tree explodes and becomes far too large to search by brute force as Deep Blue did with chess. Instead, the algorithm learned a policy for choosing which tree states to expand and a value function for predicting how good those states are.
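Roughly, and this is a sketch of the idea rather than DeepMind's actual code: each simulation descends the tree by picking the child that maximizes a mix of the value seen so far and the learned policy prior, something like the PUCT rule described in the AlphaGo paper:

import math

def select_child(children, c_puct=1.0):
    # children: toy structure; each has a learned prior p, visit count n,
    # and total simulation value w
    total_n = sum(ch["n"] for ch in children)
    def score(ch):
        q = ch["w"] / ch["n"] if ch["n"] else 0.0                  # average value so far
        u = c_puct * ch["p"] * math.sqrt(total_n) / (1 + ch["n"])  # prior bonus, decays with visits
        return q + u
    return max(children, key=score)

# the policy network's prior steers search toward promising moves
# before they've even been visited once:
moves = [{"p": 0.6, "n": 0, "w": 0.0}, {"p": 0.1, "n": 10, "w": 4.0}]
print(select_child(moves))  # picks the high-prior unvisited move

So instead of expanding everything (hopeless in Go), the learned policy prunes the tree and the learned value function replaces exhaustive search to the end of the game.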

You can do pre-training type stuff, but yeah, there will still be a random initialization to begin with that can affect the end result
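Toy demonstration of that (the setup is made up: two redundant features, so many weight vectors fit the data equally well, and the seed decides which one you get):

import numpy as np

x = np.array([1.0, 2.0, 3.0])
X = np.stack([x, x], axis=1)  # two identical features: infinitely many exact fits
y = 2.0 * x

def train(seed, lr=0.01, steps=5000):
    rng = np.random.default_rng(seed)
    w = rng.normal(size=2)                    # random initialization
    for _ in range(steps):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

print(train(0))  # some split with w1 + w2 ~ 2
print(train(1))  # a different split, also w1 + w2 ~ 2: the seed leaves its fingerprint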

AI researchers don't care about if the AI is "really" learning and "really" intelligent. That's an issue that is more interesting to philosophers, AI researchers just want concrete results.

You get better results when you know what's going on though ;D

>And what pray tell is true intelligence then?
Anything that is capable of understanding

>So did you
Evidence?

>It enabled deepmind to beat humans at Go.

Like when deepblue beat a human at chess? Or like how the hard mode in a chess game can beat a human until the player learns its mechanisms and can overcome it? If Ke Jie could vs deepmind more than three times, I'm sure he could read its patterns and overcome it.

It's pretty unfair that alpha go was literally trained on Ke Jie's videos and Ke Jie went in against it with no experience. Yes, deepmind was able to game Ke Jie and other experts by being fed all of their videos and then some. Likewise, if they were able to play more than a three-match series against deepmind, it would be interesting to see how much deepmind could adapt.

I wouldn't be so willing to take the results as evidence that AI is becoming truly intelligent from AlphaGo's results alone.

>And what pray tell
who speaks like this

Computer vision is a good one, pictures are pretty much a general purpose input
Language is probably a bit the same

>don't tell me they couldn't pull together all the world's programmer geniuses and throw an unlimited budget at the project and come up with sentient AI within 10 years

that's exactly what I'm telling you

Doing the programming isn't the only problem, you dumb fuck.

>that's exactly what I'm telling you
>Doing the programming isn't the only problem

seems we have some brainlets here who lack the vision to accomplish the goal.

There's only one way to believe that machine learning won't eventually become the most powerful tool mankind has created thus far, and that's to believe the brain has some kind of magical quality about it which cannot ever be quantified.

You're right that at the moment, none of the programs are 'truly' learning or understanding in the sense that they can then apply that knowledge to broader, contextually unrelated areas (that would be a general AI, which is a long way off). But right now, there are a massive number of targeted applications where the AI outperforms humans, and you can bet that humans will be outright replaced by these algorithms in the near future.

> isn't it all just fancy algorithms to match a predetermined outcome? how is that intelligent?
How is what the human brain does any more 'intelligent' than that? If we attempt a new task, we fail over and over until our synapse strengths adjust such that we are finally able to complete the task. Mechanistically, there's not much difference aside from the size and complexity of these adjustments.

>blue brain failure

>blue brain failure
only spending 1 billion and thinking they could accomplish the goal

of course it failed.

No, you lack the vision to identify the obstacles. Firstly, training a neural network is an exponential-time problem, and secondly, we rely on neuroscientists to discover more before we can emulate consciousness. This is a problem so non-trivial that you could call it computer science's 101%-efficient machine.

Literally only a single user in this thread has had any clue about ML thus far lmao.

No, retard, it's not random setting of the weights. First of all, that's only in the vector space model of ML dealing specifically with discriminative models. Secondly, you optimize the model's parameters against the loss function via convex optimization methods (generally).
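For contrast with the local-minimum problem above, a toy sketch of why convexity matters: with a convex loss like least squares, any starting point rolls into the same global minimum that the closed-form solution gives (nothing here is from a real system):

import numpy as np

# least squares: f(w) = ||Xw - y||^2 is convex, so there is one global minimum
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 3.0, 4.0])

w_closed = np.linalg.lstsq(X, y, rcond=None)[0]  # direct solution

w = np.array([10.0, -10.0])                      # wildly wrong starting point
for _ in range(20000):
    w -= 0.01 * 2 * X.T @ (X @ w - y) / len(y)   # plain gradient descent

print(w_closed)  # ~ [1, 1]
print(w)         # ~ [1, 1] too: no bad valleys to get stuck in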

Now a note to the cucks who don't understand ML but are jerking it off: hey dipshits, Vapnik, Valiant, etc. derived some beautiful results in learning theory a long time ago. They only hold true under certain assumptions. Deep learning will not "learn" everything given enough time and resources... there are other research trends trying to tie together logical (relational) understanding with statistical understanding. Also, fields like NLP have studied the problem of understanding extensively.

Neural networks working and being loosely based on biological neurons is simply a hilarious coincidence. Neural network research is pure math, there is no neuroscience involved except to try to get published in general science journals lmao.............

>Isn't it all just execution of fancy linear algorithms with the 'value' attributed to certain inputs 'tweaked' randomly until certain values match an outcome determined by a human operator?
If you remove the word "randomly" and change "a human operator" to "a data set" this is pretty much accurate.

>The inputs it assesses are human decided (ie only look for edges, or feature X etc) and it 'adjusts' the weight attributed to the inputs via trial and error until the inputs provide the output (designed and intended by a human).
Not quite. It will end up looking for edges or other features via convolution filters if those features help it reduce its error rate, but it's never actually "told" to look for them. See for example the image classifier that learnt to identify dumbbells but thought they always had to have an arm attached.
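For the mechanics, here is a hand-written vertical-edge kernel applied by convolution (toy example); the point is that in a trained net, kernel values like these fall out of gradient descent rather than being written in by a human:

import numpy as np
from scipy.signal import convolve2d

# the kind of edge detector early convolutional layers often converge to
kernel = np.array([[1.0, 0.0, -1.0],
                   [2.0, 0.0, -2.0],
                   [1.0, 0.0, -1.0]])

img = np.zeros((8, 8))
img[:, 4:] = 1.0  # image: dark left half, bright right half

response = convolve2d(img, kernel, mode="valid")
print(np.abs(response).max(axis=0))  # large only in the columns straddling the edge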

>>Anything that is capable of understanding
and what does it mean to understand something?

>>Evidence?
Your brain cells are following preset rules, and you didn't learn to read (or do many other activities) entirely without human help.

>>Like when deepblue beat a human at chess?
Yes. But again, Go has a much larger branching factor than chess, which makes brute-forcing the solution as Deep Blue did impossible. And before AlphaGo, no machine had ever been able to defeat a master Go player.

>>If Ke Jie could vs deepmind more than three times
he had already played against AlphaGo in an online match and lost before the big one:
wired.com/2017/05/revamped-alphago-wins-first-game-chinese-go-grandmaster/

>> literally trained on Ke Jie's videos and Ke Jie went in against it with no experience
Bullshit. The version that beat Ke Jie was trained entirely on games that the previous version of AlphaGo played against itself: usgo.org/news/2017/05/new-version-of-alphago-self-trained-and-much-more-efficient/

AlphaGo also beat a team of five human go experts:
techcrunch.com/2017/05/26/googles-alphago-ai-defeats-team-of-five-leading-go-players/

Of course, the real news is that AlphaGo was more efficient in exploring moves than Deep Blue was. Deep Blue investigated around 200 million positions per second; the version that beat Lee Sedol investigated about 100,000 moves per second. The version that beat Ke Jie was 10 times more efficient again and ran on a single computer with a TPU.

>>truly intelligent
And who the fuck cares? Machines are now doing things they weren't capable of doing before. A big one out there is robot grasping. People tried for years to solve this problem without much progress; now, with deep learning, we're actually making progress:
youtube.com/watch?v=i6K3GI2_EgU

You still haven't defined whatever the hell it means to be 'truly intelligent' by the way

>and what does it mean to understand something?

To discover a rule about something and apply it in a broader context.

It was also trained on videos of his games.

> Deep Blue investigated around 200 million positions per second; the version that beat Lee Sedol investigated about 100,000 moves per second. The version that beat Ke Jie was 10 times more efficient again and ran on a single computer with a TPU.

And how many do you think Ke Jie's or Lee Sedol's brains processed? A fraction of Deep Blue's, and yet they're still able to go toe-to-toe with a computer. Neural networks allow more efficient automation, but they don't emulate the way we think at all.

>And who the fuck cares?

What the hell are you in this thread for? Nobody is denying that it is useful.

>How is what the human brain does any more 'intelligent' than that?

The fact that a brain doesn't need 100,000 cat images to vaguely identify what a cat is.

>There's only one way to believe that machine learning won't eventually become the most powerful tool mankind has created thus far, and that's to believe the brain has some kind of magical quality about it which cannot ever be quantified.

That, or we somehow create chips that can automatically rewire themselves and form new physical connections

>The inputs it assesses are human decided

It doesn't have to be human decided. If you link together multiple neural networks, you can have one network manipulate the training data for another. As computational power increases we will witness how enormous complexity can emerge and solve problems in ways we can't. If that's not groundbreaking I don't know what is.
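A crude sketch of that idea (not any specific published system): let a model's own errors choose what it trains on next, instead of a human curating the batches. In real systems the selector can itself be a trained network, GANs being the famous example.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)  # hidden rule to learn

w = np.zeros(5)  # the learner: bare-bones logistic regression

for _ in range(100):
    p = 1 / (1 + np.exp(-X @ w))                 # learner's current predictions
    hardest = np.argsort(np.abs(p - y))[-100:]   # selector: pick the worst-handled examples
    Xb, yb, pb = X[hardest], y[hardest], p[hardest]
    w -= 0.1 * Xb.T @ (pb - yb) / len(yb)        # train only on the selected batch

acc = np.mean((1 / (1 + np.exp(-X @ w)) > 0.5) == y)
print(acc)  # well above chance, without anyone hand-picking the data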

The "it's not really learning" argument is going into philosophy. What is it to learn exactly? Can't I use the exact same argument about humans? "We don't really learn, it's just algoritms programmed by evolution."

>It doesn't have to be human decided. If you link together multiple neural networks, you can have one network manipulate the training data for another.

Yes, but the very first input has to be a human one. Also, the parameters for the inputs that any network considers are defined by humans.

It's basically statistics on steroids: a bunch of layers, each qualifying the previous one (depending on the formula). I'm not sure it will replace everyone's job, as "data scientists" are out of most CEOs' budgets. It's esoteric and sounds too expensive.

It's not groundbreaking because it's "intelligent", but simply because of what it allows you to do. The term "Deep learning" makes no metaphysical claim that the machine "learns" as if it's human, the term is like decades old and simply means you solve problems by training an artificial neural network.

If you're actually interested in how it works, stop reading those popsci articles and go do some real research on it. It's pretty interesting to fuck around with.

>>It was also trained on videos of his games.
not the version that played Ke Jie. Source or GTFO.

>>And how many do you think Ke Jie's or Lee Sedol's brains processed?
We don't know how the brain works so I can give no answer here.

>>What the hell are you in this thread for?
OP was asking if this stuff was groundbreaking, my answer is that it is.

>because it's not truly learning or understanding
It has learned the game of Go, from only the rules, better than humans can.
Understanding is another matter, but people don't truly understand things either. We just build shitty little models in our heads based on previous results - most of which turn out to be wrong. At least AIs don't have the luxury of pretending they're not wrong.

To elaborate on this - suppose we plug some rules for mathematics into alphago, demonstrate some human proofs, and give it a goal that's incremental and quantifiable (or put it up against some kind of solution checker) - how would we do on the Millennium Problems?
I think if we were to attempt that experiment, the AI would very much outperform people at the jumps and hoops that link one mathematical step to another; it's just that our current model of math in its entirety is convoluted far beyond something simple like the rules of Go - not to mention a lot of our systems are incomplete. That's the short end of AI right now: the capacity to understand reality BY ITSELF. Our lack of understanding of reality, and our generally computer-unfriendly ruleset, is what's stopping AI from conducting physics/math research, not the AI's ability to conduct algebra and so forth.
It's a bit of an irony that we invented computers to help us with math and ended up having trouble describing math to them, but such is life. I fully expect the fuckers to get it done eventually, by the way, at which time I expect everything short of NP to be solved.

No. Just no.

I mean the alternative is that math is not really interconnected and one "discipline" of math rarely helps solve problems of another nature at the more abstract levels.
And that's just not true.

>gets his ass kicked by AI in go
>H... How is this intelligent?

>gets his ass kicked by machines in multiplying thousands of random thousand-digit numbers
>I... I dis.. I disagree that we should make this calculator the president of the US
on second thought, probably would still be better than Trump

2 eyes * ~20 fps * 60 sec gives up to 1200 stereo image pairs a minute

From a theoretical perspective, nothing much changed.

People just figured out that they can do a lot with machine learning if you apply a lot of processing power and time to it and huge data sets.

What changed in the past decade is that we are now very good at collecting huge datasets. Like, the decade before we already had abundant processing power, but now we also have huge data sets.

Just look at what Google is doing with reCAPTCHA.

Then 1 cat video should be enough for AI. shit why didn't google think of that?!

>The fact that a brain doesn't need 100,000 cat images to vaguely identify what a cat is.
But if you take a human brain that has never seen a cat before - never seen ANYTHING before - and show it a cat picture, will it be able to identify another cat picture from a selection of pictures?
No!

Instead of AI we should just stick with I

Hook up a million raven brains into a supercomputer

>But if you take a human brain that has never seen a cat before - never seen ANYTHING before - and show it a cat picture, will it be able to identify another cat picture from a selection of pictures?
>No!

Proof?

i read somewhere that google's ai created its own language so they had to limit it to human language or some shit idk, im a brainlet, but this stuff seems interesting