Is machine learning a meme or is it the foundation for a new frontier?

Attached: 1431575096576.jpg (841x755, 144K)

bump

make a machine that can generate its own arbitrary code to solve its own problems.

isn't that the end game?

Take this from somebody doing computer vision research and machine learning on a regular basis:

It is overhyped everywhere; I can't tell you how overhyped it is. The worst offenders are software engineering positions: they hire data scientists who know the basic ML algorithms but have no idea what the math behind each one is. Undergrads are next; even if they've taken an ML course, they most likely don't know the math behind the algorithms. Finally, the graduate and research level. You'd think it isn't overhyped at this stage, and honestly, it isn't NEARLY as overhyped, only slightly. But researchers still tend to think "huh, what happens if we just plug an ML algorithm onto this?" Oh wow! It works! There is some research on finding useful features beforehand, but sometimes plugging in an ML model is almost a crutch.

Regardless:
1) Industry is by far the most overhyped (they literally use buzzwords on the regular) (also, this assumes the employees don't have PhDs or a Master's in ML or a related area)
2) Undergrad is the 2nd most overhyped
3) Graduate/Research is the least overhyped

And to be honest, the graduate/research level understands what they're working with and how every single algorithm works, but they love using ML algorithms as their default. So to an extent, they only overhype by a teeny tiny amount and sometimes use buzzwords.

ML has the capability to be 'very strong', but you have to use it in the correct way. ML algorithms aren't even CLOSE to creating robots that will take over the world. Trust me...
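To make the "just plug an ML algorithm onto this" point concrete, here is roughly what that move looks like in practice. This is a minimal sketch only; the dataset and model are arbitrary stand-ins:

```python
# What "just plug an ML algorithm onto this" looks like in practice:
# a stock classifier with default hyperparameters, zero domain thought.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(random_state=0)     # default everything
scores = cross_val_score(clf, X, y, cv=5)        # 5-fold cross-validation
print(scores.mean())  # "Oh wow! It works!" -- high accuracy, zero insight
```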

it's fake as fuck, with fraud language for fucktard kool-aid drinkers

" >en.wikipedia.org/wiki/Deep_belief_network
exactly what the language ruses and me pointing out what it means

"When trained on a set of examples without supervision, a DBN can learn to probabilistically reconstruct its inputs. "
>Initial loading data is sent into the network; it is programmed to keep a coded track of these inputs and manipulate that set of data further, with probability coding

"The layers then act as feature detectors.[1] After this learning step,"
>It didn't LEARN anything... it followed the programmed commands and carried them out on the data sent into it

" a DBN can be further trained with supervision to perform classification. "

>LOL "with supervision" is the magic phrase here that tries to imply the former loading and carrying out programmed commands wa "learning", because in this later case it's SUPERVISED (new human programming operations), while in the former it was not "SUPERVISED"...

They are using magic faery trick language so god damned dummies can be fooled.
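For what it's worth, the two-step recipe that Wikipedia passage describes is short enough to just write out. A minimal sketch using scikit-learn's BernoulliRBM (a single RBM layer standing in for a full DBN); the first fit stage never sees labels, the second one does:

```python
# Step 1 is unsupervised: the RBM fits features from raw pixels, no labels.
# Step 2 is supervised: a classifier is fit on those features against labels.
# Single RBM layer only; a real DBN would stack several of these.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values into [0, 1] for the RBM
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05,
                         n_iter=20, random_state=0)),  # unsupervised step
    ("clf", LogisticRegression(max_iter=1000)),        # supervised step
])
model.fit(X_train, y_train)  # the RBM ignores y; the classifier uses it
print("test accuracy:", model.score(X_test, y_test))
```

Whether you want to call step 1 "learning" or "following programmed commands" is exactly the argument in this thread, but at least this is what the words refer to.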

that's the endgame that never comes, but subtle lies and frauds will keep making claims, and eventually the gibberish will be so arcane and so buried beneath earlier tiers of bullshit tech twango words that it will take years to pull apart...

So the liars will win this one: the machine won't do shit except what it's programmed to do, but the lying assholes are already telling tales to the contrary

This. It's a literal lie. Every time you see '''''''Microsoft'''''' creating an ''''AI''''', it's a complete lie. Even the story about an '''''AI''''' creating its own programming language was a complete exaggeration and lie

Here is the big deal of deep learning in a nutshell. We now have a reliable toolkit of techniques to learn differentiable functions AND programs.

The implications of this are hard to overstate. This is huge. This is the first stepping stone to telling computers what we want them to do, rather than telling them how to do that thing. Declarative programming for the real world.
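As a toy version of "learning a differentiable function": you specify the loss (the what), and gradient descent finds the parameters (the how). A minimal sketch in plain numpy, with made-up data:

```python
# Fit w and b in y = w*x + b by gradient descent on squared error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=200)  # "true" w=3, b=0.5

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = (w * x + b) - y
    # gradients of mean squared error with respect to w and b
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)
print(w, b)  # lands near 3.0 and 0.5 without us ever writing the "how"
```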

It isn't that simple. Deep learning isn't telling computers what we want them to do; it's literally all statistics and mathematics. The only thing that makes it fancy is that a computer can evaluate those equations a million times faster than a human. All it does is learn little weights for a large neural network. It can't make sense of the data at a high level like a human can. Object detection? Sure, the deep learning model can take the image pixel values and give you a probability for each of your 20 classes of objects, but it's up to you to make sense of it. A computer doesn't truly know what an object is.
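To be concrete about that "probability for each of your 20 classes" step: the network emits raw scores and a softmax turns them into probabilities; everything downstream of those numbers is the human's job. The logits below are made up for illustration:

```python
# Turn raw network scores (logits) into class probabilities with a softmax.
import numpy as np

def softmax(logits):
    z = logits - logits.max()   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.1, 0.3, -1.0, 4.5] + [0.0] * 16)  # 20 raw class scores
probs = softmax(logits)
print(probs.argmax(), probs.max())  # index 3 wins with the highest probability
# the model only outputs numbers; deciding what "class 3" means is up to you
```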

livemint.com/Sundayapp/zDSjhU5IzcuI7ypo6W4WtL/Why-data-science-is-simply-the-new-astrology.html

I don't agree with his arguments but I kek at the title

Thanks, Google, I totally trust you now.

;)

Machine learning is modern statistics. That's literally all it is. It's extremely useful but it's not magic.

Right now it's a meme. What researchers call 'machine learning' is really just advanced statistics. We are nowhere close to creating a general-purpose learning algorithm on the level of humans.
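One way to see the "it's just statistics" point: a stock ML library's linear regression and the textbook least-squares solution give the same numbers. A minimal sketch on synthetic data:

```python
# sklearn's LinearRegression vs. the classic least-squares normal solution.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.7]) + rng.normal(scale=0.1, size=100)

ml = LinearRegression().fit(X, y)                # the "machine learning" way

Xb = np.hstack([X, np.ones((100, 1))])           # add an intercept column
beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)    # the "statistics" way

print(ml.coef_, ml.intercept_)
print(beta)  # same coefficients; the last lstsq entry is the intercept
```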

For the record, I am only responding to this post because those girls are sexy.
That's how we deal with malware.
Ever used a gameshark?

When you complain about hype in excess, you are generating hype by contrast. Thus, your complaint ironically makes you a hype-man.

The extension of your criticism is that people put too much faith in ML as a field of its own, and not enough trust in fundamental Mathematics, Computer Science, and Electrical Engineering. There is still a good amount of money to be made in ML if you treat it as an application of applied math: it creates economic efficiency where it replaces human error.

Your ordered list needs to be recast as a Venn diagram. Private research and small companies have the best market value, whereas undergraduate degrees and big companies have the most horribly inflated value. Graduate students are all over the spectrum, and the best thing you can do is classify them by age. A 28-year-old with a PhD got lucky and was favored by university nepotism. A more senior researcher is always going to be more valuable, because a majority of implementation is always going to be based on extant infrastructure and systems.

This is NOT the end game of ML but Veeky Forums fags will tell you that people think it is in order to discredit ML

Jumping headfirst into a Wikipedia article does not compensate for a lack of training in graduate-level mathematics. Try reading about the Borel measure before you attempt to interpret what is meant by the word 'learning' in this specific context.

No, we already had genetic algorithms with evolving control structures. It seemed like that research slowed down after I got plagiarized by a private school guy who misunderstood my lesson about the exponential family of distributions as an equivocation between NNs and GAs. Short summary: Georgetown University completely fucks up the University of Maryland, because it's a bunch of rich southerners who only come north of the Potomac to rob people.
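For anyone who hasn't seen one: a genetic algorithm in its simplest form, no neural net involved. A minimal sketch on the toy "OneMax" objective; population size, mutation rate, and the rest are arbitrary choices here:

```python
# Evolve bit strings toward maximizing a toy fitness function.
import random

random.seed(0)
GENOME_LEN, POP, GENS = 20, 30, 50

def fitness(genome):
    return sum(genome)  # "OneMax": count of 1-bits, maximum is GENOME_LEN

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                     # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print(max(fitness(g) for g in pop))  # approaches GENOME_LEN over generations
```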

Why don't you go look into proof assistants.

>This is the first stepping stone to telling computers what we want them to do

It's not the first, because there was a long time without any visual interface.

>A computer doesn't truly know what an object is.

We're reliant on semiotics to state what is truly known. Semioticians rely on mathematics for objectivity. If you want to push philosophy up the hierarchy, then you have to restrict it to the design of language and grammar.

>What researchers call 'machine learning' right now is really just advanced statistics

No, "advanced statistics" is the name of the course that they make people take at the private school diploma mill. "Digital Analysis" would be harder to hijack for a buzzword.

It isn't the endgame, because we still have to improve that machine. Even if the machine can improve itself, that improvement is stored in a bounded value and subject to a resource constraint.

no one can and it will never happen
it's all bullshit, no matter what any jackass says

in other words it's a dumb shit program that follows human-programmed commands, PERIOD
and calling it dumb shit is giving it too much credit

>Even if the machine can improve itself
it can't, you dumb shit

>When you complain about hype in excess, you are generating hype by contrast.
FO jewboy liar

>KEEP CORRECTING THE RECORD EVERYONE ELSE

I'll just leave this here.

medium.com/ai³-theory-practice-business/understanding-hintons-capsule-networks-part-ii-how-capsules-work-153b6ade9f66

Haven't seen this picture in a while.

this

You can make the same argument about free will and biological instinct. It's a fallacy because you aren't recognizing the detachment after initial conditions are set.

CAD is one of the earlier applications of ML in semiconductor manufacturing.

That is not an argument, yumadbro?

it's a meme for brainlets who don't actually know how to solve the problem