Who /machinelearning/ here?

Me.

Honestly thinking about switching. It's so incredibly competitive. Swamped with Asian resume stuffers; very hard to stand out.

Interested in the information geometry of machine learning algorithms (kernel methods mostly)

Anyone know about geometry of learning?

The machine learning boom is a desperate attempt to generate value from "Big Data", which everyone is extremely invested in; we're in for a huge bubble burst when the naive optimism starts declining.

>tfw you are interested in computational models of cognition and your field is being overrun by silicon valley dudebros

>computational models of cognition
How's that working out

Tell us more about your insightful contrarian opinions.

Yes, but with a focus on agi/asi. Everything else is basically "just" statistics and searching. The big data hype is not doing us any good.

Better than expected.

Here is the state of the art in the field:
arxiv.org/pdf/1703.01988.pdf

>and your field is being overrun by silicon valley dudebros
iktf bro. Some CS fields are completely devoid of any discussion and reduced to DUDE APPLICATIONS DUDE JUST STACK MORE LAYERS MUH GOOGLE MUH FACEBOOK, even at the graduate level

Whose cognition is this modelling, computationally?

Broadly speaking mammalian.

It's pretty much an implementation of this work:
papers.nips.cc/paper/3311-hippocampal-contributions-to-control-the-third-way.pdf

Reinforcement learning is nearly at the point where computational neuroscience is being implemented into functional agents. It's a fascinating time.

The connection to actual mammals seems tenuous at best. It seems they've only drawn loose inspiration from data that suggested "the transfer of control from hippocampal to striatal structures over the course of learning". From this they extrapolate an algorithm and test it on MDPs which are awful models if you ask me.

holy shit this is laughable
the bubble is going to pop harder and more painfully than a genital wart

T. Unemployed physics majors mad that cs majors are earning 3x as much without taking complex differential analysis theory

machine learning is literally statistics


"agi/asi" is not research, it's popsci

ML Engineer here.
The weakest point of deep learning is the need for huge amounts of labeled data, when a human brain needs only a few samples to build a somewhat functional model.

Semi supervised learning aims to fix this but I don't think it is the right track, or only partially. Mostly because the assumption that similar labels have similar labels is not always true.
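A toy sketch of what that assumption buys you: the simplest semi-supervised trick is to pseudo-label each unlabeled point with the label of its nearest labeled neighbour, which is only sound where similar samples really do share labels. All the numbers below are made up for illustration:

```python
# Toy self-training sketch: give each unlabeled point the label of its
# nearest labeled neighbour. This only works where the cluster
# assumption ("similar samples have similar labels") actually holds.

def nearest_label(x, labeled):
    """Label of the labeled (feature, label) pair closest to x."""
    return min(labeled, key=lambda p: abs(p[0] - x))[1]

labeled = [(0.0, "a"), (1.0, "a"), (10.0, "b"), (11.0, "b")]  # two 1-D clusters
unlabeled = [0.5, 10.5, 5.2]

pseudo = [(x, nearest_label(x, labeled)) for x in unlabeled]
print(pseudo)
```

The point at 5.2 sits between the two clusters, which is exactly where the assumption breaks down: its pseudo-label is essentially arbitrary.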

The AI we build today with NNs is not actual Artificial Intelligence; it is more of an Artificial Intuition. It is only based on experience and can only be explained by introspection (deconvolution and parameter inspection).

I think the key to actual AI is to use small neural networks for easy and generic subtasks together with symbolic intelligence algorithms for more abstract reasoning.

Not everything is differentiable, and not everything is based on experience; deep-learning-only approaches are doomed to fail, even with all the hype.

Maybe deep RL could work all the way up to actual cognition but it would require shitloads of processing power and time to be effective.

>similar labels have similar labels
I meant
>similar samples have similar labels

>thinks you need a CS degree to do programming

top kek

Dude, I agree with what you are saying in the thread, but DeepMind is a fucking joke.
They literally manage to publish a lot because muh Google and because NIPS/ICML reviewers are scared of blowing the money away.

What confuses me about machine learning is how there is not some kind of trash-in-trash-out limit. You can get an NN that works really well at a classification task, but all it does is use the information that you have given it. It is quicker and more reliable than a human, but there is no gain over what would in theory be possible with a human, because you have given it all the information it knows.

Will it ever be possible for a computer to think creatively and use past knowledge to figure things out?

me

hate it desu

The outstanding results they got in the mentioned article suggest that you are wrong.
DeepMind are really good at what they do.

I don't know how that analogy would work in classification. Should the network generate an image that it feels is an edge case by itself and ask a human for a label?

And it also depends on your precise definition of creativity. If you accept that it could mean "extrapolation between known data points", then generative adversarial networks fit the bill.

The human brain has INSANE amounts of data to learn from. Whenever we see something, we don't just have one static picture, we usually have a full moving series of pictures that is even three dimensional, and we have complete models to cancel effects of lighting, perspective etc out. Our brains are capable of extracting information very efficiently to get the most out of it. That is mostly up to complexity, so you can't really compare the human brain to current deep learning efforts.

Anyway, you are of course right, figuring out how to upscale and combine these networks in the future is an exciting question, but calling deep learning doomed to fail is a ridiculous statement, especially considering how general the approach is.

That's true; even the simplest scene or situation provides a person with a vast amount of information. This is why everyone knows that reading about something, following a guide, or reading a book about some type of situation doesn't compare to actually experiencing it. This is why learning from mistakes is so much easier and sometimes more effective.

Let's think about dates, for example. You could read dozens of romantic novels and poems but they will never communicate the vast amount of sensory information just one date or relationship communicates. Your parents may give you hundreds of tips and advice but sometimes you need to have some bad experiences in order to really learn and adapt to the concept of relationships. There's such a huge amount of information, from information about the people surrounding you during a date, the place right next to you, the food you choose, the taste it has, the atmosphere, the other person's perfume, the way they dressed, the way the fork reflects light, the way the environment changes your mood, which in turn changes the way you adapt to the environment, which in turn changes the situation, which in turn... you know, you get the point.

So it makes sense that a neural net, even if it were able to perfectly simulate a human brain, would need huge amounts of information in order to be actually useful.

As someone who spent 6 months reading DeepMind papers at the beginning of my thesis, I can assure you that when you dig deep enough you clearly start to see the little tweaks and white lies that make their results MUCH less impressive.
The most banal example of this is their computation times: in 2012 everyone was amazed at how they solved Atari games with DQL, but you look at the numbers and... 250 fucking million frames for a total of 10 days of Tensorflow runtime on a K40C.
Duh-fucking-uh it works (not even that actually, they had to keep an epsilon greater than 0 in evaluation because their deterministic policy literally sucked).
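For anons who haven't touched RL: "keeping an epsilon greater than 0 in evaluation" just means the agent still takes a uniformly random action some fraction of the time instead of always following its learned policy. A minimal sketch (the q-values are made-up numbers):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon take a uniform random action, else the argmax."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# epsilon = 0 -> fully deterministic greedy choice
print(epsilon_greedy([0.1, 0.9, 0.3], epsilon=0.0))  # prints 1
```

Evaluating with epsilon > 0 occasionally injects random actions, which can unstick a deterministic policy that would otherwise loop forever, and that's the crutch being complained about above.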

6 months later they coined the term "catastrophic forgetting" and solved that too. Well gee, thanks; maybe if they hadn't learned catastrophically in the first place we could have saved a slot at NIPS for a real research paper.

I hate how people ride Google's dick just for the sake of it, even if they literally added zero value to ML theory besides "durrr we got datacenters the size of Texas and money to steal people from Stanford XDXD".

TL;DR: DeepMind sucks and I hate the current DL community

AlphaGo is pretty cool though

no, why is it interesting?

If you had asked any machine learning expert whether Atari games could be learned, they probably would have given you a laundry list of reasons why it couldn't be done.

Yet DeepMind did it. Hate all you want; they are doing shit no one has ever done before. If it's all so trivial, why aren't you publishing that shit yourself and getting a 6 figure salary?

So at first glance I see all the buzzwords of deep and memory.
So LSTMs training on labels and being combined?

>Was the only set of upper division classes I wanted to take
>They're only offered in Odd Years
>I graduate next year
>I'll only have finished the prereqs this fall
Fuck me sideways

No, it's not training on labels. This is reinforcement learning.

Also, can you not even look at the damn pictures? They created a fully differentiable memory system.

Memory augmented RNNs are literally cutting edge shit. Why are you sperglords pretending this stuff is mundane?

Why are you talking out of your ass?

Also... there are no LSTMs in that work. They reference work that uses LSTMs but they don't use them in their model

Reward/labels, same shit man. If you are updating an NN as a value function, the labels for the net are the rewards you pass through the value function you want your net to learn, from whatever meme Bellman equation.
It's late mane
Yeah, my mistake
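To make the reward-as-label point concrete, here's a minimal tabular Q-learning sketch on a made-up two-state chain MDP: every update is just a regression of Q(s, a) toward the Bellman target r + gamma * max_a' Q(s', a'), i.e. the reward (plus the bootstrap term) plays the role of the label.

```python
import random

# Tiny made-up chain: state 0 -> state 1 -> terminal; only the last
# "right" step pays reward 1. Action 0 ends the episode with reward 0.
random.seed(0)
gamma, alpha = 0.9, 0.5
Q = {s: [0.0, 0.0] for s in (0, 1)}   # two states, two actions each

def step(s, a):
    """Return (reward, next_state); next_state None means terminal."""
    if a == 0:
        return 0.0, None
    return (0.0, 1) if s == 0 else (1.0, None)

for _ in range(500):
    s = 0
    while s is not None:
        a = random.randrange(2)                    # pure exploration
        r, s2 = step(s, a)
        bootstrap = 0.0 if s2 is None else gamma * max(Q[s2])
        target = r + bootstrap                     # the "label" for (s, a)
        Q[s][a] += alpha * (target - Q[s][a])      # supervised-style update
        s = s2

print(round(Q[1][1], 2), round(Q[0][1], 2))  # prints: 1.0 0.9
```

Note how the discount propagates the reward backwards: Q(0, right) converges to gamma * 1 = 0.9 even though that step itself pays nothing.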

...

>subsea engineer that is about to go back to university for a masters in aerospace
>intrigued about machine learning
>want to make a neural network to predict hydrate formation in subsea equipment
Is it worth the effort? I'm mediocre at Python and MATLAB and a novice at C++. I have access to a lot of data.

ML seems interesting. Could I work in the field with a bachelors in EE/linguistics?

I'm still an undergrad, but am starting to think about my grad degree options so I was curious if it was strictly hardline CS.

I am, and I just wrote those thoughts. I want to learn how to teach basic deep learning to anyone, cuz it's fun explaining what I do to people but it usually takes a bit of backstory.

Check out GANs, they were super big at NIPS this year. Basically the model makes up fake examples and tests itself against them to get better.

Similar question, can a math or physics BS land a grad school position in ML?

There are libraries like Keras for Python which make neural networks extremely accessible. Combined with a mind-boggling number of tutorials lying around the net, you could probably pick it up reasonably quickly, or at least get an idea of what you're in for.

Be warned though, training a network to get nice results can take a very long time depending on the problem and the hardware you have - you could always look into renting AWS's GPU instances if necessary.

Atari games had already been solved before DeepMind did it with deep learning tho, do you even know what you're talking about?

Me too user, headed into uni as a freshman next fall at a top research university. Is it too late?

>catastrophic forgetting
>2012
Dude this shit has been solved since 1999

As long as you keep the network simple (no CNNs or RNNs), neural networks actually take a very short time to train.

I took a course on it at a conference last Tuesday.

Dozed off for most of it. It's interesting stuff but goddamn is it boring to hear about. I'd rather get some practical experience than hear about how this classifier can distinguish groups slightly better than that one.

>Reward/labels, same shit man
Not at all.

Using reward is a much harder problem.

not true
Link?

I honestly don't believe you're an ML Engineer from the shallowness of your post.

I'm not aware of any earlier learning algorithm so general that it can learn to play many different atari games just from raw pixels and knowledge of the score.

You're exaggerating, but I agree that DeepMind's amazing results are tied to their unmatched infrastructure.

Never heard of it before your post. A reviewer of "Information Geometry and Its Applications" on Amazon suggested that the field doesn't really add anything usable to ML knowledge.

Do you have ideas on what you would be interested in exploring?

>DeepMind's amazing results are tied to their unmatched infrastructure.

True. But also remember DeepMind was only founded in 2011. Deep Learning/Neural Nets were considered fringe research, borderline pseudoscience, before then, so obviously no one was throwing massive resources at them.

Look into the manifold hypothesis

ITT: Engineering students who still use linear regressions and KNN in MATLAB

u jelly that KNN already beats your sophisticated learning algorithms on any task anybody cares about, and is only an import statement away from any sperg running python?

>inb4 my math will totally be the basis for breakthrough ML techniques some day!

kek, all we actually need is bigger datasets. fuck you're math

SVM>KNN

>fuck you're math

In your explanation you seem to miss the fact that if you simply stack linear neurons you'll never be able to learn nonlinear decision boundaries.
Also, this is not really how brains work.
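The linear-stacking point is easy to verify by hand: two linear layers with no activation between them collapse into a single linear map, so the depth buys no expressive power, while a ReLU in the middle breaks the collapse. Tiny hand-picked weights, no libraries needed:

```python
# Composing linear layers without activations is itself linear:
# W2 @ (W1 @ x) == (W2 @ W1) @ x for every x.

def matvec(W, v):
    """Multiply a small matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def matmul(A, B):
    """Multiply two small matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

W1 = [[1.0, -2.0], [0.5, 1.0]]
W2 = [[2.0, -1.0], [0.0, 1.0]]
x = [1.0, 1.0]

two_layer = matvec(W2, matvec(W1, x))        # "deep" stack, no activations
one_layer = matvec(matmul(W2, W1), x)        # single collapsed linear layer
print(two_layer == one_layer)                # True: depth added nothing

relu = lambda v: [max(t, 0.0) for t in v]
nonlinear = matvec(W2, relu(matvec(W1, x)))  # ReLU in between breaks the collapse
print(nonlinear == one_layer)                # False
```

That's the whole reason nonlinear activations exist: without them, a hundred stacked layers are exactly one matrix.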

>Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The Arcade Learning Environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.

Did you actually read any of that? They get below-human results on every game they test.

That's just a platform to test agents...

DeepMind learned to play atari games purely from the pixels of that game

>after years of studying math and codemonkeying finally get a ML job
>it's mostly data cleaning and plugging in ready made packages

should have become a barista

Nah I just copy pasted the first reference in the DQN paper.
Here, faggot:
Matthew Hausknecht, Risto Miikkulainen, and Peter Stone. A neuro-evolution approach to general Atari game playing. 2013.
Almost human performance; your initial claim about the state of the art before your sugar daddies saved the world is unfounded.
Don't worry pal, you can still suck their big overmarketed dick for free.

No, clearly you are not good enough to either become a researcher in this field or work at a reputable company that does real ML instead of what you described.

>he thinks human cognition has anything to do with machine learning
LMFAO. Better drop out while you can, my friend.
THIS

>>he thinks human cognition has anything to do with machine learning

I don't. But the dudebros are coming anyway because they think what I do is machine learning.

>he thinks human cognition has anything to do with machine learning
But it does. How could an RNN based function generator be considered anything but machine learning? We just don't understand human cognition yet.

Redpill me on ML anons, any way to cash on that shit?.

It is. That paper is machine learning. Not sure where you got the impression that the authors were trying to create a computational model of the cognition of anything. At best they are creating a computational model of certain classes of human behavior, but more to the point they are simply trying to create game-playing agents.

If a technique came along tomorrow that was proven definitively to NOT be what happens inside of minds but nevertheless improved their agent's performance, they would use it, without question.

I don't understand this post. Are you implying that human cognition is somehow analogous to "an RNN based function generator"?

What language should I program in, and what resources are there, if I just want to identify and track a specific type of object (in pictures or video)? I know nothing about machine learning, but I know Java, Python, some C++, and some (but probably not enough) MATLAB.

You are getting hung up on the word cognition. It doesn't mean what you think it does, particularly in this context

You are not ascribing enough to it. Your definition of cognition seems to be "complicated behaviors".

Seriously though, what's the supply and demand like for ML work?

Seems like too many people and too much hype resulting in

I've heard there is still a large need for ML engineers

Python

Yes, there are neurons with recurrent connections that generate functions for motor control. Do we have a better description of a brain?

No explanation is better than a non-explanation.

Don't know what there is to explain. I literally just stated what the brain is made of and its purpose, and related it to an analogous structure that is under the domain of machine learning. What happens in the hidden layers of that network is what is referred to as "cognition".

>How could an atom-based quantum-wave collapser be considered anything but particle physics?
>Literally just stated what the brain is made of and its purpose and related it to an analogous structure that is under the domain of particle physics. What happens in the solutions to the many-body Schrödinger equation is what is referred to as "cognition".
See how unhelpful that is?

Machine learning will never match the human brain until they stop trying to do everything with backprop and actually study the cortex.

>Abandon something that works and study this thing we have no hope of understanding

with such a defeatist attitude I'm surprised you haven't killed yourself

You mean actually using the value as a label and inputting it into an estimator, or finding the proper value function based on reward?

If the former, I haven't seen a value function that doesn't end up as V(x) = r, where you couldn't model V(x) with an estimator labeling x with r.

With your head so far up your own ass I'm surprised you can still use the internet

except machine learning is used in every facet of our technology, not just about data mining

Why would we want to emulate the human brain? It has inherent restrictions and convolution that we can circumvent entirely.

>It has inherent restrictions
It also has inherent advantages.

A brain-like system wouldn't replace all other ML, but it would do many of the things that people are struggling to get ML to work on.