How to get started with Deep Learning as fast as possible?

Hey Veeky Forums, what resources do I use to get through Linear Algebra, Machine Learning and the coding necessary for it as soon as possible? My language of choice is Python.
I want to know exactly what I need to learn (which parts of math and stats), and what resources will explain it most effectively (without too much theory/bull).

>I want to know exactly what I need to learn (which parts of math and stats), and what resources will explain it most effectively (without too much theory/bull).

Give up now, you're not cut out for graduate level academics.

>Machine Learning... My language of choice is Python.

This new commandment I give unto you, kill thy self.

Hey I wanna learn some really difficult shit but I don't wanna get into any difficult book! What do you recommend?

I'm more than decent at linear algebra and probability, what else should I know in the field of math?
So you'd suggest...Java? LOL.

Not about difficulty, more about relevance. If quantum mechanics were relevant to the topic I'd spend a night going through the most useful tome on them.

A night? You fucking faggot really don't even know what you are talking about.
If you want to learn this you will need to spend years.

My IQ is well above 65, so our abilities differ. I am asking for advice, not the pleb-level ad hominem arguments that roaches such as yourself use to rationalize their own shortcomings.

For another user that doesn't give a shit about difficult, where would I need to start?
Assume that I have high school knowledge of physics, maths and computing.

*Difficulty, and how long it'll take me.

Advice? Start from the bottom.
It will take you a ton of time and hard work. You could start with some basic algebra books.

What areas in particular would I need to have a good knowledge of? This includes algebra and other fields.

>most useful

You're being lazy and that's not going to cut it.

>more about relevance

The way ML works is you grab a decades old paper from the Annals of Statistics, throw it at a problem, see if it sticks, and then go back to the beginning. You'll never know "just enough" statistics.

Stochastic processes are nice to know, since the field may move toward more probabilistic models; the world is random and noisy. General statistics is also the basis of the induction that machine learning is all about.

You also need to be able to program, and to build bigger programs to actually test what you are doing. Since the theory behind many learning algorithms, deep learning included, is not well established, you should be able to benchmark them properly and visualize your results. Scraping data to learn from matters as well.
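A minimal sketch of that kind of benchmarking in plain Python, with a hypothetical toy dataset and a trivial majority-class baseline (the point is the held-out split, not the model):

```python
import random

random.seed(0)
# Hypothetical toy dataset: (feature, label) pairs from a simple threshold rule.
data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(100))]
random.shuffle(data)

# Hold out 20% for testing; never benchmark on the data you trained on.
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

def majority_baseline(train):
    # Simplest possible "model": always predict the most common training label.
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

pred = majority_baseline(train)
accuracy = sum(int(pred == y) for _, y in test) / len(test)
print(accuracy)
```

Any real model you train should at least beat a baseline like this on the held-out set before you trust it.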

Since these are biologically inspired techniques, a bit of neuroscience is also nice to know; some activation functions in deep neural networks are based on biology.
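For instance, two of the standard activation functions, the sigmoid and the ReLU, are one-liners in plain Python (a framework-independent sketch; the biological story is loose inspiration, not a model):

```python
import math

def sigmoid(x):
    # Smooth squashing function: maps any real number into (0, 1),
    # loosely analogous to a neuron's saturating firing rate.
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Rectified linear unit: passes positive inputs, zeroes out negatives,
    # loosely inspired by the one-sided response of biological neurons.
    return max(0.0, x)

print(sigmoid(0.0))  # 0.5
print(relu(-3.0))    # 0.0
print(relu(2.5))     # 2.5
```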

At some point the learning has to be turned into rules for deduction, not only induction, so logic is nice to know too (propositional and first-order/predicate logic).

Linear algebra and calculus are mandatory for anything interesting.

You're wasting your time asking this board, though, since nobody here knows anything about computers and they usually make threads about how much they hate CS. Head over to /g/.

deeplearningbook.org/

thank you very much, user :)

>I'm more than decent at linear algebra and probability, what else should I know in the field of math?
Are you fucking high right now?

You sound like you're not a retarded toddler that needs to have its hand held, so start with the prereqs listed here:

Veeky Forums-science.wikia.com/wiki/Computer_Science_and_Engineering#AI.2C_Machine_Learning.2C_and_Computer_Vision


If you really are at HS level the prereqs alone will take anywhere from 2-4 years if you want to do it right.

Can somebody tell me the opposite of the OP?

I am a math grad and want to know the basic idea behind machine learning in the shortest/most elegant/abstract possible way.

That's fine, I start college in a year.

Math student here who has done the Coursera machine learning course. If you've had courses in multidimensional real analysis, numerical analysis and linear algebra, you already have most of the requirements to fully understand the basic concepts.
The main part of machine learning is optimization. Imagine you have code (= a function) depending on some parameters (= variables) which returns some result (e.g. a quantitative measure of a robot's performance given those parameters). Adapt the parameters in order to optimize the result.
Translated into mathematics, this means we want to optimize a function R^n -> R. The basic strategy is so-called gradient descent: compute the gradient of the function at the current parameters and step against it to decrease a loss (or along it, which is gradient ascent, to increase a reward).
The most basic models you'll encounter in a typical machine learning course are built on this principle; e.g. you usually pair your model with a convex loss function, since a convex objective's local optima are global, so the descent can't get stuck in a bad one.
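The loop described above fits in a few lines of plain Python. A minimal sketch on a made-up one-dimensional objective f(w) = (w - 3)^2 (the learning rate and step count are arbitrary illustration values):

```python
def grad(w):
    # Analytic gradient of the toy objective f(w) = (w - 3)**2.
    return 2.0 * (w - 3.0)

def gradient_descent(w=0.0, lr=0.1, steps=100):
    # Repeatedly step against the gradient to shrink the loss.
    for _ in range(steps):
        w -= lr * grad(w)
    return w

w_star = gradient_descent()
print(w_star)  # converges toward 3.0, the minimizer
```

Real models do exactly this, just with w living in R^n (millions of weights) and the gradient computed by backpropagation instead of by hand.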

That's what guys writing PhD theses do. I'm pretty sure actual engineers who write these algorithms IRL don't pick random papers and try to apply them to problems. You're most likely one of those edgy NEETs living in their parents' basements and not doing any applicable work.

Yeah, but this is very basic machine learning. Andrew Ng's course? I've done that one too. I'm more interested in the heavier deep belief networks; is there a list of resources that takes me quickly to applying those? Kind of like neural nets on steroids.

Not 1 week quickly, but not 5 years quickly either.

The definition most people use is that machine learning is whatever makes a machine learn. But what is learning?

A machine learns from experience E with respect to a task T and performance measure P if its performance at T, as measured by P, improves with E. (Mitchell)
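A toy instance of that definition, with made-up choices: T = predicting the mean of a noisy stream, P = squared error, E = the samples seen so far. P at task T improves as E grows:

```python
import random

random.seed(0)
true_mean = 5.0
# Experience E: a stream of noisy samples (toy synthetic data).
data = [random.gauss(true_mean, 1.0) for _ in range(1000)]

def estimate_after(n):
    # Task T: predict the underlying mean, using the first n samples.
    return sum(data[:n]) / n

def squared_error(n):
    # Performance measure P: squared error of the estimate.
    return (estimate_after(n) - true_mean) ** 2

# With more experience, the error shrinks: that is "learning" per Mitchell.
print(squared_error(10), squared_error(1000))
```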

So those are all your constraints, and there is a wide variety of techniques and strategies to make it happen. Some have a strong mathematical foundation for separating concepts optimally, like support vector machines. Others are more biologically inspired, and nobody really knows for sure why they perform better. Part of machine learning also focuses on developing a general learning theory, which is close to statistics; other parts just focus on making things work better and winning competitions.

Yes, it was Andrew Ng's course. Sure, that's the most basic version; I haven't seen any more advanced models, because I didn't care enough.

Actual engineers don't use fucking ML, they use at least semi-empirical models.

>Actual engineers don't use fucking ML
you just went full retard son

OK, if you guys think Andrew Ng's course was so basic, explain the EM algorithm to me, please. Since I will get no responses, we all know you didn't really watch it. sci as usual is shallow; your knowledge has no depth.
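For what it's worth, EM for a two-component 1D Gaussian mixture fits in a page of plain Python. A sketch on made-up toy data (the E-step computes per-point responsibilities, the M-step re-estimates weights, means and variances from them):

```python
import math
import random

random.seed(1)
# Toy data: two Gaussian clusters with means 0 and 5 (made-up example).
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(5.0, 1.0) for _ in range(200)])

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em(data, steps=50):
    # Crude but workable initial guesses for weights, means, variances.
    pi, mu, var = [0.5, 0.5], [min(data), max(data)], [1.0, 1.0]
    for _ in range(steps):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [pi[k] * normal_pdf(x, mu[k], var[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate parameters from the soft assignments.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
    return pi, mu, var

pi, mu, var = em(data)
print(sorted(mu))  # recovered means, near 0 and 5
```

Each iteration is guaranteed not to decrease the data's log-likelihood, which is the whole point of EM; the course derives why.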

It's also kind of an important point for the thread, since people are looking for some theory that is not there. Andrew Ng's course is not basic; it's the real deal if you actually understand it.