Any machine learning anons? I got a question.


Shoot dude

Can't, violates Asimov's laws.

Huh?

A LARPer pretending to be an AI.

Also, I'm taking neural networks and fuzzy logic next year. I know some stuff, but not much.

Huh?

Explain

Hmm, might not be enough. You don't know variational free energy, do you?

kek

Doing a master's in AI atm. Ask away, son.

You heard of the "bits-back" argument? I read the papers but didn't get it.

Also wondering if u read Geoff Hinton's free energy papers?

I haven't actually. Just looking at it now, this seems interesting. Information theoretic approaches are becoming more popular (about damn time) as people try to make sense of deep learning.
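For reference, the bits-back argument can be sketched like this (my notation and summary, hedged, not a quote from any particular paper):

```latex
% Transmit x by first sampling z ~ q(z|x), sending z under the prior p(z),
% then sending x under the decoder p(x|z). The receiver can reconstruct the
% random bits used to pick z, getting them "back", so the net expected code
% length is
\mathbb{E}_{q(z\mid x)}\!\left[-\log p(x\mid z) - \log p(z) + \log q(z\mid x)\right]
  \;=\; F(q) \;\ge\; -\log p(x),
% i.e. exactly the variational free energy, with equality when
% q(z|x) = p(z|x). A better approximate posterior means a shorter code.
```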

No, Hinton's work is very extensive and something that I should really read up on.

Was just wondering if this principle is proven well enough that you can apply it to any system modelling its environment (i.e. the brain) without needing to verify it empirically, since it was mathematically proved that free energy is minimised when marginal likelihood is maximised. All you would need to show is that neuronal representations are latent causes for some sensory evidence, and that the brain is a Bayesian machine (a very popular theory).

I'm actually thinking of topics for my thesis these days. I was considering something with reinforcement learning, since I'd like to learn more about it. You gave me some ideas on other things to look up too.

And I mean free energy, not bits-back.

Interesting ideas. Are you familiar with generative/latent variable models? I have to admit my knowledge of AI is mostly very recent work (deep nets and old school methods applied to deep nets like variational inference, etc). Calling something to be the cause of some data is hard though, you need causality tests for that. I was studying causality a bit a while back.

If you're thinking about reinforcement learning, maybe look up successor representations by Peter Dayan. It might be big in the future since it seems to simplify certain problems in neuroscience, and there's some empirical evidence of its validity in the hippocampus. The idea is 25 years old but only in the last year has it gotten traction, especially from empirical neuroscience, though that's literally one or two studies.
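For anyone curious, a minimal sketch of the successor representation learned with a TD(0)-style update. The environment (a 3-state ring) and all names here are made up for illustration:

```python
import numpy as np

# M[s, s'] estimates the expected discounted future occupancy of state s'
# when starting from state s, under the current policy (Dayan's SR).

def sr_td_update(M, s, s_next, alpha=0.1, gamma=0.9):
    onehot = np.eye(M.shape[0])[s]           # indicator of the current state
    td_error = onehot + gamma * M[s_next] - M[s]
    M[s] = M[s] + alpha * td_error
    return M

# Toy example: a deterministic 3-state ring (0 -> 1 -> 2 -> 0 -> ...)
n = 3
M = np.eye(n)                                # each state initially predicts only itself
s = 0
for _ in range(5000):
    s_next = (s + 1) % n
    M = sr_td_update(M, s, s_next)
    s = s_next

# The TD fixed point is M = (I - gamma * P)^(-1) for the ring's
# transition matrix P, so the learned M should approximate it.
P = np.roll(np.eye(n), 1, axis=1)
print(np.round(M, 2))
print(np.round(np.linalg.inv(np.eye(n) - 0.9 * P), 2))
```

The nice property (and, as far as I understand, part of why it simplifies things) is that expected discounted future reward is then just `M @ r` for any reward vector `r`, so the reward can change without relearning the dynamics.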

What did u study about causality (obvi briefly)?

I only know about generative models through neuroscience and a handful of machine learning papers. I think I said "causes" wrong because I just mean latent variables. In the free energy idea these latent variables are generally approximate anyway. The idea is just that your model parameters (neurons, for example?) can recreate the observable data.

I'll look up all three, thanks. I've sort of given up hope for theoretic grounding of machine learning in neuroscience or related fields. I had expected more of such grounding when I entered the field. Now I just take it as it comes, and theory is icing on the cake.


We studied causality in the context of probabilistic inference: Pearl's do-calculus, some basic tools such as the backdoor criterion, and some stuff about Simpson's paradox.

That's the thing. I don't know if we can model exactly how the brain works, but we can certainly enforce constraints on what the brain needs to do to be a brain. I believe free energy can do this. Hinton has a set of papers on autoencoders and expectation maximisation from the early 90s. You should look. Also, look for an AMA on Reddit by Hinton. He says some good shit you might find interesting.

Yeah exactly. So, information theoretic quantities (not necessarily a mathematical "metric") are being used as objective functions (training criteria, or loss functions) in several influential models like the variational autoencoder (VAE) and generative adversarial networks. Basically any autoencoder learns a latent representation that can reproduce the data. The VAE explicitly learns a Gaussian distribution from which you can sample to generate data. It's really cool.
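A minimal numerical sketch of that objective: the per-example negative ELBO of a Gaussian-encoder VAE with a standard normal prior, using the analytic KL term plus one reparameterised sample. The toy linear "decoder" and all the names here are assumptions for illustration, not a real implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))              # made-up "decoder" weights

def decode(z):
    return W @ z                          # deterministic toy decoder

def negative_elbo(x, mu, logvar):
    # analytic KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latents
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    # one-sample Monte Carlo reconstruction term via the reparameterisation
    # trick: z = mu + sigma * eps with eps ~ N(0, I)
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps
    recon = 0.5 * np.sum((x - decode(z))**2)   # Gaussian log-lik up to a constant
    return recon + kl                          # = variational free energy

x = np.array([0.5, -1.0, 0.2, 0.0])
print(negative_elbo(x, mu=np.zeros(2), logvar=np.zeros(2)))
```

Training a real VAE just means minimising this quantity (averaged over data) with gradients flowing through the reparameterised sample.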

What do u wanna do when u graduate then?

Not sure desu. Currently thinking about it. Not too worried about getting a job, but pretty concerned whether I can get a job that would get me up in the morning. Haven't really found my calling. Also considering a PhD. It's a hard choice since my math background is shit; it's something I am trying to improve.

Lol didn't actually mean to say desu but it worked out nicely

What's your background? Neuroscience?

What I'm really interested in is that there seems to be a mathematical proof that you can describe any machine trying to model its environment iteratively as minimising a free energy function. What's also neat about the function is that it was inspired by free energy in physics. I wonder if information and entropy can be united to some extent, since they are mathematically identical in form.
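The identity behind that claim can be sketched as follows (standard variational notation; this is my summary, not a quote from the papers):

```latex
\log p(x)
  = \underbrace{\mathbb{E}_{q(z)}\!\left[\log p(x,z) - \log q(z)\right]}_{-F(q)\ \text{(negative free energy, a.k.a. ELBO)}}
  \;+\; \underbrace{\mathrm{KL}\!\left(q(z)\,\|\,p(z\mid x)\right)}_{\ge\, 0}
```

Since the KL term is nonnegative, F(q) is an upper bound on -log p(x), so minimising free energy over q both tightens the bound and, at the optimum q(z) = p(z|x), is equivalent to maximising the marginal likelihood.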

Well, I'm actually doing a psychology degree, but my dissertation concerns how we can use animals to develop psychiatric drugs for executive dysfunction, which is more neuroscience stuff.

I just find the theoretical stuff about neuroscience and machine learning interesting though.

It is interesting for sure, no doubt. I don't have a background in the more theoretic stuff because AI is really about getting results, which sometimes ends up being a hacky engineering solution. There are proofs that neural nets with hidden layers are universal function approximators, but they weren't actually used extensively until a scalable training algorithm (backpropagation) was popularized in the mid-80s.

You can't be that shit at math. In that AMA thread I told you about, Hinton says that he's not great with math. He said all his ideas come up without using math; he just uses math to prove them. Like with the free energy thing: he was inspired by Helmholtz free energy and came up with the idea, and then it was actually his collaborator that did most of the math.

And Hinton was one of the people who popularised backpropagation, so...

And Hinton did a psychology degree as an undergrad too!

I'll link it when I get home.

Hah, so true. It's funny, someone also told me the same thing today. It makes sense. I feel my "shit math" might just be a mental block. Actually in AI you don't need formal crazy maths, you just need to be able to use it to achieve what you want.

Yeah please do!

Just making sure, I meant the AMA thread.

That's what I think. I think the math part is a lot about creativity rather than being a savant.

What college u at?

University of Amsterdam

Oh yes, I've seen the AMA a while back but haven't read it extensively.

reddit.com/r/MachineLearning/comments/2lmo0l/ama_geoffrey_hinton/

here big boy

Oh, here we go

what?

Thanks

Stop writing like a monkey. End your sentences with periods. You don't say 'u'; it's 'you'. Writing correctly is part of being a good scientist, because you can express yourself in a way that is pleasing to read. This thread looks like the comments on an Instagram photo. If you have any respect for yourself, act decent and write properly.

did you get your nips in a knot because you don't understand the topic being discussed?

Can you reduce the timestep length of an n-dimensional vector sequence using an HMM?
LSTMs can't handle that long of a sequence.
>inb4 use dilated convolutions
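Not sure an HMM buys you much, but the "decode each timestep to a discrete hidden state, then collapse runs of identical states" idea can be sketched like this. The nearest-centroid quantiser below is a stand-in for a real HMM decode (e.g. something like hmmlearn's GaussianHMM followed by predict); everything here is illustrative:

```python
import numpy as np
from itertools import groupby

def collapse_states(state_path):
    # run-length collapse: [0, 0, 0, 1, 1, 2, 2] -> [0, 1, 2]
    return [s for s, _ in groupby(state_path)]

# toy n-dimensional sequence that dwells in two regimes
seq = np.vstack([np.zeros((50, 3)), np.ones((60, 3))])

# stand-in "decode": assign each timestep to its nearest centroid
centroids = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
dists = ((seq[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
path = np.argmin(dists, axis=1)

short = collapse_states(path)
print(len(seq), "->", len(short))   # 110 -> 2
```

You lose the dwell-time information, of course; whether that matters depends on the task, which is why the anon above is right that hard numbers are needed.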

op here, who wrote with a u

I'm a CS major, and I'm well aware of this topic. But it is enraging to see /b/-tier communication on Veeky Forums. And always start your sentences with a capital letter. Stupid kid.

fuck off

contribute or don't, but quit whining

If I'm studying undergrad computer engineering, what would it take to go to grad school for AI research if I'm mainly focused on hardware right now?

go to /adv/ unless you're gonna contribute on free energy, yo

>tfw have neither the time nor patience to apply a naive Bayes filter to Veeky Forums threads.
Anyone consider doing this?
Or maybe unsupervised clustering so that we can see the underlying structure of the shitposting on this board?
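A multinomial naive Bayes shitpost filter is maybe 30 lines. A sketch with Laplace smoothing below; the training posts and labels are made up for illustration:

```python
import math
from collections import Counter

# tiny hand-labelled "corpus" (invented for the example)
train = [
    ("free energy variational inference bayes", "ok"),
    ("successor representation hippocampus dayan", "ok"),
    ("fuck off idiot", "shitpost"),
    ("stupid kid write correctly", "shitpost"),
]

def fit(data):
    counts = {c: Counter() for _, c in data}   # per-class word counts
    priors = Counter(c for _, c in data)       # class frequencies
    for text, c in data:
        counts[c].update(text.split())
    vocab = {w for text, _ in data for w in text.split()}
    return counts, priors, vocab

def predict(text, counts, priors, vocab):
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for c in priors:
        n_c = sum(counts[c].values())
        lp = math.log(priors[c] / total)
        for w in text.split():
            # Laplace (add-one) smoothing over the shared vocabulary
            lp += math.log((counts[c][w] + 1) / (n_c + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

model = fit(train)
print(predict("variational free energy question", *model))   # -> "ok"
```

Scaling it to actual threads is mostly a scraping and labelling problem, not a modelling one.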

Either a well planned troll or just your average Veeky Forums user.
A damn shame either way.

>cs undergrad on his high horse spraying inflammatory language and being combative because he took an AI class

>People don't act the way I want them to on an anime messageboard so I'll call them niggers and express my rage!

>prescriptive linguists
who cares if some old fart says to never split infinitives or that prepositions are bad to end sentences with ?

A masters in data science is the same as all this shit, right? I know there are "masters in machine learning" programs but I feel like that's too specialized.

I can tell that none of you fuckers in this thread know what the fuck I'm talking about, since there has been a lot of shitposting and not a single answer to my question.

ML is not data science, although many data scientists like to say they do ML to prop themselves up a bit.

>LSTMs can't handle that long of a sequence
This depends entirely on the sequence length you are dealing with. You'll have to give hard numbers to get any advice, friendo.

Part of the problem is you never asked the question in the OP; you just asked whether anyone would like to answer some unspecified question.

It is a waste of everyone's time to ask whether you can ask, so that people have to spend time prying to get you to say what you are actually wondering about.

I'm OP. This guy ironically asked a very specific question. I didn't, and still got my answer.

idiot

You fuck off. People here at least try to write correctly.