Does this crap blow anyone else's mind? imo bayesian neural networks are the coolest thing since sliced bread


>Having recovered the latent manifold and assigned it a coordinate system, it becomes trivial to walk from one point to another along the manifold, creatively generating realistic digits all the while

blog.fastforwardlabs.com/post/149329060653/under-the-hood-of-the-variational-autoencoder-in
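The "walk from one point to another along the manifold" bit is just linear interpolation between two latent codes. A minimal sketch (the 2-D latent vectors and the decoder are hypothetical; in a real VAE you'd feed each intermediate z through the trained decoder to render a digit):

```python
import numpy as np

def interpolate(z_start, z_end, steps=8):
    """Linearly interpolate between two latent vectors."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1 - a) * z_start + a * z_end for a in alphas]

# Two made-up points on a 2-D latent manifold (say, codes for a "3" and an "8").
z_a = np.array([-1.5, 0.2])
z_b = np.array([2.0, -0.7])

path = interpolate(z_a, z_b, steps=5)
# Each intermediate z would be decoded into an image; here we just
# print the latent coordinates of the walk.
for z in path:
    print(z.round(2))
```

Because the decoder maps nearby latent points to similar images, every step of the walk decodes to a plausible digit, which is the "creatively generating realistic digits all the while" part.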

no, but im mentally handicapped so i dont rly understand it tbqh

What is this?

it's the process of using neural networks to uncover the latent manifold on which some collection of data lies. once you have said manifold you can do things like hop around on it and sample fake data that closely resembles real data.

for example, here are some generated images of bedrooms from a bedroom manifold.

and here's a paper that improves super-resolution techniques by projecting an enhanced image onto a manifold of natural images:
arxiv.org/abs/1609.04802

This isn't new though; dimension reduction via manifolds has been around for more than a decade. Isomap/SE are well studied and grounded in theory rather than being black boxes, too.

It's never worked this well before to my knowledge.

faking images has never been so easy

dont you have this pic in lower resolution m8 I can still see parts of your point..

What? How does knowledge that it's a manifold help? Manifold isn't some magic word, lol, it's just a type of topological space.

By manifold they just mean some arguably continuous subset of pixel space which is just [math]\mathbb{R}^n[/math] for some large n depending on the width and height of your images. The neural network parameterizes this space via optimization of a generative network and a discriminator network over your dataset so that crap like random noise is not on it but nice pictures of bedrooms are.
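To make the pixel-space framing concrete, here's a tiny sketch (the image size is made up for illustration): an h-by-w grayscale image is just a point in [math]\mathbb{R}^n[/math] with n = h*w, and random noise lives in the same space as real photos.

```python
import numpy as np

# A 64x64 grayscale image is a point in R^n with n = 64*64 = 4096.
h, w = 64, 64
image = np.random.rand(h, w)   # stand-in for a real bedroom photo
point = image.reshape(-1)      # flatten into a single vector in R^n

print(point.shape)

# Random noise and real bedroom photos both live in R^4096; the learned
# manifold is the much lower-dimensional subset where the photos sit.
```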

>The neural network parameterizes this space via optimization of a generative network and a discriminator network over your dataset so that crap like random noise is not on it
How does it do it?

By having the generative and discriminator networks compete.

Very roughly (because I don't fully understand it): the generator produces some image, and the discriminator is fed a mix of these generated images and real images from the dataset and has to decide which are real and which are fake. Fooling the discriminator propagates a positive reward signal through the generator and a negative one through the discriminator; correctly classifying generated images does the reverse, rewarding the discriminator and penalizing the generator.
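The competition can be sketched as a toy numpy example. This is not the actual DCGAN setup: the "images" here are single numbers drawn from N(4, 1), and both networks are shrunk to one linear layer each so the gradients can be written by hand. The alternating update structure is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Real data: samples from N(4, 1). The generator starts near N(0, 1)
# and should learn to shift toward the real distribution.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0        # generator parameters
w, c = 0.0, 0.0        # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    # --- discriminator update: raise D on real, lower D on fake ---
    x = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    w += lr * (np.mean((1 - d_real) * x) + np.mean(-d_fake * g))
    c += lr * (np.mean(1 - d_real) + np.mean(-d_fake))

    # --- generator update: fool the (frozen) discriminator ---
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b
    d_fake = sigmoid(w * g + c)
    dg = (1 - d_fake) * w          # non-saturating loss: maximize log D(G(z))
    a += lr * np.mean(dg * z)
    b += lr * np.mean(dg)

gen_mean = b   # E[a*z + b] with z ~ N(0, 1)
print(f"generator mean after training: {gen_mean:.2f} (real data mean 4.0)")
```

The fake samples drift toward the real distribution precisely because fooling the discriminator is the only way for the generator to get rewarded.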

That's for DCGANs. The other common method (the VAE) I'm not sure about, but you can read about it here and in other places:
openai.com/blog/generative-models/
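For what it's worth, the core trick in the VAE is that the encoder outputs a mean and a log-variance per latent dimension instead of a single point, and sampling is rewritten as z = mu + sigma * eps so gradients can flow through mu and sigma. A minimal sketch of that reparameterization plus the KL term that keeps the latent space smooth (numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps; differentiable in mu and log_var."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, 1) ) -- the Bayesian regularizer."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

mu = np.array([0.0, 1.0])
log_var = np.array([0.0, -1.0])
z = reparameterize(mu, log_var)
print(z, kl_to_standard_normal(mu, log_var))
```

The KL term is zero exactly when the encoder's distribution matches the standard-normal prior, which is what pulls the codes into a well-behaved region you can later sample from.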

Looks interesting, how much CS is one supposed to know to implement/understand this?

Very little computer science. Lots of probability and statistics and the ability to write Python programs.

Yeah, thank you. I think it's posterior probability being used here:
en.wikipedia.org/wiki/Posterior_probability

Although I myself don't fully understand wtf it is supposed to mean (I know the usual Bayes formula ofc) and the explanations I've googled are just fucked up.
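The posterior is just the usual Bayes formula read as "prior belief updated by evidence": P(H | E) = P(E | H) P(H) / P(E). A worked example with made-up illustrative numbers (a test that's 95% sensitive with a 10% false-positive rate, for a condition with 1% prevalence):

```python
# Posterior = prior updated by evidence via Bayes' rule:
#   P(H | E) = P(E | H) * P(H) / P(E)
prior = 0.01                  # P(H): prevalence of the condition
p_pos_given_h = 0.95          # P(E | H): sensitivity
p_pos_given_not_h = 0.10      # P(E | not H): false-positive rate

# Total probability of a positive result, P(E), by the law of
# total probability over H and not-H.
p_pos = p_pos_given_h * prior + p_pos_given_not_h * (1 - prior)

posterior = p_pos_given_h * prior / p_pos
print(f"P(condition | positive test) = {posterior:.3f}")
```

The point: the prior P(H) is what you believed before seeing the evidence, and the posterior P(H | E) is the same belief after; in generative models the "evidence" is the data and the posterior is over the latent code.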

damn, I don't really like probability ):

idk who posted pic related originally but it is possibly the best post on Veeky Forums ever

This result was shown in a few papers circa '08, with pictures of hands. In a 2D latent space, moving in a direction corresponded to either the orientation of the hand or how open/closed it was.

autoencoders are nothing new famalam

>If it's not new it's not good

Vanilla autoencoders don't really work. It's the new Bayesian spin on them that gets the whole enterprise rolling again.

>implying Bayesian autoencoders are any better than convolutional autoencoders