does this crap blow anyone else's mind? imo bayesian neural networks are the coolest thing since sliced bread.
>Having recovered the latent manifold and assigned it a coordinate system, it becomes trivial to walk from one point to another along the manifold, creatively generating realistic digits all the while
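that "walk from one point to another along the manifold" bit is easy to make concrete. numpy sketch below; the linear `decode` map is just a stand-in for a trained decoder (a real model would be a deep net), everything here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained decoder: any map from a low-dimensional
# latent space to pixel space.  A real model (VAE decoder, GAN
# generator) would be a deep network; here it's a fixed random
# linear map just to make the walk concrete.
W = rng.normal(size=(784, 2))          # latent dim 2 -> 28x28 "image"

def decode(z):
    """Map a latent coordinate z in R^2 to a 784-pixel 'image'."""
    return np.tanh(W @ z)

# Pick two points in latent coordinates and walk between them.
z_start = np.array([-1.0, 0.5])
z_end   = np.array([ 1.0, -0.5])

frames = [decode((1 - t) * z_start + t * z_end)
          for t in np.linspace(0.0, 1.0, 9)]

# Each step along the straight line in latent coordinates decodes
# to a slightly different image; the sequence morphs smoothly.
print(len(frames), frames[0].shape)
```

with a decoder trained on MNIST, those 9 frames are exactly the "creatively generated digits" the quote is talking about.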
no, but im mentally handicapped so i dont rly understand it tbqh
Jacob Rodriguez
What is this?
Isaac Jones
it's the process of using neural networks to uncover the latent manifold on which some collection of data lies. Once you have said manifold, you can do things like hop around on it and sample fake data that closely resembles real data.
for example, here are some generated images of bedrooms from a bedroom manifold.
and here's a paper that improves superresolution techniques by projecting an enhanced image onto a manifold of natural images arxiv.org/abs/1609.04802
Hunter Nelson
This isn't new though, dimension reduction via manifolds has been around for more than a decade. Isomap/SE are well studied and grounded in theory rather than being black boxes, too.
Dylan Rivera
It's never worked this well before to my knowledge.
Adrian Wright
faking images has never been so easy
David Brooks
dont you have this pic in lower resolution m8? I can still see parts of your point...
Ayden Brown
What. How does knowledge that it's a manifold help? Manifold isn't some magic word, lol, it's just a type of topological space.
Jace Reyes
By manifold they just mean some (arguably continuous) subset of pixel space, which is just [math]\mathbb{R}^n[/math] for some large n depending on the width and height of your images. The neural network parameterizes this space by jointly optimizing a generative network and a discriminator network over your dataset, so that crap like random noise is not on it but nice pictures of bedrooms are.
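to see the dimensions involved, here's a numpy sketch. the two-layer G below is a toy with random weights, only meant to show the shape of the map; in a real GAN the trained weights are what actually pin down the manifold:

```python
import numpy as np

# A 64x64 RGB image is just a point in R^n:
n = 64 * 64 * 3
print(n)  # 12288

rng = np.random.default_rng(1)

# Toy 2-layer "generator" G: R^k -> R^n.  Random weights here,
# purely illustrative; training is what makes the image of G land
# on nice bedrooms instead of noise.
k = 100                                   # latent dimension
W1 = rng.normal(size=(256, k)) * 0.1
W2 = rng.normal(size=(n, 256)) * 0.1

def G(z):
    h = np.maximum(0.0, W1 @ z)           # ReLU hidden layer
    return np.tanh(W2 @ h)                # pixels squashed to [-1, 1]

# The image of G over all z in R^k is (at most) a k-dimensional
# surface sitting inside the 12288-dimensional pixel space;
# that surface is the "manifold".
x = G(rng.normal(size=k))
print(x.shape)  # (12288,)
```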
Isaac Powell
>The neural network parameterizes this space via optimization of a generative network and a discriminator network over your dataset so that crap like random noise is not on it
How does it do it?
Luis Sanchez
By having the generative and discriminator networks compete.
Very roughly (because I don't fully understand it): the generator produces some image, and the discriminator is fed some combination of these generated images and real images from the dataset and has to decide which are real and which are fake. Fooling the discriminator propagates a positive reward signal through the generator and a negative one through the discriminator; correctly classifying generated images propagates a positive reward signal through the discriminator and a negative one through the generator.
That's for DCGANs. The other common method (VAE) I'm not sure about, but you can read about it here and other places: openai.com/blog/generative-models/
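the competition I described can be sketched in a 1-D toy (all numpy, hand-derived gradients; the setup and names are illustrative, not from any paper). real data is N(3, 1), the generator G(z) = a*z + b has to map standard-normal noise onto it, and the discriminator D(x) = sigmoid(w*x + c) tries to tell them apart:

```python
import numpy as np

rng = np.random.default_rng(0)

a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(2000):
    z = rng.normal(size=batch)
    x_real = rng.normal(loc=3.0, scale=1.0, size=batch)
    x_fake = a * z + b

    # discriminator update: push D(x_real) -> 1 and D(x_fake) -> 0,
    # i.e. descend -[log D(x_real) + log(1 - D(x_fake))]
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    g_real = d_real - 1.0    # gradient of the loss w.r.t. real logits
    g_fake = d_fake          # gradient of the loss w.r.t. fake logits
    w -= lr * np.mean(g_real * x_real + g_fake * x_fake)
    c -= lr * np.mean(g_real + g_fake)

    # generator update: push D(x_fake) -> 1 (fool the discriminator),
    # i.e. descend -log D(x_fake) through the frozen discriminator
    d_fake = sigmoid(w * (a * z + b) + c)
    g_logit = d_fake - 1.0
    a -= lr * np.mean(g_logit * w * z)
    b -= lr * np.mean(g_logit * w)

# after training, the generator's output mean b should have drifted
# toward the real data's mean of 3
print(round(b, 2))
```

the two opposing gradient signals are exactly the "reward" story above: the same D(x_fake) term appears in both losses with opposite signs.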
Evan Johnson
Looks interesting, how much CS is one supposed to know to implement/understand this?
Daniel Collins
Very little computer science. Lots of probability and statistics, and the ability to write Python programs.
Although I myself don't fully understand wtf it is supposed to mean (I know the usual Bayes formula ofc), and the explanations I've googled are just fucked up.
Bentley Flores
damn, I don't really like probability ):
James Thompson
idk who posted pic related originally but it is possibly the best post on Veeky Forums ever
Lincoln Edwards
This result was shown in a few papers circa 2008, with pictures of hands. In a 2D latent space, moving in a direction corresponded with either the orientation of the hand or how open/closed it was.
Isaac Perez
autoencoders are nothing new famalam
Hunter Anderson
>If it's not new it's not good
Luke Morgan
Vanilla autoencoders don't really work. It's the new Bayesian spin on them that gets the whole enterprise rolling again.
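the "Bayesian spin" (the VAE) is basically: encode each input to a distribution N(mu, sigma^2) instead of a point, sample via the reparameterization trick, and pay a KL penalty for straying from a standard-normal prior. numpy sketch; the mu/log_var numbers below are made-up stand-ins for what a trained encoder would output:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([0.3, -0.7])        # encoder's predicted mean
log_var = np.array([-1.0, -0.5])  # encoder's predicted log-variance

# reparameterization trick: sample z = mu + sigma * eps so the
# sampling step stays differentiable w.r.t. mu and log_var
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# KL(N(mu, sigma^2) || N(0, 1)): the regularizer that keeps the
# latent space smooth enough to walk around on
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
print(z.shape, kl > 0)
```

that KL term is what a vanilla autoencoder lacks: without it the latent space has holes, and sampling a random z gives you garbage instead of a plausible image.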
Benjamin Green
>implying Bayesian autoencoders are any better than convolutional autoencoders