Are neural networks just abstract curve fitting tools?
Say you have 100 paintings by an artist, all done in a very similar, realistic style. With enough processing power for a huge neural network to process high-resolution images, could you theoretically train a network to produce images that look like actual paintings and not just approximated mush?
Or is this something that would require a super intelligent AI?
That was what I was talking about. All these images made by that neural network are just approximations which show no understanding of the style. It's just mush.
So I wonder whether this is simply a matter of processing power or of implementation. Will you be able to throw everything at a huge neural network in the far future, or do we need to think of something more clever?
It's the same as asking at what point a biological brain becomes complex enough to understand style.
Jackson Hall
>could you theoretically train a network to produce images that will look like actual paintings
Certainly. Of course, the simple convolutional architecture that's proven so successful for image recognition and related tasks like style transfer wouldn't be sufficient; convnets essentially just recognize "texture", although that's a very powerful thing to recognize.
You'd probably want a recurrent, attentional model to be able to parse and generate the structure of the image, for one thing.
Also, neural networks are currently very data-inefficient; you might have a hard time extracting a truly coherent "style" from a mere 100 examples using neural nets alone. This is an active area of research, but it really is unsolved for now. NNs aren't magic; they're just really powerful if you've got a really big dataset to feed them.
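To make the "texture" point concrete, here's a toy numpy sketch of what a single convolutional filter does. The filter values below are made up by hand (a vertical-edge detector); an actual convnet learns stacks of filters like this from data:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most NN libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny "image": dark left half, bright right half.
img = np.zeros((5, 5))
img[:, 3:] = 1.0

# A hand-made vertical-edge filter; a convnet would *learn* filters like this.
edge_filter = np.array([[-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0]])

response = conv2d(img, edge_filter)
print(response)
```

The response is zero over the flat dark region and largest where the dark-to-bright boundary sits, which is all a single filter ever "sees" of an image.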
Anthony Ramirez
Also, note that while feedforward neural networks are universal function approximators (making them, yes, extremely flexible "curve fitting tools" when combined with backprop and SGD), recurrent networks are universal *algorithm* approximators.
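As a concrete illustration of the feedforward case, here's a minimal numpy sketch (no framework, backprop written out by hand) of a one-hidden-layer net fitting a noisy sine curve; the layer size, learning rate, and step count are arbitrary choices for the toy problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy samples of a sine curve.
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X) + 0.05 * rng.standard_normal(X.shape)

# One hidden layer of tanh units -- the classic universal-approximator setup.
H = 16
W1 = rng.standard_normal((1, H)) * 0.5
b1 = np.zeros(H)
W2 = rng.standard_normal((H, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.1
for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    loss = np.mean(err ** 2)
    # Backprop: gradients of the mean-squared error.
    g_pred = 2 * err / len(X)
    gW2 = h.T @ g_pred
    gb2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T * (1 - h ** 2)
    gW1 = X.T @ g_h
    gb1 = g_h.sum(axis=0)
    # Plain (full-batch) gradient-descent update.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(loss)  # final training MSE, well below the initial ~0.5
```

That's the whole "curve fitting tool": pick a flexible family of functions, then slide its parameters downhill on the error.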
Colton Long
Would you say using a neural network to do curve fitting would be more precise than common methods like kernel regression etc?
Connor Carter
A deep network, yes. A normal network, not really: if you have a lot of data, the result will be almost exactly the same (there's a graph somewhere).
>That was what I was talking about. All these images made by that neural network are just approximations which show no understanding of the style. It's just mush.
I'm a painter and disagree, they're clearly picking up and applying aspects of style. They can achieve effects vimeo.com/175540110 which would formerly have been achieved with a bunch of animators working from a shared style guide youtube.com/watch?v=eke5VnpNcNk
Benjamin Jones
deepart.io if someone doesn't know it
Joseph Walker
Any model that makes predictions is necessarily a "curve fitting tool".
Hunter Campbell
Is there some pre-made neural network out there that I can feed images to and see what it generates just for fun?
Colton Cook
Several; Prisma and DeepArt.io are both implementations of the (very cool-looking) Neural Style algorithm. You can also find DeepDream implementations lying around, and a few Neural Doodle implementations.
It's also becoming increasingly common for researchers to share their code through things like GitXiv, so if you've got some Python chops you could build those yourself. Keras and TensorFlow make neural network stuff *really* simple.
Hunter Collins
Don't wreck us m8... M8 please... no m8... You've reckt us...
Has someone already made a program that implements it? I am sorry but I have no knowledge of neural networks so I couldn't implement it myself.
Kayden Wilson
Bump.
Ryder Jones
Not true. NNets generally crush linear models, even when the neuron count is lower and the data is only of moderate size.
Christian Diaz
>Are neural networks just abstract curve fitting tools?
They're curve fitting tools. I don't know what you mean by 'abstract' though. There's nothing particularly special about the curves which neural nets fit.
>With the processing power to have a huge neural network process high-resolution images could you theoretically train a network to produce images that will look like actual paintings and not just approximated mush?
Processing power isn't the only resource. There's also the fact that neural nets are only trained on so many examples. If you had an infinite amount of processing power and every possible example, you wouldn't need a neural net because you could just look at the data set.
The challenge is to find a good curve based on [math]limited[/math] processing power and [math]limited[/math] examples. The best thing we have for that so far, at least in image processing and natural language processing, is the neural net.
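Here's a toy numpy illustration of the limited-examples half of that, using polynomial fitting as a stand-in curve fitter (the degree and sample sizes are picked arbitrarily): the same flexible model that memorises the noise in 10 samples fits fine once it sees 1000.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_and_test(n_train, degree):
    """Fit a degree-`degree` polynomial to n_train noisy samples of a sine
    curve, then measure squared error against the true curve on fresh points."""
    x_tr = rng.uniform(-1, 1, n_train)
    y_tr = np.sin(3 * x_tr) + 0.1 * rng.standard_normal(n_train)
    coeffs = np.polyfit(x_tr, y_tr, degree)
    x_te = np.linspace(-1, 1, 200)
    return np.mean((np.polyval(coeffs, x_te) - np.sin(3 * x_te)) ** 2)

err_small = fit_and_test(10, 9)    # degree-9 curve through 10 noisy points: overfits
err_big = fit_and_test(1000, 9)    # same curve family, 100x the data: generalises
print(err_small, err_big)
```

Same model class, same fitting procedure; the only thing that changed is how many examples the curve fitter was given.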
>Or is this something that would require a super intelligent AI?
You could imagine an AI or neural net which performs astronomically well on the painting task you mentioned. It doesn't mean it knows the first thing about some other task, say, doing your taxes.
>That is what machine learning is, though deep learning is somewhat more complex.
Deep learning (which is really just a buzzword researchers use to get funding from know-nothing business types) is still a form of function approximation (curve fitting).
Luis Reed
>Will you be able to throw everything at a huge neural network in the far future or do we need to think of something more clever?
Like I've said, in principle you can. In practice, computers aren't gaining performance at the same rate that they used to. However, I remember Geoffrey Hinton saying that modern neural nets have a few orders of magnitude fewer synapses (or was it neurones?) than a biological brain, so it'd be his opinion that computational power is at least one of the things researchers need more of.
I personally think the field still needs more clever tricks to make proper use of the computational power we have and will have in the future.
Tyler Powell
>Deep learning (which is really just a buzzword researchers use to get funding from know-nothing business types) is still a form of function approximation (curve fitting).
Deep networks require convolutional layers and other shit. They were impossible due to problems with convergence until fairly recent breakthroughs.
I can understand why you think it's a buzzword, but you're wrong. They're based on the same principle, but they are not the same in terms of the results or know-how (not that deep nets are way more complex than normal nets or anything, but there are subtle differences that may be important depending on your situation). Thanks to modern video cards, libraries, and frameworks, a person working with them doesn't need to know as much as a researcher, but there are still backwards companies out there that have researchers prototyping shit in Python and then engineers reimplementing it in C++ or Java. That shit is costly and retarded.
Joshua King
It's weird when you see two people who know nothing about the technology talking about it with each other as if they do.
John Miller
>Deep networks require convolutional layers and other shit
"Deep" networks aren't necessarily convolutional. On top of that, convolutional networks aren't necessarily considered deep (although they usually are).
>They were impossible due to problems with convergence until fairly recent breakthroughs.
It really depends on your definition of "deep". (See: a couple paragraphs down.) But LeNet-5 from 1998 was five layers deep. He even used tanh as his activation function instead of ReLU, LReLU, PReLU, ELU or any other -LU you might care to name. And I don't think he even used fancy initialisation tricks like Glorot's, or layer-wise pre-training.
Anyway, how does this contradict what I said?
>I can understand why you think it's a buzzword but you're wrong
There are a couple of reasons. 1) "Deep" has no real meaning. There's never been any consensus on how many layers a network requires before it's considered "deep", so it's not a very descriptive term in that way. It's also not descriptive in the sense that, if you'd never heard the term before but still knew a lot about neural nets, you'd have to think a lot harder about what that person was trying to tell you than if (s)he'd said "many-layered neural net". 2) "Learning" is just a re-branding of what other fields very sensibly, but less appealingly, call "optimisation" or "function fitting".
>They're based on the same principle, but they are not the same in terms of the results or know-how
>there are subtle differences that may be important depending on your situation
Can you give examples?
I was the guy who posted these:
What did I say wrong, friend?
Sebastian Bell
>LeNet-5 from 1998 was five layers deep. He ...
By "he", I mean Yann LeCun.
Benjamin Morales
It's already been done: nextrembrandt.com/ An AI learned how to paint like Rembrandt and created a painting based on his style, and it's quite accurate.
Landon Garcia
It's not done with a neural network, just some algorithms. Sorry if that was misleading, but it's similar to what OP described.