Neural Nets, post interesting research

youtu.be/5aogzAUPilE

>We observed that the professional lip reader is able to
correctly decipher less than one-quarter of the spoken words
(Table 5). This is consistent with previous studies on the
accuracy of human lip reading [26]. In contrast, the WAS
model (lips only) is able to decipher half of the spoken
words. Thus, this is significantly better than professional
lip readers can achieve.

arxiv.org/abs/1611.05358

DeepMind WaveNet:
deepmind.com/blog/wavenet-generative-model-raw-audio/

youtu.be/CqFIVCD1WWo

youtu.be/VG68SKoG7vE

tesla.com/videos/autopilot-self-driving-hardware-neighborhood-long?redirect=no

One month of training, and there are plenty of issues with driving and false labelling.

All of this stuff seems to be a meme.

You mean, like, pointless?

No, like any day now the bubble will burst.

I've heard more than one ML professor express concern that future PhD students are just going to be throwing together new network architectures for four years until they get one that does a hair better on some dataset and can publish.

Most breakthroughs might be hardware based from now on

That is the current trend. Honestly though, it still speaks to the power of these things. There are so many potential applications where literally all you need to do is copy-paste a convolutional NN, and bam, you have a state-of-the-art model.

If you have a large, labeled set of data, a neural network can pretty much learn it. These things are incredibly powerful.

The obvious flaw here is the requirement of large, labeled data sets. Most tasks will not have this available.

Semi supervised learning, or learning a large set of unlabeled data from a very small set of labels will be the future.
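
A hedged sketch of one flavour of semi-supervised learning, pseudo-labelling (all names here are made up for illustration): fit a weak model on the few labels you have, let it label the unlabeled pool, keep only the confident guesses, and refit. A nearest-centroid classifier stands in for a neural net to keep it self-contained.

```python
import numpy as np

def fit_centroids(X, y):
    # one centroid per class
    return np.array([X[y == c].mean(axis=0) for c in np.unique(y)])

def predict(centroids, X):
    # label = nearest centroid; distance doubles as (inverse) confidence
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1), d.min(axis=1)

def pseudo_label(X_lab, y_lab, X_unlab, conf_radius=1.0):
    cent = fit_centroids(X_lab, y_lab)
    y_hat, dist = predict(cent, X_unlab)
    keep = dist < conf_radius                    # keep "confident" pseudo-labels only
    X_new = np.vstack([X_lab, X_unlab[keep]])
    y_new = np.concatenate([y_lab, y_hat[keep]])
    return fit_centroids(X_new, y_new)           # refit on labels + pseudo-labels
```

Real semi-supervised nets do the same dance with softmax confidences instead of distances, but the loop is the same.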

I was going to make a joke about merging two of the biggest memes out there, then I Googled it and it turns out to be a very real thing.

en.wikipedia.org/wiki/Quantum_machine_learning

still waiting on memristors here, doubt I'll ever see "quantum computing" in my life time.

So just barely as effective as the closed caption button on your remote.

>using LSTM
kek

enjoy your state of the art for a couple of days, which is the same amount of time it's going to take to train a new model.

What's wrong with LSTM?

Neural nets, deep learning, compressive sensing are the biggest memes in CS/EE/applied math right now.

Jelly hater mad that he didn't get on the ML train early.

How long until neural networks replace """academics"""

Anyone remember that paper that mapped pictures of rooms onto a manifold, and could generate new images from points on the manifold?

Someone posted it here like a month ago, think it was from UCL?

Pls respond

try to remember key words in the post and search them at this archive, friendo: boards.fireden.net/sci/

Here you go
arxiv.org/abs/1611.05013v1

gaussian processes > neural networks

fite me

It's a disgusting memory hog
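
If the memory-hog complaint is aimed at LSTMs, the back-of-the-envelope is simple: an LSTM layer keeps four gate weight matrices (input, forget, cell, output) where a vanilla RNN keeps one, so roughly 4x the parameters, on top of the activations cached at every timestep for backprop through time.

```python
def rnn_params(n_in, n_hid):
    return (n_in + n_hid) * n_hid + n_hid    # one weight matrix + bias

def lstm_params(n_in, n_hid):
    return 4 * rnn_params(n_in, n_hid)       # four gates, same shape each

print(lstm_params(512, 512))  # → 2099200
```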

>50% mistakes
>Didn't even show ONE in the video
what a bunch of dumb cunts

Very interesting results, pic from:
phillipi.github.io/pix2pix/

I've been looking for this bit of university research i saw a few years ago.

They made a small 4 legged robot and instead of programming it to move they used a genetic algorithm so the robot found the best way to move itself.

I really would love to find that research again.
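
The skeleton of a genetic algorithm like the gait one is tiny (this is a generic sketch, not that group's code): keep a population of parameter vectors, score each with a fitness function (distance walked for the robot; a toy quadratic here), keep the elite, and mutate them into the next generation.

```python
import random

def evolve(fitness, n_params, pop=30, gens=80, sigma=0.3, seed=0):
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(n_params)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop // 5]                  # survivors
        population = elite + [
            [g + rng.gauss(0, sigma) for g in rng.choice(elite)]
            for _ in range(pop - len(elite))            # mutated offspring
        ]
    return max(population, key=fitness)

# toy "gait": fitness peaks when every parameter equals 0.5
best = evolve(lambda p: -sum((x - 0.5) ** 2 for x in p), n_params=4)
```

No gradients anywhere, which is the appeal when your fitness function is "run the physical robot and measure how far it got."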

Fuck, even Veeky Forums has a Deep Shit general now.
Half a year ago it didn't, and Deep Shit was shat on.
This proves the field is saturated as fuck. I learned machine learning before VGG was a thing, and now I'm really mad when people keep asking me "why don't you replace X with a deep net?"
I don't want you fags to jump into my field. Get the fuck out.

>tfw not even in the field, but like ML
>post on Veeky Forums to learn about new results
>OMG GET OUT OF MY SUPER SECRET CLUB

There's a lot of stuff like that. You mean this? Don't know if it's specifically GA.
youtube.com/watch?v=iNL5-0_T1D0

Here's one in simulation for more than just quadruped that I really liked.
vimeo.com/79098420

No deep learning here though. Check out Sergey Levine's work if you want an idea of where ML/robotics hybrid research is at right now.

>before VGG was a thing

as if that is anything to brag about... even pre-AlexNet is pleb tier.

thrice nead

>youtube.com/watch?v=iNL5-0_T1D0
This is the one I remember, but the source code or other research is hard to find. It is using a genetic algorithm though.

Consider purchasing a Veeky Forums pass.

This habit of labeling things you don't understand a "meme" needs to stop. It WORKS, you mong. This isn't the EmDrive, where it's almost certainly a measurement error; it's being used right now.

who paid you to post that?

Will there be another AI winter? I have a feeling it will crash soon.

If you mean dumbasses funding research in AI when they don't even understand the field and getting disappointed when there's no super awesome miracle happening 5 years later, then probably yes.
But the winter never actually happened in the sense that the field kept progressing and getting better, whether funding was abundant or not.
ML or, more generally, AI researchers will keep on researching long after everyone here is dead.
And GAI will happen eventually.

You got a better alternative?

Pleb-tier ML practitioners will use vanilla deep nets to do every easy task in ML.
Oldschool ML practitioners are triggered by pleb-tier ML practitioners whose methods outperform the former's meticulously hand-crafted feature algorithms.
Smart ML practitioners, on the other hand, use their prior knowledge of raw ML and the capabilities of deep nets to do some really creative stuff.

Who is the best?
Andrew Ng, Yoshua Bengio, Yann Lecun, Geoffrey Hinton, Andrew Zisserman?

Hinton by far. People will still be writing about him for centuries after he dies. He's within the top 500 most-cited academics of all time.

Other researchers, are you even trying?

>That dog at 6:10
Luckily it got out of the way, because the car didn't even categorize it as an object

Well that was definitely the most uninformed comment I've read in a while.

How do I get a neural network to operate on numbers as inputs? It seems like these things only take in discrete 1's or 0's. Could someone point me to a paper where they do this?
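
For what it's worth, nets don't want bits: a dense layer is just y = activation(W @ x + b) over ordinary real numbers. A toy sketch (weights made up for illustration):

```python
import numpy as np

def dense(x, W, b):
    return np.maximum(0.0, W @ x + b)   # ReLU activation

x = np.array([3.7, -120.0, 0.005])      # arbitrary real-valued features
W = np.array([[0.1, -0.01, 2.0],
              [0.0,  0.0,  1.0]])
b = np.array([0.0, 0.0])
h = dense(x, W, b)                      # ≈ [1.58, 0.005]
```

In practice you standardize inputs (zero mean, unit variance) so no feature dominates; one-hot 0/1 encoding is only for genuinely categorical data.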

Anyone else scared for the future?

Elon's excuse is that it has been trained for only a month.

It seems to get lost when it turns.

Are you retarded?

How so?

arxiv.org/abs/1611.07715
>This work presents a fast, accurate, general, and end-to-end
>framework for video recognition. It is the first of
>its kind.
Abstract
>Deep convolutional neural networks have achieved
>great success on image recognition tasks. Yet, it is nontrivial
>to transfer the state-of-the-art image recognition networks
>to videos as per-frame evaluation is too slow and unaffordable.
>We present deep feature flow, a fast and accurate
>framework for video recognition. It runs the expensive
>convolutional sub-network only on sparse key frames and
>propagates their deep feature maps to other frames via a
>flow field. It achieves significant speedup as flow computation
>is relatively fast. The end-to-end training of the whole
>architecture significantly boosts the recognition accuracy.
>Deep feature flow is flexible and general. It is validated on
>two recent large scale video datasets. It makes a large step
>towards practical video recognition.
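
The core trick of that abstract in toy form (a caricature, not the paper's code): run the expensive sub-network only on key frames and reuse the cached features for in-between frames. The real model warps them along an estimated optical-flow field instead of copying.

```python
def video_features(frames, expensive_net, key_every=10):
    feats, calls, cached = [], 0, None
    for i, frame in enumerate(frames):
        if i % key_every == 0:
            cached = expensive_net(frame)   # costly conv sub-network, key frames only
            calls += 1
        feats.append(cached)                # cheap propagation step
    return feats, calls
```

With key_every=10 you pay for the big network on ~10% of frames, which is where the claimed speedup comes from.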

The network is spooked by the bicyclist at 1:32 and halts.
It misidentifies the joggers as being on the road at 3:55 and halts.
4:25 it halts in the middle of a turn as it "gets lost" (I don't know how else to explain it).
6:48 it takes the turn, gets lost again, and stops in the middle of the turn.

forgot the video
youtu.be/J0rMHE6ehGw

Interesting, do you think these are resolvable issues, or could it be that self-driving cars are being pushed out a bit prematurely?

I think Elon did a great disservice to self-driving cars with the release of "auto-pilot", mainly due to the several crashes that resulted. So, it could be that we might see the same thing happen with this if Elon releases too soon.

I don't think it is an impossible job for networks, but if Elon releases it, as it is, then it might be highly irresponsible.

No, are you?

Those White Nike's in the middle look sick af hombre

That reminds me, what did Andrew Ng do exactly?
He is the author of the popular ML course, but I don't think his papers have as big an impact as the others:
-Yoshua Bengio: GAN
-Yann Lecun: CNN
-Geoffrey Hinton: Backprop, dropout
-Andrew Zisserman: VGG net
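
Dropout, the Hinton-lab trick named above, fits in two lines (shown here in the common "inverted dropout" convention: rescale at train time so test time needs no change):

```python
import numpy as np

def dropout(h, p, rng):
    mask = rng.random(h.shape) >= p     # drop each unit with probability p
    return h * mask / (1.0 - p)         # rescale survivors to keep E[h] fixed

rng = np.random.default_rng(0)
h_drop = dropout(np.ones(1000), p=0.5, rng=rng)
```

Randomly killing units forces the net not to rely on any single feature, which is why such a cheap trick regularizes so well.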

How does one get into neural networks? I have 2-3 ideas and am wondering if ANNs are the way, but have little knowledge of comp shit

This neural net by google guesses what simple doodles are supposed to be.
quickdraw.withgoogle.com/

I've been drawing penises for the last week in the hopes that it'll start to think simple objects like fingers are penises.

I'm not in any computer science field, and I've wondered if anyone has actually tried to make a general AI yet.
We're making big steps in specific areas, and every few months you'll read "AI can now do X better than humans," which is great but doesn't really bring you closer to GAI.
If you just tried and made a really shitty AI which at least has a 3-year-old's reasoning, you could improve on that.

Holy shit, that's extremely impressive. I can't believe shit like this is doable, real time, right now. Fuck fuck fuck.

This was fun, but it felt like it was training me rather than me training it.

deeplearningbook.org/

You can start here, or start from the TensorFlow tutorials.
Just keep in mind that your main problems will be training time / organised data.
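
Before picking a framework, it's worth seeing the whole training loop in a dozen lines: model, loss gradient, parameter update. Linear regression by gradient descent is the "hello world" version of what the book's numerical chapters build on.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=200)   # noisy targets

w = np.zeros(3)
lr = 0.1
for _ in range(500):
    err = X @ w - y              # forward pass, residuals
    grad = X.T @ err / len(y)    # gradient of mean squared error
    w -= lr * grad               # the update step every framework automates
```

A deep net swaps the model and computes `grad` by backprop, but the loop is the same; frameworks mostly save you from writing the gradient by hand.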

journal.frontiersin.org/article/10.3389/fnsys.2016.00095/full

>The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2^i − 1), producing specific-to-general cell-assembly architecture capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based permutation logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social information. ...

>This simple mathematical logic can account for brain computation across the entire evolutionary spectrum, ranging from the simplest neural networks to the most complex.
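
For what the quoted formula is worth, N = 2^i − 1 is just the number of nonempty subsets of i input types: one candidate cell assembly per combination of active inputs.

```python
from itertools import combinations

def n_assemblies(i):
    # enumerate every nonempty subset of i distinct inputs
    return sum(1 for r in range(1, i + 1)
                 for _ in combinations(range(i), r))

print([n_assemblies(i) for i in range(1, 6)])  # → [1, 3, 7, 15, 31]
```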

Complete garbage.

>1. Pick a random interpretation of the real underlying neuron model
>2. Pretend it's real
>3. Prove that it's real if it's real
>"publish"

The only thing they did is regression on an activation histogram.

> The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe

> Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics.

technologyreview.com/s/602344/the-extraordinary-link-between-deep-neural-networks-and-the-nature-of-the-universe/

>Nobody understands why deep neural networks are so good at solving complex problems
We do actually.

>Now physicists say the secret is buried in the laws of physics.
The smell of my shit is also a mystery buried in the laws of physics, but I don't see an article about that.

The article they are talking about:

arxiv.org/abs/1608.08225

> We show how the success of deep learning depends not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can be approximated through "cheap learning" with exponentially fewer parameters than generic ones, because they have simplifying properties tracing back to the laws of physics. The exceptional simplicity of physics-based functions hinges on properties such as symmetry, locality, compositionality and polynomial log-probability, and we explore how these properties translate into exceptionally simple neural networks approximating both natural phenomena such as images and abstract representations thereof such as drawings. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to renormalization group procedures. We prove various "no-flattening theorems" showing when such efficient deep networks cannot be accurately approximated by shallow ones without efficiency loss: flattening even linear functions can be costly, and flattening polynomials is exponentially expensive; we use group theoretic techniques to show that n variables cannot be multiplied using fewer than 2^n neurons in a single hidden layer.

glorified line follower
most of the motion compensation stuff was done by the MPEG guys 20 years ago
the jerk during braking is too big
the situation at 3:50 with the two people jogging further strengthens my "line follower" view

I don't think anyone wants KITT from Knight Rider; following lines is pretty much all it needs to do.

>arxiv.org/abs/1608.08225
I tried reading this paper a few weeks ago but it's too advanced for me and I'd have to spend weeks on trying to understand it.

Maybe it's because I'm uninformed but somehow nothing concerning neural nets has REALLY impressed me so far.

Like, there was a neural network that generated images of hotel rooms. But they were just thumbnails, and on closer inspection a lot of it was just visual mush.

It's like a neural network may achieve 99% accuracy on paper, but it lacks understanding of what it does. I'm not sure if this will be solved in the future with better hardware.

Listening to a neural net is like listening to Zizek, words are coming out, but nothing seems to make sense.

Atari DQN didn't impress you?
Alpha Go didn't impress you?
Self-driving cars don't impress you?

>implying it wouldn't sell like hotcakes

>following lines is pretty much all it needs to do.
however, that doesn't justify the jerkiness or driving like it's senile, and for line following there are lots of roads without markings,
or ones that are overlapped, washed out, or vanish for several miles; two-way roads without visible lanes and a 60 mph limit;
2+1 roads, etc.

oh, wouldn't it be cool if it could aim for an average speed depending on local speed laws?

I thought it was going at average speed or thereabouts... the rest of your criticisms are actually pretty important.

Is the Deep Learning book by Ian Goodfellow, Yoshua Bengio, and Aaron Courville worth a read?
Or is it memegarbage?