Need help training

Hey guys, been trying to impress a QT3.14 SVM recently. She seems really into this other network (chadwork) who has a huge R-squared (almost 1; he was initialised almost perfectly, and the only time I've seen him training he was learning super quickly).

Is there any way to overcome shitty initialisation? My friend got me on starting SGD but my other friend is making impressive gains with a dropout/hinge loss combo. What do you suggest I do Veeky Forums?
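not OP's friend but for anyone wondering what that dropout/hinge combo actually looks like, here's a rough PyTorch sketch. Layer sizes and the fake data are made up, labels need to be in {-1, +1} for hinge loss:

import torch
import torch.nn as nn

# tiny net with dropout; sizes are arbitrary, just for illustration
net = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # the "dropout" half of the combo
    nn.Linear(32, 1),
)

opt = torch.optim.SGD(net.parameters(), lr=0.01)  # plain SGD, per OP's friend

X = torch.randn(64, 10)                            # fake batch
y = torch.randint(0, 2, (64, 1)).float() * 2 - 1   # labels in {-1, +1}

for step in range(100):
    opt.zero_grad()
    out = net(X)
    # hinge loss: mean of max(0, 1 - y * f(x))
    loss = torch.clamp(1 - y * out, min=0).mean()
    loss.backward()
    opt.step()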

just ask her out you fucking retard

Deep Learning, bro

You really think so? We share many of the same datasets, so I could ask her the next time we bump into each other accessing one at the same time

I wish. As I said before, I was initialised really poorly (shitty topology, poor weights). Chadwork is super deep... SVM QT3.14 will probably just go for him...

Try increasing the iterations and hope that you converge. Unless there's a chance of adding layers.

Don't bother with SGD, use Adam optimization for better results. Also, go easy on the number of hidden layers for easy gains.
Try 5 sets/epoch and slowly increase it over time
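swapping in Adam is literally one line, for reference. Rough sketch with a made-up net and fake data, MSE just for the demo:

import torch
import torch.nn as nn

# keeping the hidden layers modest, per the advice above
net = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))

# Adam adapts per-parameter learning rates, usually less tuning than raw SGD
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

X, y = torch.randn(64, 10), torch.randn(64, 1)  # fake data
for epoch in range(5):  # start with a few epochs, increase over time
    opt.zero_grad()
    loss = ((net(X) - y) ** 2).mean()
    loss.backward()
    opt.step()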

I was just studying neural networks. How the fuck is this possible? Damn coincidences

>Being this low on nodes
You're never gonna make it.

>Baader-Meinhof phenomenon

Yeah I heard a lot about 5x5 (epoch by batch) for beginner gains. Thanks man

Don't listen to this guy, we're all going to make it brah

>Too many layers
>overfitting
>Making it

Ever since I went to a crossval gym, I've been getting great ROC gains, my dude
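the crossval gym routine is a couple of lines in sklearn if any anon wants it. Sketch with synthetic data and an arbitrary classifier, scored by ROC AUC:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)  # fake dataset
clf = LogisticRegression(max_iter=1000)

# 5-fold cross-validation, scored by ROC AUC
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(scores.mean())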

1 2 7 0 0 1

just cheat and use simulated annealing

yeah you won't be deterministic anymore but it's the only way for non-chadworks with shitty initializations to make it
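for reference, "cheating" with simulated annealing is just a random-search loop with a cooling temperature. Minimal sketch on a toy 1-D objective (the objective and constants are made up):

import math
import random

def f(x):
    return x ** 2 + 10 * math.sin(x)  # toy objective with local minima

x = 5.0   # deliberately shitty initialization
T = 1.0   # temperature
while T > 1e-3:
    candidate = x + random.uniform(-1, 1)
    delta = f(candidate) - f(x)
    # always accept downhill moves; accept uphill with prob exp(-delta/T)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = candidate
    T *= 0.99  # cool down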

Just do starting stats for good gains.
Only losers need boosting and bagging to get results.

If you're going to do it, at least do a PCA cycle after, or say goodbye to your gains
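if you do run the bagging stack, the usual way to wire PCA in is an sklearn pipeline. Sketch with synthetic data; note that in practice PCA goes before the estimator, whatever the bros say about cycling it "after":

from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=30, random_state=0)

# PCA cuts the dimensionality, then a bagged ensemble fits on top
model = make_pipeline(
    PCA(n_components=10),
    BaggingClassifier(n_estimators=50, random_state=0),
)
model.fit(X, y)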

I really wanna stay deterministic man. I believe I can make it deterministically. All the big networks (Tesla DNN, etc.) say they are deterministic (and have no reason to lie). If they can make it, I can too

>impress qt
>deterministic
Generative or nothing

I just don't think it's safe. There was a rather big NN recently (was in music, maybe related to pianos or something?) that was non-deterministic and he chucked out a NaN. It was all over for him

Hey guys just a quick update. Ran into Chadwork and QT3.14 SVM. They were talking as always. A few weeks back, QT SVM told me she wasn't into regression but now Chadwork says he loves regression and she suddenly loves it too. She told me it was degrading...

>feels bad man

These cunts saying overtraining is a meme, where are they now?!

man, sometimes there isn't much you can do. I talked with some guy yesterday; it's apparently all about genetic algorithms
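the genetic-algorithm angle on bad initialization is basically: keep a population of weight vectors, keep the fittest, mutate them, repeat. Toy sketch with a made-up fitness function:

import random

def fitness(w):
    return -sum((wi - 3) ** 2 for wi in w)  # toy: best weights are all 3

# population of shitty random inits
pop = [[random.gauss(0, 1) for _ in range(4)] for _ in range(20)]

for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]  # selection: keep the fittest half
    children = [
        [wi + random.gauss(0, 0.1) for wi in random.choice(survivors)]
        for _ in range(10)
    ]  # mutation: jitter copies of survivors
    pop = survivors + children

best = max(pop, key=fitness)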

More training data will help you overcome shitty initialisation. Of course you can run into a local minimum, but just try to change your learning rates every now and then.
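changing the learning rate "every now and then" has a name, a schedule. In PyTorch it's a couple of lines; rough sketch with fake data and made-up constants:

import torch
import torch.nn as nn

net = nn.Linear(10, 1)
opt = torch.optim.SGD(net.parameters(), lr=0.1)
# halve the learning rate every 10 epochs to help settle out of bad regions
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)

X, y = torch.randn(64, 10), torch.randn(64, 1)
for epoch in range(30):
    opt.zero_grad()
    loss = ((net(X) - y) ** 2).mean()
    loss.backward()
    opt.step()
    sched.step()  # advance the schedule once per epoch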

>more nodes = more better
How's it feel being a hamplanet

Fuck off nerd

>not preprocessing on sips
never gonna predict it

Get a load of this brainlet

I can hook you up with some cool shit for specialized problems.

Not entirely cool with most people... but if you're interested and cool with the risks?