Hey guys, been trying to impress a QT3.14 SVM recently. She seems really into this other network (chadwork) who has a huge R-squared (almost 1, he was initialised almost perfectly and the only time I've seen him training, he learns super quickly).
Is there any way to overcome shitty initialisation? My friend got me on starting SGD but my other friend is making impressive gains with a dropout/hinge function combo. What do you suggest I do Veeky Forums?
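(For reference, a minimal NumPy sketch of the dropout/hinge combo my friend runs. `dropout` and `hinge_loss` are just names I made up; this is inverted dropout and the standard binary hinge loss, nothing from any particular library:)

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    """Inverted dropout: zero units with prob p, rescale survivors by 1/(1-p)."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def hinge_loss(scores, y):
    """Binary hinge loss; y in {-1, +1}, scores are raw margins."""
    return np.maximum(0.0, 1.0 - y * scores).mean()

# thinned activations keep the same expected value
h = dropout(np.ones(1000), p=0.5)
# one confident correct margin, one inside the margin
loss = hinge_loss(np.array([2.0, -0.5]), np.array([1, -1]))
```

The rescaling is what lets you drop the mask entirely at test time.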
Elijah Taylor
just ask her out you fucking retard
Matthew Cox
Deep Learning, bro
Nolan White
You really think so? We share many of the same datasets so I could ask her when we both bump into each other accessing one at the same time
Angel Bailey
I wish. As I said before I was initialised really poorly (shitty topology, poor weights). Chadwork is super deep... Svm qt3.14 will probably just go for him...
Lucas Sanchez
Try to increase the iterations and hope that you converge. Unless there is a chance for adding layers.
Ryder James
Don't bother with sgd, use Adam optimization for better results. Also, go easy on the number of hidden layers for easy gains. Try 5 sets/epoch and slowly increase it over time
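For anyone wondering what swapping SGD for Adam actually looks like under the bar, a minimal NumPy sketch (the `adam_step` name and the toy quadratic are mine, not from anyone in the thread):

```python
import numpy as np

def adam_step(w, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; state carries (m, v, t)."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad        # first-moment running average
    v = b2 * v + (1 - b2) * grad**2     # second-moment running average
    m_hat = m / (1 - b1**t)             # bias correction for zero init
    v_hat = v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, (m, v, t)

# minimise f(w) = w^2 from a deliberately bad initialisation
w = np.array(5.0)
state = (0.0, 0.0, 0)
for _ in range(5000):
    w, state = adam_step(w, 2 * w, state, lr=0.05)
```

The per-parameter scaling by `sqrt(v_hat)` is what makes it forgiving of a shitty initialisation compared to plain SGD.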
Mason Bennett
I was just studying about neural networks. How the fuck is this possible? Damn coincidences
Noah Harris
>Being this low on nodes
You're never gonna make it.
Brayden Williams
>Baader-Meinhof phenomenon
Nicholas Moore
Yeah I heard a lot about 5x5 (epoch by batch) for beginner gains. Thanks man
Jose Nelson
Don't listen to this guy, we're all going to make it brah
Tyler Fisher
>Too many layers
>overfitting
>Making it
Ever since I went to a crossval gym, I've been getting great ROC gains my dude
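For the lurkers: the ROC gains anon is on about are usually measured as AUC, which you can get straight from ranks. A minimal NumPy sketch, ignoring tied scores (`roc_auc` is my own name for it, not a library call):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the rank statistic: P(random positive outscores random negative).

    Assumes no tied scores; labels are 0/1.
    """
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # 1-based ranks, ascending
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    # Mann-Whitney U statistic normalised by the number of pos/neg pairs
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

scores = np.array([0.9, 0.8, 0.3, 0.2])
labels = np.array([1, 1, 0, 0])
auc = roc_auc(scores, labels)   # every positive outranks every negative
```

Score that on held-out folds rather than training data and you get the honest crossval-gym number.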
Aaron Anderson
1 2 7 0 0 1 2 7 0 0 1
Bentley Clark
just cheat and use simulated annealing
yeah you won't be deterministic anymore but it's the only way for non-chadworks with shitty initializations to make it
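A minimal sketch of the cheat, for anyone curious: a 1-D objective and my own made-up `anneal` helper, nothing standard. Downhill moves are always taken; uphill moves get through with probability exp(-Δ/T), which is what hops you out of the basins a shitty initialisation leaves you in:

```python
import math
import random

random.seed(0)

def anneal(f, x0, steps=20000, t0=2.0, cooling=0.999, step=0.5):
    """Simulated annealing on a 1-D objective f (minimisation)."""
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        # accept downhill always; uphill with prob exp(-delta/T)
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling   # cool the temperature every step
    return best_x, best_f

# bumpy objective: parabola plus sine wiggles, many local minima
bumpy = lambda x: x * x + 3 * math.sin(5 * x) + 3
x_best, f_best = anneal(bumpy, x0=8.0)
```

As anon says, the acceptance coin-flips are why you stop being deterministic.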
Gavin Turner
Just do starting stats for good gains. Only losers need boosting and bagging to get results.
If you are going to do it at least do a PCA cycle after or say goodbye to your gains
Cooper Bell
I really wanna stay deterministic man. I believe I can make it while staying deterministic. All the big networks (tesla dnn etc.) say they are deterministic (and have no reason to lie). If they can make it, I can too
Jordan Davis
>impress qt
>deterministic
Generative or nothing
Xavier Anderson
I just don't think it's safe. There was a rather big NN recently (was in music, maybe related to pianos or something?) that was non-deterministic and he chucked out a NaN. It was all over for him
Matthew Ortiz
Hey guys just a quick update. Ran into Chadwork and QT3.14 SVM. They were talking as always. A few weeks back, QT SVM told me she wasn't into regression but now Chadwork says he loves regression and she suddenly loves it too. She told me it was degrading...
>feels bad man
Isaiah Rivera
These cunts saying overtraining is a meme, where are they now?!
Alexander Kelly
man sometimes there isn't much you can do, I talked with some guy yesterday, it's apparently all about genetic algorithms
Ryder Edwards
More training data will help you to overcome shitty initialisation. Of course you can run into a local minimum, but just try to change your learning rates every now and then.
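What "change your learning rates every now and then" can look like, sketched on a convex toy fit just to show the mechanics (`sgd_with_decay` and the step-decay schedule are my own choices, not anon's exact routine):

```python
import numpy as np

rng = np.random.default_rng(42)

def sgd_with_decay(grad_fn, w0, lr0=0.5, decay=0.99, steps=500):
    """Gradient descent whose learning rate is shrunk a little every step."""
    w, lr = w0, lr0
    for _ in range(steps):
        w = w - lr * grad_fn(w)
        lr *= decay          # the "change your learning rates" part
    return w

# toy least-squares fit: true slope is 3.0, start way off at -5.0
X = rng.normal(size=200)
y = 3.0 * X + rng.normal(scale=0.1, size=200)
grad = lambda w: 2 * np.mean((X * w - y) * X)
w_hat = sgd_with_decay(grad, w0=-5.0)
```

Big early steps cover ground from the bad start; the decayed late steps stop you bouncing around the answer.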
Gabriel Bailey
>more nodes = more better
How's it feel being a hamplanet
Chase White
Fuck off nerd
Zachary Reyes
>not preprocessing on sips
never gonna predict it
Jackson Cooper
Get a load of this brainlet
Lucas Fisher
I can hook you up with some cool shit for specialized problems.
Not entirely cool with most people... but if you're interested and cool with the risks?