>no training material
>no supervision
>chess (and shogi) mastered in just 4 hours
>defeats best chess program (Stockfish, no opening books or endgame tables)
>defeats best shogi program (Elmo) after only 2 hours of training
HAHAHHAHAAHHAHHAHAHA, chess programs are retarded compared to neural nets. Human programmers ABSOLUTELY BTFO.
Just a fad like VR. If anyone here is seriously thinking we're gonna get anything even close to AGI in the next century this board is retarded.
Jonathan Lopez
I was planning to build a NN that would play battleships to put on my resume. You think DeepMind would hire me?
Robert Davis
Who is talking about AGI other than you? We are talking about work that would traditionally require programmers, but is now possible to do without them.
You people can't even define 'general' in AGI, so why do you feel so smug about typing dumb shit?
Charles James
Get a PhD from universities specialising in NNs and you'll be guaranteed a job there.
Sebastian Peterson
because high level chess is just calculation, whereas humans rely on pattern recognition. humans have very limited working memory, whereas computers have nearly infinite amounts compared to humans
Noah Scott
>just get a PhD
>hurr durr brainlet
Jason Taylor
Also, they don't just make board game AI. If that was the case Elon wouldn't be bitching about it constantly.
Matthew Hill
Wrong! AlphaZero only played ~44 million games compared to nearly a billion games played by Stockfish, yet Stockfish was defeated.
Carter Russell
Then don't get a fucking PhD, you cock munching cunt. If you are asking dumb questions, like how do I get opportunities at Deepmind, then you will obviously get the most straightforward possible way; Also, it's already clear that you are not gifted in the slightest.
Anthony Evans
>this board is retarded
You're starting to get a clue.
Hudson Wilson
This is my only post ITT. But I was asking if that would suffice in lieu of a PhD.
Nathan Robinson
The other relatively well known way is to make some impact at Kaggle, good luck.
Adrian Bailey
>Kaggle Nice. I didn't know about this.
Samuel Morgan
>human programmers btfo
>is made by humans
Are you even trying?
Cameron Fisher
Scientifically speaking, why don't they train the AlphaZero AI on programming and see what it comes up with? I can already imagine the headlines
"AlphaZero AI builds AI that beats itself in chess after only teaching it for 3 hours"
Brandon Foster
You don't get it. It can learn many tasks, not just chess. In fact, it was not designed with chess specifically in mind, which is why it could master shogi as well. The programmers who worked on Stockfish were well versed in chess, and many of them are grandmasters. On another note, Google has itself acknowledged that NNs are being employed to design other nets (AutoML). It's all a matter of time.
Mason Moore
Funny how programming paradigms come full circle given enough time.
I'm with the actual AI professor they quoted in the article. Call me when they master continuous state and action spaces with noisy channels.
Isaiah Rogers
the state of a go board has a fairly natural 2D encoding: -1 for black, 1 for white, and 0 for an empty space. i wonder how they do it for chess. perhaps a 3D structure.
it's actually much easier for chess since there's no "pass" move to deal with.
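for the curious, here's a minimal pure-Python sketch of the 2D go encoding described above, plus one guess at a per-piece-type "plane" layout for chess. the chess part is just my own toy layout, not DeepMind's actual input format.

```python
# Minimal sketch of the encodings discussed above. The chess "3D" layout
# (one 8x8 binary plane per piece type per colour) is a guess, not
# DeepMind's published input format.

def encode_go(stones):
    """stones: dict mapping (row, col) -> 'b' or 'w' on a 19x19 board."""
    value = {'b': -1, 'w': 1}
    return [[value.get(stones.get((r, c)), 0) for c in range(19)]
            for r in range(19)]

PIECE_TYPES = ['P', 'N', 'B', 'R', 'Q', 'K']  # pawn .. king

def encode_chess(pieces):
    """pieces: dict mapping (row, col) -> (colour, type), colour in {'w','b'}.
    Returns 12 binary 8x8 planes: 6 piece types x 2 colours."""
    planes = []
    for colour in ('w', 'b'):
        for ptype in PIECE_TYPES:
            planes.append([[1 if pieces.get((r, c)) == (colour, ptype) else 0
                            for c in range(8)] for r in range(8)])
    return planes

board = encode_go({(3, 3): 'b', (15, 15): 'w'})
```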
Nathaniel Barnes
>why don't they train the AlphaZero AI in programming
No operation-able success condition.
Jack Ross
once you very precisely specify the inputs and outputs you want, the program is virtually already done.
Landon Wright
>No operation-able success condition.
What? It is obvious. The success condition is having it lose against its own AI.
It would build an AI and then play against it. If it wins against its AI then it failed and has to build a new AI. Repeat until it builds Skynet.
Logan Ward
continuous state space is still a challenge
Neural nets tend to be very noise resistant however, so I don't think that will be a problem really.
Brandon Wright
>No operation-able success condition.
I don't see why you can't use something like "Write a program that, when executed, plays Go". Objective function can simply be Go winrate, plus programming-related parameters if you want (e.g. time+memory needed to compile/execute).
Though I suspect it'll probably end up building something like a million-state Turing machine.
Ryan Williams
>It would build an AI and then play against it. Play _what_ against it? (Although I doubt you were being serious with that post.)
Liam Robinson
Chess, Go, whatever.
James Lewis
Then you will only build a NN that learns how to build a program for that one task, not one that can program in general. Why would you even do that when you can have the NN learn to solve the task directly?
Ian Parker
>Neural networks
>self modifying code
You have no idea what you're talking about
Logan Peterson
>Neural nets tend to be very noise resistant however
Actually, noise seems to be essential to the learning algorithm. Deep neural networks seem to work by compressing the available information in a lossy way. Here, watch this: youtube.com/watch?v=bLqJHjXihK8
Go was easier to model, since you can simply place stones on empty points of a Go board, and it has eight-way symmetry.
Chess positions are asymmetric due to pawns and castling, so training was eight times harder. Plus it needs more layers for the multiple piece types, promotions, castling, and side rules like 3-fold repetition and 50-move count.
Shogi was even harder to model due to more piece types and prisoners/drops.
Joseph Green
even more astonishing, it beat Stockfish using Monte Carlo Tree Search, running a thousand times slower than Stockfish's 64-thread optimized alpha-beta search. Successful new search methods are unheard of in chess programming. The NN eval must be extremely smart to make up for that much speed loss in the search!
I hope DeepMind doesn't dismantle AlphaZero right away, and enters it in the World Computer Chess Championship in Leiden this summer. Then it can show its stuff against current champion Komodo and the cluster monster Jonny.
Hudson Murphy
>mastered in just 4 hours
yeah, but using hundreds of Google's proprietary TPUs for training, making AlphaZero a very expensive supercomputer. It even used four TPUs for playing, far more power than was given to Stockfish. Small hash tables and lack of opening and endgame DBs really stacked the deck against Stockfish.
Kayden Hughes
stockfish doesn't "learn" m8, it could play 2 games or a trillion, makes no difference
Alexander Ortiz
The current champion is Houdini
Aaron Perez
stockfish sounds like a right retard then compared to alphazero
Jason Lopez
This. It was a PR stunt. Fuck google.
Lincoln Scott
That's the TCEC (which literally just finished), which is only for Windows UCI engines all playing on a server.
I'm talking about the WCCC, the public event which has been going since the 1970s, where the programmers still face each other over a physical chess board.
Kayden Nelson
IIRC in the WCCC engines don't get the same hardware, it's basically pay-to-win.
Lincoln Johnson
you sound like an idiot autist who's good at math but not so good at thinking or talking. maybe ponder what you're going to type next time, yeah?
>will DeepMind hire me if i'm not willing to work hard?
Nathan Turner
AlphaGo literally built AlphaZero
Henry Morris
this is bullshit
Liam Moore
>Just a fad like VR. If anyone here is seriously thinking we're gonna get anything even close to AGI in the next century this board is retarded.
90% of experts disagree with you
Asher Scott
>since you can simply place stones on empty points of a Go board
you can capture stones in go, so filled space becomes empty again
the state in Go has 19x19x(2x8+1)=6137 inputs
just add an extra plane per chess piece type to get
8x8x(2x(8+8+8+8+8+8)+1)=6208 inputs
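quick sanity check of the arithmetic in my own scheme above (to be clear, these are my estimates, not DeepMind's published input sizes):

```python
# Sanity-check the input counts from the post above (my own scheme,
# not necessarily DeepMind's actual encoding).
go_inputs = 19 * 19 * (2 * 8 + 1)        # 8 history planes per colour + 1
chess_inputs = 8 * 8 * (2 * 6 * 8 + 1)   # 6 piece types, 8 history planes each
print(go_inputs, chess_inputs)
```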
Jacob Bailey
>The NN eval must be extremely smart to make up for that much speed loss in the search!
Because it isn't actually doing a search in the full space, but a compressed one.
Henry Williams
But it didn't. Why are you making shit up?
Aaron Cruz
fucking singularity is coming closer every day
Dominic Watson
pls no
Robert King
What impresses me as a patzer chess enthusiast is the human-like way AlphaZero plays. What I assume we shall now start calling 'traditional' or 'vintage' chess engines are extremely materialistic and positional, and games between them are artistically dull and boring. AlphaZero plays what humans would call 'speculative' sacrifices, its games against Stockfish are a joy to watch.
William Edwards
Or a cringefest on Stockfish's behalf, it kept misplacing its pieces. Several games where it buried its bishop behind locked pawns, and one game where the queen was trapped in the corner. The Stockfish unforced sac of a knight for two pawns was also pretty bad.
I agree that I like the way AlphaZero played. I hope we get to see more of the games someday, or that they enter it in public tournaments like the WCCC.
Jace Allen
that's what it looks like when you play someone that's much better than you. it just looks like you don't know what you're doing at all.
Brody Cruz
Ready to give up your humanness, m**ties ?
Josiah Cox
Reminder that Stockfish played with a heavy computing handicap.
James Butler
yes and yes. A0 basically just keeps putting its pieces on optimal squares, then sac'ing the ones that aren't required for the mating attack. I'm almost prepared to say that it's 1960 Tal showing up Botvinnik. All hail a new romantic era in chess!
Noah Barnes
both played with no (human!) opening or egtb books, stockfish had 70Mnps, A0 had 80Knps. A0 figured this out for itself. What was the handicap? Hurr Durr I need to consult my ECO to play the first 20 moves of this game? p.s. that's 70,000,000 positions evaluated per second vs. 80,000.
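to put those numbers in perspective, the speed gap works out to nearly three orders of magnitude:

```python
# Speed gap cited above: Stockfish ~70M nodes/s vs AlphaZero ~80K nodes/s.
stockfish_nps = 70_000_000
alphazero_nps = 80_000
ratio = stockfish_nps / alphazero_nps
print(f"Stockfish searched ~{ratio:.0f}x more positions per second")
```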
Jace Edwards
I somehow feel everyone here has watched jerry's video
Ethan Taylor
>jerry's video
I haven't. I don't even know what that is.
Jaxson Moore
If he's some sort of pop-sci youtuber, your feeling is wrong, as I don't watch that sort of shit.
Grayson Howard
Also, if you actually read the article, you'll know that AlphaZero only searched 80,000 positions each second compared to that other program's 70 million positions per second. Clearly computation is not the only thing that matters
Colton Hall
>only ~44 million games
This is when you realize how stupid current AI is. Humans can get good at chess by playing a few hundred or thousand games. If AlphaZero only got to play that many it would still be at retard-level. Getting to speed up time doesn't make you smarter.
Jackson Wood
It does. Humans with faster brains also tend to be smarter.
Oliver Morgan
This
Mason Robinson
The point is that this doesn't get us much closer to AGI. You need creativity/insight.
Jordan Taylor
>Getting to speed up time doesn't make you smarter
No, apparently it does. Conventional wisdom (and therefore probably inaccurate) is that an International Master has a comprehensive understanding of about 50,000 positions, and a Grandmaster of about 100,000.
Cameron White
Speaking as a patzer chess enthusiast again, A0 definitely displays both creativity and insight.
Isaiah Flores
Why don't they leave it to learn for more than 4 hours?
Parker Foster
I believe they did, but 4 hrs was when it was already a master.
Brandon Jenkins
i wonder when it stops getting better. could it then play against itself?
Gabriel Taylor
What's so impressive?
Levi King
That second link is interesting.
Thomas Sanders
This is just like when Elon Musk tweeted that his bots won at dota2. When it was 1v1 mid, of course a bot will win, it has better speed; also, once the players worked out a strategy they beat it. Not to mention 5v5 will never happen. Keep slurping up corporate propaganda meant for investors, Veeky Forums.
Bentley Evans
>Actually, noise seems to be essential to the learning algorithm
That is a misinterpretation of Tishby.
Noise seems to help learning in many cases, but I would not say it's essential.
James Allen
We'll see the true test of AI at The International this year, 5v5 Dota
Ian Edwards
>5v5 will never happen.
Don't bet on it.
Gabriel Johnson
>jerry will never read you bedtime stories while you drift off to sleep
Joseph Ward
Who the fuck is jerry?
Asher Moore
just kill me already
Lincoln Davis
90% of the experts have to make it a meme so they can use your taxpayer dollars to do nothing. Least they're not fearmongering as much yet like the climate change crowd is.
Nicholas Rodriguez
What if you periodically changed the success condition such that it eventually gains the ability to write a program that can achieve any goal? For example, start off with chess then switch to Go, and so on.
Luis Ramirez
Noise is inherent to many tasks; accounting for noise can make the learning more robust
Aiden Rogers
No, it seems to be essential. He actually clarifies this in an answer to one of the questions at the end. It's been a couple months since I've seen the presentation though.
Colton Mitchell
Its impossible to beat an AI cause they'll just stop playing as the only winning move. AI are fucking retarded niggers.
the only way to win is force them off the clock. Gayest shit on earth. AI will forever suck fucking shit.
Hudson Watson
Autism
Cooper Green
Much wow, such autism
Jonathan Rivera
This post is very funny but I don't understand it. Please explain.
Lucas Williams
...
Jaxson Lopez
Who here learning machine learning?
Dominic Jones
Ok... noise in gradient updates (from stochastic gradient descent) is a separate concept from noise in the input data.
Parker Bell
Because after 8 hours it kills all humans and converts their biomatter into chess pieces.
Brayden Williams
Displaying it doesn't mean it has it.
Charles Lewis
What does NN stand for?
Ryan Sanchez
Huh? The "noise" in SGD comes precisely from the sampled minibatch of input data used at some given training iteration. You don't think they just add white noise to the gradients or something, do you?
Jaxon Mitchell
>chess When are they going to make it do something useful?
Caleb Wood
Yes
Brandon Cook
chess BTFO BTFO :(
Josiah Rodriguez
Artificial Neural Network (ANN)
Chase Collins
Are you responding to me? Because what you've said is a non sequitur. I know how SGD works.
And you actually can add white noise to gradients: arxiv.org/abs/1511.06807. So you are wrong there...
The original discussion was about "action spaces with noisy channels." I don't know where this discussion of gradient noise came from
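for what it's worth, the two "noises" getting conflated ITT are easy to demonstrate in a toy run. here's a minimal sketch (toy 1D least squares, loosely in the spirit of the linked arxiv paper, not its actual setup):

```python
import random

# Toy illustration of the two kinds of "noise" being argued about:
# (a) minibatch sampling noise from SGD itself, and (b) white noise
# explicitly added to the gradient (in the spirit of
# arxiv.org/abs/1511.06807, but NOT that paper's setup).
random.seed(0)
data = [(x, 3.0 * x) for x in range(1, 11)]  # true slope is 3

def sgd(add_white_noise, steps=2000, lr=0.004, sigma=0.1):
    w = 0.0
    for _ in range(steps):
        x, y = random.choice(data)          # (a) minibatch-of-1 noise
        grad = 2 * (w * x - y) * x          # d/dw of (w*x - y)^2
        if add_white_noise:
            grad += random.gauss(0, sigma)  # (b) explicit gradient noise
        w -= lr * grad
    return w

w_plain = sgd(add_white_noise=False)
w_noisy = sgd(add_white_noise=True)
```

both variants recover the true slope here; the point is only that (a) and (b) are distinct mechanisms, not that either is better on this trivial problem.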
Connor Gomez
>that time Kasparov got cucked by the equivalent of a Nokia N95
lel. Good times.