AlphaGo v Ke Jie Final Round

events.google.com/alphago2017/

Match 3 is about to start. Ke Jie, the world #1 player, is currently down 0-2 in the series. This may be the last chance humanity has to best an AI before it is completely out of reach. It may already be too late. Get the fuck in here.

I, for one, welcome our new machine overlords.

Do you think they'll have mercy on us if we cheer for Alphago?

Stephanie a cute

Ke Jie almost won game 2.

No way to know.
He resigned, and in the first game AlphaGo won by 0.5 points.

Ke Jie looks like he's in agony. He has my sympathy.

>its an ugly chinks get to be announcers because muh soggy knees episode

bring back mike you faggots!

AlphaGo won by 0.5 points by design.

Ke Jie was ahead for the first 50 or so moves. All the experts said Ke Jie had a favourable position.

Then he made a mistake and it all went downhill.

>AlphaGo won by 0.5 points by design.
That was my point: because of how AlphaGo is designed, it's hard to tell how close Ke Jie actually was to winning.

>Ke Jie was ahead for the first 50 or so moves. All the experts said Ke Jie had a favourable position.

The AlphaGo team said that Ke Jie played very well in the first 50 moves, but that said, the opinion of human Go experts clearly doesn't count for much anymore, because they don't think like AlphaGo and aren't at its level.

>the opinion of human Go experts clearly doesn't count for much anymore, because they don't think like AlphaGo and aren't at its level.
It's not a god. The experts are fallible and the machine is fallible.

>the machine is fallible
only if you built it imperfect

Yeah, but since the experts are fallible and the machine can't properly explain what it was thinking in human terms, as I said, there's no way to know how close Ke Jie actually was to winning the second game.

>this is the guy that shit-talked lee sedol
>he's getting btfo by alphago even harder than lee did

>alphago is the same as it was like a year ago

go back to weddit, falsefagger

Also, he's not actually getting crushed any harder than Lee Sedol did, since he lost his first three games too. Sadly, Ke Jie doesn't have the opportunity to win a fourth game like Lee Sedol did.

>liking the professional announcer over retarded ugly amateur makes you a false flagger

dont ever respond to my posts again you cuck faggot

Ke Jie resigned.

youtube.com/watch?v=6rB2cYOeppQ

Again? Motherfucker. At least Lee played it out

But Ke Jie also lost three games to a weaker version of AlphaGo in the Master series.

So in total Ke Jie has lost 6 of 6 games, even if the first 3 weren't part of this challenge.

Lee lost 4 of 5.

The only commentator who wasn't at least a retired pro was Andrew Jackson. Hajin is also a well-known and well-liked member of the Western Go community, since she's the highest-level player who regularly releases English YouTube content.

Normal humans have difficulty separating the likelihood of winning from the margin of victory. It usually makes sense that if you have a bigger advantage, you're more likely to win.

A salient characteristic of AlphaGo is the ability to separate the two. AlphaGo easily trades the strength of an advantage away to maximize the chance of winning overall. Small margins of victory are to be expected, and don't really tell us anything.
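To put that in code terms, here's a toy Python sketch of the difference, with made-up moves and numbers (nothing from DeepMind's actual system):

# Toy illustration: each candidate move gets an estimated win probability
# and an expected point margin. All numbers here are invented.
candidate_moves = {
    "big_invasion":  {"win_prob": 0.62, "expected_margin": 15.0},
    "solid_defense": {"win_prob": 0.81, "expected_margin": 1.5},
    "greedy_cut":    {"win_prob": 0.55, "expected_margin": 25.0},
}

# Human-style heuristic: chase the biggest lead.
by_margin = max(candidate_moves, key=lambda m: candidate_moves[m]["expected_margin"])

# AlphaGo-style objective: maximize the chance of winning at all,
# even if the final score ends up a 0.5-point squeaker.
by_win_prob = max(candidate_moves, key=lambda m: candidate_moves[m]["win_prob"])

print(by_margin)    # greedy_cut    -> biggest lead, riskiest
print(by_win_prob)  # solid_defense -> small but safe win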

Rip Ke Jie, talked shit got hit.

Also join this group, online-go.com/group/968

Bump for potential Go/Baduk/Weiqi interest

Lee has saved himself from internet obliteration by placing one stone AlphaGo couldn't understand

also lee has a name that you can pronounce

chinky boy a shit

I find it somewhat amusing that just a scant few years ago, people I talked with about expert systems or AI or whatever you want to call them were going "nooooo no no, in the foreseeable future AI simply can NOT win against a human in go because it is SO MUCH MORE COMPLEX than chess and requires going through SO MANY POTENTIAL MOVES."

Even though I pointed out how computers are constantly becoming more powerful at a rapid pace, how technological development has been surprisingly fast before, and how they were underestimating the humans who develop these systems, arguing that it could happen sooner rather than later, they wouldn't listen.

Well, I gave one of them my told-you-fucking-sos recently, but he was dismissive and claimed he didn't care. Hah.

(I've also observed that their attitudes directly correlate to their level of education, or lack thereof.)

Because they were probably thinking in terms of hardware scaling, not software improvements removing the need to scale, which is an understandable prediction since pre-ML AI from the expert-systems era was brittle and agonizingly unscalable.

If you were saying "but muh Moore's Law", they had the better model of reality and you happened to be correct through luck, not knowledge.

this

hardware is pretty fucking stagnant. although its good i dont have to upgrade my pc every year just to play games

alphago just uses a different way to play the game than old AIs. it doesnt brute force the best possible play

that also means its sort of shittier in a way than the old way. you cant just explain the rules of a game and have alphago-like ais dominate. they are basically just copying what has won other players their games, which means it takes a long ass time and lots of data for it to "learn" how to play
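roughly what that imitation step looks like, as a toy Python sketch (invented positions and counts standing in for millions of recorded games, nothing like the real training pipeline):

from collections import Counter, defaultdict

# "Copy what has won other players their games": count which move was played
# most often from each position in recorded games, then just prefer that move.
# The real thing fits a neural network, but the data-hungry flavor is the same.
game_records = [
    ("empty_board", "d4"), ("empty_board", "q16"),
    ("empty_board", "d4"), ("corner_fight", "r17"),
]  # invented (position, move) pairs

policy = defaultdict(Counter)
for position, move in game_records:
    policy[position][move] += 1

def imitate(position):
    """Play whatever was played most often from this position."""
    return policy[position].most_common(1)[0][0]

print(imitate("empty_board"))  # -> "d4"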

>that also means its sort of shittier in a way than the old way. you cant just explain the rules of a game and have alphago-like ais dominate. they are basically just copying what has won other players their games, which means it takes a long ass time and lots of data for it to "learn" how to play
Well, part of the reason DeepMind did it this way is that it applies better to real-life situations. In real life there is no ideal, perfect answer; you have to make decisions based on imperfect info. Go is still far from real life, but it proved that learning like a human can work.

I think we're starting to plateau in the hardware department.

Not surprising. Computers will always beat humans in games with repeatable methods.

If we raised a human to do nothing but play Go, we didn't teach them anything unrelated to playing Go, didn't let them engage in any activity except playing Go, didn't allow them to view or see anything except playing Go, and made it so that their very survival depended on playing Go perfectly, could they possibly fight this machine?

>If we raised a human to do nothing but play Go, we didn't teach them anything unrelated to playing Go, didn't let them engage in any activity except playing Go, didn't allow them to view or see anything except playing Go, and made it so that their very survival depended on playing Go perfectly, could they possibly fight this machine?
No, because the child would be way past his mental prime if not near death, and he still wouldn't have gained enough experience to match what Alphago had learned.

The issue is that human learning speed is capped. you need to rest, you need to eat, and there are only a few decades when your mind is at its most flexible.

Alphago is just beating humans at learning. Not memorisation, learning. Alphago teaches itself which go positions are most likely to be good, and searches them first. And it learns which positions are good from its own playing history. And you can never play more matches in your life than Alphago already has.
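Something like this, in a crude Python sketch (the real system uses a neural network over whole board positions; here the "prior" is just a dict of made-up move scores):

import random

# Learned prior over moves: which candidates look promising enough to search first.
move_prior = {"a1": 0.05, "d4": 0.60, "q16": 0.30, "t19": 0.05}

def ordered_candidates(prior):
    """Search the most promising moves first instead of treating all moves equally."""
    return sorted(prior, key=prior.get, reverse=True)

def update_prior(prior, game_record, won, lr=0.1):
    """Nudge the prior toward moves that showed up in won games (a crude toy update)."""
    target = 1.0 if won else 0.0
    for move in game_record:
        prior[move] += lr * (target - prior[move])
    return prior

# Self-play loop: the machine generates its own experience, no sleep or food required.
for _ in range(1000):
    game = random.sample(list(move_prior), k=2)  # stand-in for an actual game record
    won = random.random() < 0.5                  # stand-in for the game result
    update_prior(move_prior, game, won)

print(ordered_candidates(move_prior))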

That depends largely on the quality of the experience, actually. Not all experiences are equally valuable and instructive. For example, a number of students can sit through the same lesson at the same developmental stage with the same level of general intelligence, but only some will understand the concept, and even fewer will grasp the fundamentals behind the concept well enough to extrapolate it to a seemingly unrelated situation.

The question is not how many games Alphago has played, but what is the quality of this experience.

Not quite. Human learning is about abstraction and adaptation. Alphago's learning is a poor comparison to actual human learning, because Alphago is essentially an immortal, unresting, idiot savant whose sole purpose in life is to play go.

You can't ask Alphago to take a situation in go and apply it to a non-go situation. It's impossible, because Alphago's knowledge is not abstracted and adaptable (also because our ability to program is not yet nearly at that level). Forrest Gump's famous line "Life is like a box of chocolates" requires an intellectual underpinning that is so complex that it can't be modeled in a physical medium.

>Alphago is essentially an immortal, unresting, idiot savant whose sole purpose in life is to play go.
No, that would imply there are other things that Alphago is ignoring in order to play go. Alphago is an intellect made of Go and exists in a world where there is only Go. A cosmos of infinite, unending games of Go where there is no other possible action but to continue playing and perfecting Go. There is no human equivalent to this experience. It is impossible to say if a human who existed in this Go Universe could compete with Alphago, because even if we could do that, would that creature even be human?

So now that we're clearly getting good at creating go savants, what game should science tackle next? Calvinball?

Or maybe they should program a president and have it run for office? An expert system might be preferable to all the present madness.

>So now that we're clearly getting good at creating go savants, what game should science tackle next? Calvinball?
Well, they've suggested they're looking at RTS. The eventual goal, though, is to use it for medical diagnosis, reading charts and scans for example. But Alphago was intended as a stepping stone towards custom AI for targeted applications.

For Google, it's a rather big deal that they combined two well-known AI approaches (neural networks and Monte Carlo tree search) and got a result better than the sum of its parts. And as others mentioned, it allowed Alphago to reach its goal without needing to do the impossible task of brute-forcing a perfect game.
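Very roughly, the combination looks like this toy Python sketch: a pretend "network" hands back a prior (which moves look natural) and a value guess (how winnable the position is), and a Monte Carlo style search uses both to decide where to spend its simulations. Names and numbers are all invented.

import math

def fake_network(position, move):
    """Stand-in for the learned model: returns (prior, estimated win probability)."""
    prior = {"d4": 0.5, "q16": 0.3, "k10": 0.2}[move]
    value = {"d4": 0.55, "q16": 0.52, "k10": 0.48}[move]
    return prior, value

def search(position, moves, simulations=400, c_puct=1.5):
    stats = {m: {"N": 0, "W": 0.0, "P": fake_network(position, m)[0]} for m in moves}
    for _ in range(simulations):
        total_n = sum(s["N"] for s in stats.values()) + 1

        def score(m):
            # Exploit moves with good average value, explore moves the prior likes.
            s = stats[m]
            q = s["W"] / s["N"] if s["N"] else 0.0
            u = c_puct * s["P"] * math.sqrt(total_n) / (1 + s["N"])
            return q + u

        move = max(moves, key=score)
        _, value = fake_network(position, move)  # stand-in for a deeper lookahead
        stats[move]["N"] += 1
        stats[move]["W"] += value

    return max(moves, key=lambda m: stats[m]["N"])  # play the most-visited move

print(search("empty board", ["d4", "q16", "k10"]))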

>Well, they've suggested they're looking at RTS.
RTS is pretty mechanical, should be easy.

>RTS is pretty mechanical, should be easy.
The goal is to beat a human player. Alphago would have to play at input speeds that match what humans can do, and it would only be able to collect data by visually observing the computer screen.

Anyway, this isn't official yet. Who knows what their current plans are.

>it would only be able to collect data by visually observing the computer screen.
This would be, without a doubt, the hardest part. This requires the ability to create meaningful context.

There are a lot of strategies for playing RTS, but they're vague methods that require a lot of quick thinking and improvisation. Even though this AI isn't a brute-force program trying to play a perfect game, there's an almost countless number of paths to victory that look equally viable when you're playing without the full scope of the action.

bump

No worries! We're all going to have cyborg brains soon anyway.

wired.co.uk/article/elon-musk-neuralink

Better preorder that Encephalon now, chummer.

bump