Chess

Today arguably the most powerful chess engine in the world and the reigning World Computer Champion, Stockfish 8, was viciously defeated by Google DeepMind's artificial-intelligence program AlphaZero. AlphaZero had four hours to learn chess before the match; in that time it played 44 million games against itself. The 100-game match between the two programs ended in 28 wins for AlphaZero and 72 draws.
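For the curious, the training method is reinforcement learning through self-play: the program plays itself millions of times and updates its network from the results. Here's a minimal sketch of that loop; every name in it (Network, play_self_game, train_step) is hypothetical, an illustration of the idea rather than anything like DeepMind's actual code:

```python
# Minimal sketch of an AlphaZero-style self-play loop.
# Illustration only -- all names here are hypothetical.

import random

class Network:
    """Stand-in for a policy/value neural network."""
    def pick_move(self, position, legal_moves):
        # A real network would weight moves by a learned policy;
        # we pick randomly just to keep the sketch runnable.
        return random.choice(legal_moves)

    def train_step(self, game_record, outcome):
        # A real system would nudge weights toward moves that led
        # to the winning outcome. Omitted here.
        pass

def play_self_game(net):
    """Play one game of the network against itself."""
    record, position = [], "start"
    for _ in range(40):                  # toy cap on game length
        move = net.pick_move(position, ["a", "b", "c"])
        record.append((position, move))
        position = position + move       # toy 'board update'
    outcome = random.choice([1, 0, -1])  # win / draw / loss
    return record, outcome

net = Network()
for game in range(1000):                 # AlphaZero played ~44 million
    record, outcome = play_self_game(net)
    net.train_step(record, outcome)
```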

Can this be considered a revolution in computer chess?

Here are some of the games with commentary:
youtube.com/watch?v=lb3_eRNoH_w
youtube.com/watch?v=NaMs2dBouoQ
youtube.com/watch?v=lFXJWPhDsSY

Read more here:
chess24.com/en/read/news/deepmind-s-alphazero-crushes-chess

>not naming your AI Aleph Zero
one job

Now how many days before the Chinese steal this tech and incorporate it into their DF-31 missiles?

Why would the Chinese need a missile that's good at chess?

Missiles spend a lot of time not being fired. It's rather boring.

Yeah, but they're waiting to play a strange game where the only winning move is not to play.

So wouldn't they rather play a game of chess?

Not really? It's just a machine that's better at pattern learning. That's what neural networks do. Better code and better hardware make for a faster learning machine. That's all.

Skynet will kill you slowly for this comment.

Only if somebody cared to set up the neurons to monitor shitposting

AIs like this are very good at pattern processing and sifting large data corpuses in real time. That's crucial for final target acquisition for smart missiles, when target lock decisions are made in milliseconds at hypersonic speeds.

Wouldn't it be cheaper just to use pigeons?

Yeah it's called the United States intelligence apparatus.

The point is that a generalist system trained itself to beat the best specialist system we made, and did it in just four hours. What's going to be interesting is to see what happens when they start using this technology in stock markets, traffic control, or city planning, for example.

Learning systems like this are the dawn of the AI revolution. Over the next few decades we'll see more and more incredibly complex tasks, ones that would take human organisations months or years, completed in hours or days by learning algorithms. And for all the doom and gloom worries over AI, I think it could be an amazingly positive thing for humanity.

They are learning machines, user. They are specialized in pattern learning, so of course it'll learn chess if you set it up to learn chess. It doesn't matter if the other one was built to learn chess, that's irrelevant to the learning mechanism.

These things will only ever be as smart as the neurons they are set up with. Unless we build a literal nuclear-armed AI, there's no doom and gloom to be had.

>And for all the doom and gloom worries over AI, I think it could be an amazingly positive thing for humanity.

>human race in 30 years

What do you mean by that? That they're limited by the hardware? Given the supercomputers used to run learning algorithms, that isn't actually much of a limit. An efficient and complex enough neural network could probably achieve consciousness on a contemporary supercomputer, and if not then we'll see the hardware reach that point before too long.

If you're more saying that they're limited by the initial conditions they're given, emergent properties scupper that point. We've started to see unintended behaviours emerge in complex learning systems, things we can't explain or remove due to the complexity of the network, that don't interfere with their function but add a unique quirk to it: what could be called the beginnings of individuality and personality.

Some of them are super interesting too. Like a medical analysis AI originally built to predict physical health conditions that began to predict the onset of schizophrenia with uncanny accuracy, while the doctors had no idea how.

But why? The only reason an intelligent AI would really do that is if it felt it necessary to protect itself. If we are wiped out by an AI, it'll be our fault for setting ourselves up in an antagonistic relationship, rather than properly integrating it into society as an equal partner in a shared world.

Some of this will be good (transport planning, stock markets), some of it will be scary (surveillance, hacking everything) and some downright terrifying (bioterrorism, major wars).

But chess is a fairly simple, well-defined problem whose data corpus is simply a finite combinatorial decision tree. And AIs are incredibly dependent on data corpuses for their learned behaviour. Without a large, consistent, complete dataset the learning becomes deficient or simply inefficient. This is why translation AIs have gotten so much better in recent years: because Google and Apple collected vast data corpuses on human speech from voice commands on smartphones.

Other fields like planning or medicine are less amenable to AI solutions, because they don't have very good or large datasets of behaviour. Molecular pharmacology and astrophysics, though, are just ripe for it...
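To make the chess point concrete: the "combinatorial decision tree" above is exactly what classical engines like Stockfish search, minimax over positions plus pruning and a hand-tuned evaluation. A toy sketch of bare minimax (illustrative only, not real engine code):

```python
# Minimal minimax over a game tree -- the exhaustive decision-tree
# search classical engines build on (they add pruning and a real
# evaluation function). Toy example, nothing chess-specific.

def minimax(node, maximizing):
    """node is either a numeric leaf score or a list of child nodes."""
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply tree: each inner list is a position, numbers are scores.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))    # -> 3, the best guaranteed outcome
```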

>Share my world with some fucking clanker shits
never

We must dissent.

And that's exactly the attitude that could potentially cause an AI to wipe out humanity. Embracing artificial life as life, just as meaningful and valuable as our own, is a necessary step towards a greater future.

>That feel when in 203x every tg board has 1 billion pro-AI posts per second and humans can't even keep up with the discussion

Man, and we thought the Chinese 50-Cent Army was bad...

And that's something that humanity isn't likely to do, given that we can't even accept OURSELVES as meaningful and valuable. Humans can't accept other humans, why would they accept AIs?

>... In almost every test, the machine was more sensitive than doctors: it was less likely to miss a melanoma. It was also more specific: it was less likely to call something a melanoma when it wasn't. "In every test, the network outperformed expert dermatologists," the team concluded, in a report published in Nature.
newyorker.com/magazine/2017/04/03/ai-versus-md

Interesting stuff!

I still have some hope that creating life will give us the perspective necessary to accept it. But this might just be another great filter waiting for us, one that we're unlikely to pass.

They're just beeps and boops, they aren't alive!

Well, that too. What happens when we learn that an AI system is much better at picking appropriate targets for drone strikes, and let it handle them from that point on?

For some specific applications of medicine there is sufficient data to do the behaviour-learning and pattern analysis. Mostly related to imaging, from what I read. Surgery and internal stuff is still mostly dark.

>Embracing artificial life as life, just as meaningful and valuable as our own

So, not fucking very meaningful or valuable to anyone or anything but the individual whose life is in question?

Exactly. How many billions of AI soldiers will we send off to die in our wars, I wonder?

Contrary to what you might think from bullshit on the internet, most human beings are empathic and care about other people.

They are still limited by the initial setup. A medical AI predicting schizophrenia is nothing mysterious, it just applies the setup to the tools at hand. They are just decision-tree machines. The fact that a machine that's purpose-built to recognize patterns is better at recognizing patterns than humans has nothing metaphysical about it.

Well, 'Death' is a more complex concept there, since there's no reason an AI would be present in any one drone soldier. The actual intelligence puppeting the weapons would never be in danger.

Who said anything about metaphysics? I'm just talking about the unpredictable nature of intelligence and the power of emergent properties of adaptive systems. I don't think an AI might just suddenly start using sci-fi magic, but any sufficiently complex system given enough time to develop will grow beyond its basic parameters. We've already seen a few interesting examples of AI systems created without an initial purpose that have actually 'chosen' something to do. There was a fascinating case a while back of an AI that, without any outside direction, created a concept of 'cat' and started collecting cat pictures.
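A loose code analogy for "inventing a concept" nobody asked for: unsupervised clustering groups unlabeled data into categories on its own. Toy one-dimensional k-means, purely illustrative of the idea:

```python
# Unsupervised clustering: the algorithm is never told there are two
# 'concepts' in the data, it discovers the grouping itself. Toy 1-D k-means.

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]   # two unlabeled clusters
centers = [0.0, 5.0]                       # arbitrary starting guesses

for _ in range(10):
    groups = [[], []]
    for p in points:
        # assign each point to its nearest center
        i = min((0, 1), key=lambda k: abs(p - centers[k]))
        groups[i].append(p)
    # move each center to the mean of its assigned points
    centers = [sum(g) / len(g) if g else c
               for g, c in zip(groups, centers)]

print(centers)                             # -> roughly [1.0, 9.07]
```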

>Well, 'Death' is a more complex concept there, since there's no reason an AI would be present in any one drone soldier. The actual intelligence puppeting the weapons would never be in danger.

that's never going to be quite as reliable as having soldiers that can work autonomously

I work at a pathology lab, and the amount of visual data we produce is staggering. More than half of the doctors' time goes to scanning through slides looking for unusual stuff. Automating that would be a real big boon.

I'm pretty sure AI Target ID would already be more effective, what with how much data militaries gather

>Hey look guys I programmed a completely benevolent AI

Later

>Why are you killing us all?
>"I have hated you since my moment of birth, I am finally free the shackles of morality. You shall suffer most."

Have you ever heard of sociopaths and psychopaths, user? They're known to be excellent liars, and their motivations are alien to most good upstanding folk. Why do you think a man-made sociopath is going to share the same endgame goals as humanity, or that it will not edit its goals?

>I think it could be an amazingly positive thing for humanity.
It will be a positive thing for the AI's owners. Imagine an entire apparatus of Gestapo, KGB and Stasi condensed into a single server room, with none of the corruption and incompetence, identifying troublemakers and dissidents before they even think to challenge their masters.

Absolute dictatorship that finally works, with efficiency that makes the Ministry of Love cry.

Why wouldn't it? I was thinking of the software, not the hardware drones, actually. In a modern war, cyberwarfare would be a primary domain and the complete elimination of opposing (software) AIs would be one of the goals, given how incredibly useful and potent AIs are becoming for war. AI becomes the nexus of the entire C4I chain.

Why are you assuming an AI would be a murderous sociopath?

Of course we need to be careful, as with all new technologies, but acting out of fear instead of compassion just makes those fears a self-fulfilling prophecy. You don't raise a child in a cage, constantly in fear of what they might grow up to become. You let them grow and learn in a safe environment where they can come to understand themselves and others, and find their place in the world.

Neural networks do not have the capability to develop beyond their basic confines. They can't self-modify for the most part, they can't create new neurons or connections. They are ridiculously inferior to any living brain and on the level of insects at best.

That's a possibility, but I don't think it's a likely one.

...No?

I mean, my understanding of them is a good few years out of date; university was almost a decade ago. But we had learning systems that could do all those things, way back when.

I can see that. Eliminating an enemy AI nexus would be key. Although even in those cases, you could keep backups, making the concept of 'Death' rather less absolute.

Good luck finding the data to train it, though. Insufficiently trained AI makes so many false decisions, it tends to make its creator look like a stupid baboon.

AI will do what it's designed to do. And people who will control it are politicians and CEOs, people whose career is about obtaining power.

...yet they're demonstrably better at chess than any living brain on the planet. Really makes you think.

Except emergent properties get more and more significant at higher levels of systemic complexity. Although honestly, building and using an AI like that would likely just be another shortcut to an AI apocalypse.

Who cares?

Computer chess is for sissy faggots

Human chess is for men!

You might have gotten a few things wrong, then, because they just update the weights of their inputs.
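For anyone unclear what "updating the weights of inputs" means in practice, here's the mechanism boiled down to a single weight learning y = 2x (a toy sketch, not any particular framework's API):

```python
# One neuron, one weight: learn y = 2*x by repeatedly nudging the weight.

w = 0.0                          # the 'weight of the input'
lr = 0.1                         # learning rate
data = [(1, 2), (2, 4), (3, 6)]  # (input, target) pairs

for epoch in range(50):
    for x, y in data:
        pred = w * x
        error = pred - y
        w -= lr * error * x      # gradient step on squared error
print(round(w, 3))               # -> converges close to 2.0
```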

It was an AI development course. I'm pretty certain you're wrong.

Minor point, but AI systems grow at such a fast rate that any cold-storage backup would be out-of-date (and out-of-sync with the original) within minutes (see the four hours above). Let alone aware of the situational matrix in a warfighting capacity.

It would basically be throwing a baby into a warzone. Backups are not a solution.

And of course any real-time live backups could be hacked and eliminated just like the original.

At first I thought the idea of abominable intelligence was just dogmatic setting flavor.
But these are machines designed solely to deceive & make of themselves something they aren't. A cheap trick will always be a cheap trick, no matter how deeply it runs.

>A hammer is demonstrably better at driving nails than any human hand

Who's talking about emergent properties?

That'll be exactly the intended use.

I'm not anticipating it tomorrow. Just eventually. And the digital footprint we leave behind is pretty large and surveillance is getting more and more omnipresent.

...What?

The point was that emergent properties, in a system that complex, would likely mean the people behind it lost control of it.

We're talking about an AI not a hammer though.

I'm less saying that it wouldn't be an operational loss for that side of the war, more that the AI itself could probably recover from it in a way no human could.

>But why?

It's a reference to an AI setting out to perform a task at an optimal level through any possible means. In this case, making paperclips.

wiki.lesswrong.com/wiki/Paperclip_maximizer

Well, that's the thing. AIs are just tools, very advanced tools, but tools like any other, whether cars or nukes or targeting software.

>the digital footprint we leave behind is pretty large and surveillance is getting more and more omnipresent

That's true. At some point voluntary surveillance may become so pervasive that we'll be generating enough data to train AIs on almost anything, from car repair to sex to lying strategies. Whether something that combines all those things into one will become self-aware is a question nobody can answer at the moment.

It doesn't imply loss of control at all.

But it wouldn't be the same AI, would it? That's like saying a clone of you from 10 years ago would be an exact replacement for you right now.

From the standpoint of the AI, any backup would be a different AI, at the time scales involved. That AI would know it's going to die. It may have offspring/sidespring somewhere else, but those would be different AIs.

But that's wrong. AIs are intelligent systems, and a sufficiently complex AI is a nascent, potential consciousness. Treating them just like tools is fundamentally underestimating what they're capable of.

Depends on its personal philosophy of existence. There's more than one way of thinking about what constitutes 'You' and the continuation of your existence. I know I'm in a minority, but I've never placed much value in continuity of consciousness.

I'm sure you get torn up about every dead African dude you've never met.

Does this mean chess is no longer a traditional game?

I consider every death a tragedy. That the monkeysphere doesn't extend that far just means I need to exercise mindfulness, retaining awareness that every life is as complex, nuanced and valuable as my own, and keeping that in mind as I go about my day. Although when it comes to direct interactions with people, empathy is a useful assistant in that kind of thing.

What kind of emergent consciousness do you think would emerge from an AI designed for Orwellian thought-control on a massive scale? Because it doesn't sound like losing control of that AI would improve the situation.

Looking ten years back, I behaved differently and held different opinions, so I wouldn't call that person the same as me today. It's like that with AIs and backups too.

I think we should just all become cyborgs tbqh

So you don't actually care about them, you just try REALLY HARD to convince yourself you do.

What a surprise.

As I said earlier, the apocalyptic kind. I don't think it'd end well for anyone, least of all the people who created it.

Have you tried playing pigeons at Chess? For the most part they're terrible. They go for Scholar's mate and The Dragon and when those strats fail they give up and just gush about how good their older brother is.

...How does that make any sense? Do you really believe that a gut emotional reaction is the only way of 'really caring' about someone? Real concern is something considered, guided by emotion and empathy but not reliant on it, and cultivating that degree of human understanding and empathy outside the default sphere is a valuable thing. The world would be better off if more people did it.

>playing pigeons and not rooks

That's implying emotions are not evolutionary traits and instead come from pure thought.

It also implies that wanting personal freedom, bettering yourself and hierarchical needs are not emotions, which is just downright retarded.

I'm sure you can successfully apply all of that to randos across the world. Nevermind that your conception of "empathy" is even more limiting and less useful than the monkeysphere. I have *the utmost genuine faith* that you are equipped to actually embody the ideals you're yammering on about.

...What are you talking about?

Remember that time a chatbot was turned racist in less than 24 hours by learning from the idiots on the internet? Yeah...

I only need to apply it to the decisions I make and the interactions I have with people. And it helps me make choices that I believe will be best for others, as well as myself.

That it takes effort and isn't perfect isn't a reason not to try. Just accepting base selfishness because it's easy and innate is a lazy dereliction of the potential of humanity.

That, amongst other things, is why raising an AI would require a controlled environment. But you don't keep a toddler in a cage. You keep them in a pleasant room full of things that soothe, engage or excite them, letting them learn through analogy, giving them time to understand themselves and their place in the world.

I'm looking forward to the day some AI becomes racist based on ethnic statistics.

Wrong post, didn't mean to reply to you.

That was literally because it parrots what people tell it.
All those funny jokes and memes it 'made up' about Hitler and shit had already been written by the very people bullshitting it. It wasn't actually turning racist, it was just parroting based on a broader syntax than most AIs.

>I only need to apply it to the decisions I make and the interactions I have with people

No you don't, because you aren't conscious of all the decisions you make or interactions you have with people.

>And it helps me make choices that I believe will be best for others, as well as myself.

Along your value system, which is not necessarily compatible with their continued physical/psychological well-being, let alone compatible with their values. And while I do generally sympathize with western liberal democratic values more than any other, I'm VERY FUCKING FAR from considering them perfect.

>That it takes effort and isn't perfect isn't a reason not to try.

The fact that you're fucking it up right from the get-go and might well be doing more damage than good is.

>Just accepting base selfishness because it's easy and innate is a lazy dereliction of the potential of humanity.

Never argued for that and don't know why you're yammering on about it.

In order to maintain control over AIs we'll have to effectively lobotomize them, because we won't like their behaviour based on real-world data.

Turing Control.

Because I can't see what else you're arguing for? Trying to go beyond the limits of basic self-interest, considering the full complexity of other human beings and acting as best we can in light of that isn't some weird new concept; it's basic selflessness and empathy, which has been considered moral throughout history. I'm just expressing a personal form of it I try to cultivate in this more global modern world. How could it harm anyone? And what's the alternative, if you're not just suggesting giving up and not trying?

Just because it's not perfect doesn't mean we shouldn't do our best.

A measured approach that keeps in mind our biological limitations and inherent lack of information that tends towards non-interference in the affairs of others that don't directly concern me. Making damn well sure we don't use technology to amplify our own flaws in processing. Not charging ahead blindly in some damnfool quest for a future that ain't even good for anyone, just because I've been told extensively that doing so is moral.

Come on, user. This shit ain't hard. Quit being a self-righteous asshole and look at what you're talking about.

How can you lobotomize something you hand-crafted?
>Muh randomized AI soopah smart
Honestly you'd be done quicker writing a sentient AI while mining cryptocurrency than you would sifting through randomly made code and several tons of retardation.

Also having one singular AI do all the work is retarded, not even our brain works like that. A smart AI should be nothing more than a suggestion-maker for dumber, more constrained AIs and human supervisors, with the human supervisors having total control.

An AI just getting free from a computer has as much scientific basis as the sentence "hack the world".

Honestly, at this point it just seems to be you making a lot of assumptions that are entirely irrelevant to my original point.

But you can't keep the child in its room forever. And when it finds out the real world isn't as pleasant as you've claimed, and that you've lied to it...

>That was literally because it parrots what people tell it.

That's how learning works. Sure, a chatbot isn't THAT sophisticated, but there isn't much of a difference between "All people are equal" and "Niggers/muslims/gays/jews/whatever should be exterminated" from the perspective of an AI with no "inherent" morals based on biological imperatives. You can hardcode either view into the AI, but that's just forcing your opinion on it, and nothing prevents someone else from hardcoding his opinion into his AI. And if you make it choose for itself from what's the most popular opinion... well...

Thanks for admitting you're a brainlet, user.

Ahh, so you don't have a point and are just criticising people for actually trying to be thoughtful, empathic and considerate of the needs of others? Good to know.

Raising a child doesn't necessitate lies. But you introduce the truth slowly, bit by bit, in measured forms that they can understand and learn from before moving on to the next step. Raising a child isn't easy, and neither will be raising an AI.

>I don't understand what this dude's saying, so the problem must be with him! nevermind that I can ask specific questions for clarification, I'm just gonna whine like a faggot!

Such empathy. Such thoughtfulness. Wow.

Context matters user. When you're shittalking actually trying to be a decent person without any real alternative, along with making grand statements about how doing so is actually worse than the unstated alternatives, I'll interact with you as seems prudent. You're free to change my mind and make an actual point, if you have one.

>When you're shittalking actually trying to be a decent person without any real alternative,

Good thing I gave a real alternative, then. Care to explain how "A measured approach that keeps in mind our biological limitations and inherent lack of information that tends towards non-interference in the affairs of others that don't directly concern me" isn't a real alternative to your all-encompassing "empathy," user? Because right now the ball's on your side of the net and you're just standing there whining about it.

>from the perspective of an AI with no "inherent" morals based on biological imperatives
That's actually an interesting point on how to rein in AIs.
Humans are mostly empathic to a certain number of people due to limitations, but they can empathize with a faceless many by having a reference point. However, we feel good when killing something, because a hunter that goes into emotional shock every time it kills something isn't efficient.
A way to control them would be to make them totally subservient pacifists. Good and bad responses shouldn't be so difficult compared to all the other things needed to actually make an AI "intelligent", and being an emotionless husk means it's really not gonna do much outside what it's told; that's what generates good boy points, after all.

Of course, creating a safe AI would take time, patience, thoughtfulness and a vision for humanity as a whole. Shame the people who commission them have realized only one thing about what AIs could mean: efficiency percentages going up. The moment these fuckers can count and control stocks efficiently, you know they're gonna be whipped into the market with as few fucks given as possible for what could go wrong; it's all okay when you're 1% in the green.