A Game Computers are Bad At

I was reading about Arimaa and how it was "designed so computers would be bad at it". In 2015 a computer beat the human champion.
Chess and other games can be broken down mathematically, which means computers can play them rather well.
If you were going to design a strategy board game at which a computer would be no better than a human, where would you start?
I'm thinking the game would need to be centered on the probability of what your opponent will do, where all moves have an equal probability. The objective of winning would somehow not affect the probability of any given move.
Thoughts on this?


there are already plenty of games that computers are terrible at, see dota for example
i don't see why that guy created that bootleg chess if he wanted to make something that computers would fail at

>designed so computers would be bad at it
>is a simple board game
It's designed so everyone will be bad at it.
If you want something computers are bad at, take a look at narrative writing competitions and image-recognition.

>see dota for example
But computers beat the top humans at that piece of shit game.

the best way is to make it as convoluted as fuck, with as many highly varied game states as possible.

Make it with rules an autist wouldn't be happy with. Like roll a twelve and your cat judges the competitors in an impromptu dance contest for thirty-six points, then everybody has to take a shot.

Computers are really bad at taking shots of anything lower than 180 proof.

>"a computer is bad at"
This is a retarded concept that I wish would go away. The computer isn't good or bad at anything, it isn't doing anything of its own volition, it is following a set of instructions.
To that end, if you can "solve" (reduce the game down to every state) the game then a human can program a computer to complete the task successfully for all given states.

Now if it were a learning computer, one that no human programmed but that instead programmed itself through information it gathered, then it might be engaged in a kind of cognition, at which point we could say whether it's doing a "good" job or a "bad" job. But that's almost never the sort of computer (or rather program) being talked about, so really it's a question of "did the programmer do a good job programming this software, yes or no".

Are there not drills that do a bad job at drilling holes? A computer is a tool like any other. And certainly, computers can be bad at doing certain things. Most personal computers are bad at pushing hyper-realistic graphics in real time. I don't see your point.

>But computers beat the top humans at that piece of shit game.
absolutely not

I'd make a game where you have to program stuff. For example, instead of the players making moves themselves, they would write little scripts that determine what kind of moves are to be performed under different conditions. Also the starting position of the game should be randomized so that opening books are rendered useless.

Computers wouldn't be good at this game since they are still incapable of programming anything non-trivial or coming up with their own algorithmic ideas. Also the search space would be absolutely enormous so brute force search wouldn't work either.
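The "players submit scripts instead of moves" idea can be sketched in a few lines. Everything here (the piece values, the two toy strategies, the scoring rule) is invented purely for illustration:

```python
import random

def aggressive(my_pieces, opp_pieces, rng):
    # toy strategy: always commit the strongest piece
    return max(my_pieces)

def cautious(my_pieces, opp_pieces, rng):
    # toy strategy: hold back the weakest piece when outnumbered
    return min(my_pieces) if len(my_pieces) < len(opp_pieces) else max(my_pieces)

def play_match(strat_a, strat_b, seed=0, turns=10):
    """Engine calls the submitted scripts each turn.
    Randomized starting position, so 'opening books' are useless."""
    rng = random.Random(seed)
    a = [rng.randint(1, 6) for _ in range(3)]   # random piece strengths
    b = [rng.randint(1, 6) for _ in range(3)]
    score_a = score_b = 0
    for _ in range(turns):
        move_a, move_b = strat_a(a, b, rng), strat_b(b, a, rng)
        if move_a > move_b:        # winner scores, loser's piece is removed
            score_a += 1
            b.remove(move_b)
        elif move_b > move_a:
            score_b += 1
            a.remove(move_a)
        if not a or not b:
            break
    return score_a, score_b
```

The competitive layer would be writing better strategy functions, not making moves, which is exactly the part current programs can't automate well.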

I think he's saying that it is not something inherent to the computer that is making it "bad" at games. A drill could be poor because its tip is loose, it doesn't spin fast, etc. These are all properties of the drill. A computer being bad at a game would be a problem with how the computer was programmed to play it, not with how the computer inherently "is". I think your example is true, but here I think we are considering computers as a whole, not a personal one. For instance, "drills" as a tool are by far the best at their task. It's kind of like the legend of John Henry's hammer in the sense OP is considering.

That being said, I think there are things that humans could still beat any modern computer at: any physical sport (obviously), intuition or "wisdom" based decisions (maybe), as well as areas like image recognition, speech recognition, and other things that make humans distinct.

It seems you want to target machine learning applications. The trick with that is a game with a huge number of output choices that can't be reclassified under other simpler and less numerous decisions.

It's impossible theoretically. Any action in any game can be evaluated.
Games are meant for experience, not competition.

Mario Paint.

Nope

Cards Against Humanity/Whatever clone of it
Picking cards is a social behavior that humans would do much better at.

Poker

>Any action in any game can be evaluated.
That's technically true, but when you'd need to simulate a room full of people at a very complex level to "evaluate" a move, I think it counts as not being possible to evaluate right now.

That would make it easier for humans to make mistakes, and computers would eventually evolve to problem solve.

Actually yes.
youtube.com/watch?v=92tn67YDXg0

Games like hearthstone

Why not just have the computer play at Candyland or Chutes and Ladders? There’s no inherent skill in the game and it’s all random chance.

Except if you have ever played with “Rando Cardrisian” he can get quite a number of good cards based on random pulls.

>doesn't understand plural forms
>cherry picks a single instance
You do realize that there was a little documentary about this right? The AI got figured out then got stomped because it's just not as flexible as actual players.

>a game computers are "bad" at

Nursing and direct patient care.

>noncommunicative subjects
>meaningless numbers and routines that don't align with symptoms or orders
>countertherapeutic patient preferences
>noisy environment

I honestly cannot understand how nurses do fuck all with the shit they have to keep track of.

Actually any games that include unpredictable social situations. Basically RPGs (I know they don't have a winning objective, but still progress can be measured)

>The computer isn't good or bad at anything, it isn't doing anything of its own volition, it is following a set of instructions.
Well actually, the human brain is just a cellular automaton that moves between different states depending on external stimuli. Sufficiently advanced technology can be mistaken for magic, but in reality a human chess player is just a highly complicated Turing machine that was programmed by an amateur.

Since the end goal of AI is to perfectly mimic a conscious thinking mind, there is theoretically no game that computers will always be bad at.

But if you want to make a game that CURRENT computers would be bad at, the thing modern AI is unable to do well is tying emotions to visual context.

So something like pictionary but with a lot of "draw the concept of evil" "draw the concept of love" "draw happiness" and similar concepts. Humans should have little problem doing that, but even the best AI today would likely shit itself.

>Implies that 1v1 mid SF with no runes and no neutrals is the entirety of dota and that the game isn't a 5v5 game with over 100 heroes to pick from.

While openAI did some impressive shit it is far from being able to play the entire game.

That's true, but it's all luck of the draw then.

>"draw the concept of evil" "draw the concept of love" "draw happiness" and similar concepts
>Humans should have little problem doing that
Fuck! Looks like I'm not a human.

got a link to the documentary? i know human players defeated it but i only saw twitch stream type videos

Are you retarded? All an AI would need to do is google whatever the keyword was.

>"draw the concept of evil" "draw the concept of love" "draw happiness" and similar concepts. Humans should have little problem doing that
lol what? a human might spit out some bullshit like wringing hands, a man and a woman holding hands with a heart symbol, and a smiley, but an AI could spit something out too

>wringing hands, a man and a woman holding hands with a heart symbol, and a smiley
that's pretty much what comes up on google image search for those things

I've been at the hospital many times and nurses seem like badly programmed computers.

Dunno exactly what he's referencing, but I'd assume it's this:
blog.openai.com/dota-2/
blog.openai.com/more-on-dota-2/

AI is literally an over hyped meme right now

>image-recognition
Computers aren't bad at image recognition. That might even be the most successful application of machine learning to date, it works extremely well. I know Google Images for example is much better than I am at taking an image and finding out what it is / where it's from.

Settlers of Canan?

*Catan? Or Pirateer.
Get away from linearity.

but machine learning is ALWAYS the sort of program being talked about when it comes to AI

Computers can accomplish any task that involves patterns or instructions better than humans can.

>really it's a question of "did the programmer do a good job programming this software, yes or no".
It's definitely not that because programmers don't even know what ML programs learn.
e.g. I could write a program for a neural network that learns how to predict customer attrition for a company, and even after it successfully learns how to do that I wouldn't know what specific factors would cause it to label one customer as high risk for cancelling their contract this quarter vs. another customer at low risk.
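The attrition example can be illustrated with a toy sketch. The "customers" and features below are made up, and this is plain logistic regression trained by gradient descent rather than any real churn model, but it shows the point: the programmer writes the training procedure, not what gets learned:

```python
import math

def train_logistic(data, lr=0.1, epochs=500):
    """data: list of (features, label). Returns learned weights and bias."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))   # predicted churn probability
            g = p - y                    # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# invented toy "customers": (support tickets, years active) -> churned?
data = [([3, 0.1], 1), ([0, 2.4], 0), ([5, 0.2], 1), ([1, 3.0], 0)]
w, b = train_logistic(data)
# every line above was written by hand, yet *what was learned*
# exists only in these opaque numbers:
print(w, b)
```

Even in this tiny case, reading risk factors back out of `w` takes extra analysis; in a real network with thousands of weights it's far worse.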

yep
>muh AR
>muh google glass
>muh VR
>muh drones
>muh AI

What if you instruct a computer to go fuck itself?

programmers still have to program the neural network you moron

no

They're called fork bombs

i forked you're moms bombhole

They don't tell it what it learns. That's the point.
There's a distinction between explicit rules based programming vs. machine learning. You're retarded if you don't think that difference matters just because you still write something, that's like saying people don't really learn because our brain activity always follows from deterministic physical cause and effect relationships.
Again, the programmer himself won't even know what the program learns, so obviously this is different from an explicit rules based program.

the programmer still knows what is going on. maybe they can't recite the exact numbers like 1.22 and 4.31, but they still know what the program is doing; there is a very specific method to it

kill yourself

No you idiot, I don't still know what's going on. That's the entire point.
>there is a very specific method to it
To learning, not to what it learns. What it learns is what you don't know and what you gain by having it learn in the first place.

it's like if you do a minimax function approximation in mathematica: you just enter the command and care about the result, but if you care enough you can find out exactly the steps that go into doing a minimax approximation. it's the same with machine learning, it doesn't appear by magic, someone programmed it

kill yourself

it learns exactly what you tell it to, it can't think on its own, you "train" it with input and output and get an approximation of "AI" out of it

>it doesn't appear by magic
Neither does your own knowledge, idiot.
There's a difference between rules based programming and machine learning. Saying they're both the same because both involve programs is retarded.

>it can't think on its own, you "train" it with input and output
Tell me more about how you can learn without input.

you have this romanticized view of it because you don't really understand what is going on. to a real programmer it's about as interesting as e.g. a game engine: it has its uses, but it's overhyped

that's not what i said retard

I've been working as a software developer for the same company for 8 years now and I've written a number of ANN applications for them. My view isn't "romanticized." Your view is just retarded.

nice argument

it's like if a kid makes a game in game maker, the kid doesn't know what the game engine does, but it's really nothing exciting, you're like that kid just that you're using tensorflow or whatever

You're the one who decided to abandon the argument and make it about stupid ad homs. I'm just pointing out why your stupid ad homs are wrong.
The argument's the same: there's a difference between rules based programming and ML, and it's not a negligible difference. Your complaint about how it's still behaving in response to input says nothing, because you can say the same thing about human behavior.
>tensorflow
No, I don't use pre-built solutions. I write an actual class that handles creating the nodes, node layers, and connections and runs the gradient descent algorithm for adjusting the node and connection weights.
Maybe try learning how it works yourself so you stop having dumb opinions about it.
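For what it's worth, the kind of from-scratch network being described fits in a page. This is a minimal sketch, not anyone's actual code; the layer sizes, seed, and the XOR task are arbitrary choices:

```python
import math, random

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

class TinyNet:
    """One hidden layer, log-loss, hand-rolled gradient descent."""
    def __init__(self, n_in=2, n_hidden=4, seed=42):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        return sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)

    def train_step(self, x, y, lr=0.5):
        p = self.forward(x)
        d_out = p - y                     # dLoss/dz for sigmoid + log-loss
        for j, h in enumerate(self.h):
            d_h = d_out * self.w2[j] * h * (1 - h)   # backprop through hidden
            self.w2[j] -= lr * d_out * h
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * d_h * xi
            self.b1[j] -= lr * d_h
        self.b2 -= lr * d_out
        return -(y * math.log(p) + (1 - y) * math.log(1 - p))

xor = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
net = TinyNet()
loss_before = sum(net.train_step(x, y, lr=0.0) for x, y in xor)  # lr=0: just measure
for _ in range(5000):
    loss_after = sum(net.train_step(x, y) for x, y in xor)
```

The learning procedure is fully explicit here, yet after training, what the hidden units individually "mean" is still not something you can read off the weights, which is the distinction being argued above.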

>designed so computers would be bad at it
only an idiot thinks something like this is possible

A game without rigid rules or objective functions.

Did you just make a happy merchant reference?
oooooy

Cards Against Humanity

>dota is 1v1 mid

Poker, it's about betting

A computer would suck at Minecraft

All these answers you guys give are great and all, but exactly what metric would you use for "winning"? What you'll most likely end up with is human judges deciding what they like best. In Cards Against Humanity, the players are the judges.
All you'd really be proving is that humans are better at games involving understanding other humans. Likewise, computers could play games meant to understand other computers fairly easily, I'd imagine (although I am assuming, correct me if I am wrong).

If the goal is a game with a clear and straightforward objective and no subjective interpretation of success/failure conditions, I'd guess KSP would be a good candidate, specifically the design part. Say, design and fly a spacecraft from the original planet to moon A, mine X amount of material from moon A, then take off and land on the surface of moon B. I'd imagine it will take a while before a machine can do that.

>only an idiot thinks something like this is possible
^This basically.
For everyone saying you'd design it to be complicated or not easily accounted for with instructions, that's exactly why machine learning is a thing, to have programs learn based on exposure to examples in cases where it would be too difficult to come up with all the instructions for how to do the task they're doing.
And it works really well. If people can do a given task and they've done it enough to where you have examples to work with, then it's pretty much inevitable you will end up with a program that learns how to do the same task with equal to or greater than the skill of the human task completing population.

that's part of why ML is overhyped. it's still a bleep bloop win/lose algorithm, not true intelligence.

Yeah but look, poker has probabilities and whatnot, and if you play according to them that's like the 'safe' way of playing. Actual poker players know what's good and will bet a certain way just to fuck you up. I doubt a computer will be able to cope with that very well.

All you need is a reward signal, not necessarily a win/draw/lose condition; that's just the easiest and most objective way to design a reward signal. For instance, if you try to make an AI learn chess you might want to give it a reward for taking pieces and a punishment for losing pieces. But the objective of chess isn't to take pieces, it's to get checkmate. There is still someone who has to design and give out the reward signal, of course, so I would mostly agree that it might not be true intelligence yet.
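The reward-shaping idea described here can be sketched in a few lines. The numeric values are the conventional material weights plus an arbitrary terminal bonus, and the function name is invented for this example:

```python
# Dense reward (material swings) plus the sparse reward that actually
# matters (the game result). Standard piece values; 100 for checkmate
# is an arbitrary choice to make the real objective dominate.
PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def shaped_reward(captured=None, lost=None, checkmate=False, checkmated=False):
    """Reward for one move: material gained/lost, plus the terminal result."""
    r = 0.0
    if captured:
        r += PIECE_VALUE[captured]
    if lost:
        r -= PIECE_VALUE[lost]
    if checkmate:        # the actual objective of chess
        r += 100.0
    if checkmated:
        r -= 100.0
    return r
```

The mismatch the post describes is visible here: an agent maximizing this signal can prefer winning a queen over a line that forces mate, unless the terminal bonus outweighs any material haul.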

Bots easily dominated online poker back when that was popular / allowed in the US a decade ago, which is why they were banned everywhere if you got caught using one.
>Actual poker players know what's good and will bet a certain way just to fuck you up. I doubt a computer will be able to cope with that very well.
You are completely wrong and have it backwards. People who try to play by gut instinct get wrecked in the long term.
>I doubt a computer will be able to cope with that very well.
You don't understand ML much then. Even if you were right that "playing it safe" based on optimal moves is a bad strategy and playing based on gut instinct is a better strategy, programs could still very easily learn how to play similarly to the gut instinct players. You don't need to tell the program how to make its moves, you can just train it on massive datasets of past games and it'll learn to play similarly to however the best performing players behaved.
Programs are not in any way limited to just making the best / "rational" move each time in a game.

>it's still a bleep bloop win/lose algorithm
ML isn't one algorithm, it's a variety of different approaches and they aren't limited to game winning applications either.
>not true intelligence
That's such an obnoxious complaint.
Intelligence isn't some on/off switch. It's a collection of processes, many of which programs have already implemented.
If your idea of "intelligence" is some magic singular quality that either is or isn't present then that's your fault for not even beginning to try to sort out how it all works.
>Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'
General intelligence in a program won't happen because someone discovered how to add the "true intelligence" module to it. General intelligence isn't really a thing at all, it's a bad label for many different things getting lazily conceptualized as one skill. AI programs will continue to learn how to solve individual tasks into the future and they'll get more complicated / robust in doing things like conversing with customers in an IVR system. Some people (probably yourself included) will continue saying this "doesn't count" no matter how much applications like these advance, but hopefully this argument will begin to seem progressively more irrelevant as time goes by.

You don't think real poker players would beat a bot? Like in a tournament. Online poker is saturated with amateurs.

>Like in a tournament.
theguardian.com/technology/2017/jan/30/libratus-poker-artificial-intelligence-professional-human-players-competition
>An artificial intelligence called Libratus has beaten four of the world’s best poker players in a gruelling 20-day tournament that culminated late on Monday.
>The Brains vs Artificial Intelligence competition saw four human players – Dong Kim, Jason Les, Jimmy Chou and Daniel McAulay – spend 11 hours each day stationed at computer screens in the Rivers Casino in Pittsburgh battling a piece of software at no-limit Texas Hold'em, a two-player unlimited form of poker. Libratus outmanoeuvred them all, winning more than $1.7m in chips. (Thankfully for the poker pros, they weren't playing with real money)
>“We didn’t tell Libratus how to play poker. We gave it the rules of poker and said ‘learn on your own’,” said Brown. The bot started playing randomly but over the course of playing trillions of hands was able to refine its approach and arrive at a winning strategy.

>trillions of hands
Holy fuck
What if it just had an amount of experience comparable to professional poker players, thousands of hands (my estimate)?

If a human played trillions of hands of poker I'm sure the human would beat the computer no problem.

It wouldn't be as good then.
What people learn from less experience is better than what a program would learn from less experience, but as the program gets exposure to a much larger number of examples, its approach to learning becomes much better than the human one.

I don't think there's any evidence at all to support that expectation.
Human strategy has an advantage over machine learning if you limit both to small amounts of experience because it offers a shortcut to brute forcing, but at the scale of trillions of games that shortcut advantage would go from helpful to limiting.

How are you surprised that a computer can simulate poker games at speeds many multitudes faster than a human can play them? AlphaZero became a master at chess in 4 hours, but during that time span it played, iirc, 44 million games. Humans have evolved to become good at estimating with little experience, so given equal experience a human would win with very high probability; however, machines aren't as constrained in regard to speed as a human is. To make an AI that would be considered truly intelligent, I would say it needs to be able to learn more from less, just like a human can. There are some methods for this, though the ones I've heard of build an internal state-transition model and then just run the same algorithm on that model.

There are high efficiency learners already

science.sciencemag.org/content/358/6368/eaag2612

Everyone who doubts, mocks, or challenges the assertion of AI by using current metrics is a fucking idiot. They will be better than us at everything soon.

A game in which you have to win the Turing test over long periods of time. Waow, so hard.

Can degrasposting please become a thing?

if the computer is learning you could see what cards won the most often against what card, and then use these stats to win
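A literal take on this stat-tracking idea; the card names and the optimistic prior for unseen cards are just illustrative choices:

```python
from collections import defaultdict

class CardStats:
    """Count how often each card wins when played; pick by observed rate."""
    def __init__(self):
        self.plays = defaultdict(int)
        self.wins = defaultdict(int)

    def record(self, card, won):
        self.plays[card] += 1
        if won:
            self.wins[card] += 1

    def win_rate(self, card):
        # unseen cards get an optimistic 0.5 prior so they still get tried
        if self.plays[card] == 0:
            return 0.5
        return self.wins[card] / self.plays[card]

    def best(self, hand):
        return max(hand, key=self.win_rate)

stats = CardStats()
stats.record("dragon", True)
stats.record("dragon", True)
stats.record("goblin", False)
print(stats.best(["dragon", "goblin"]))   # "dragon"
```

This is exactly the exploitable pattern the post describes: once the counter-player knows the bot picks by historical win rate, the bot's choices become predictable.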

>Turing test
What do you think CAPTCHA stands for?
See:

Clearly I meant the traditional definition of a Turing test, not an automated version that's designed to be as unobtrusive as possible. Furthermore, I explicitly specified "over long periods of time" precisely to filter out some laughable publications in which people communicated with "an Indian boy who doesn't know English" for two minutes.

>laughable
Yeah, like how Bobby Fischer laughed at how bad chess AI was. And today the best human Elo ratings are hopelessly below chess AI program Elo ratings.
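For reference, an Elo gap converts to an expected score via the standard logistic formula; the ratings plugged in below are illustrative, not exact:

```python
# Standard Elo expected-score formula: a 400-point gap means
# roughly 10:1 odds for the stronger player.
def elo_expected(rating_a, rating_b):
    """Expected score (0..1) of player A against player B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# e.g. a ~2850 human vs a ~3500 engine:
print(round(elo_expected(2850, 3500), 3))   # ≈ 0.023
```

So at a 650-point gap the human scores about 2% per game, which is what "hopelessly below" cashes out to.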

>not true intelligence
I wouldn't go that far. I just think it's naive to think we can realistically use ML to solve any kind of problem a human can solve, so yes, it is overhyped a bit, but still very useful.
I still think it'll take 50 years to get a machine to solve problems like the one I mentioned.

The question was what game, or what archetype of games would computers be measurably bad at.
You're correct in pointing out the games suggested are games with characteristically human judgements, but fail to realize that's exactly the idea.
While you see it as cheating, games of human understanding are quantifiably harder for computers, not simply because the decisions can be incredibly complex or ostensibly erratic, but because they are non-binary. Take Cards Against Humanity. Every round isn't a simple yes/no depending on a set of conditions the way sudoku or chess is. Even if you gave a computer extensive information on the history and inner workings of humor, it would still require information on the personality of the people you're playing with, and further, the computer must be able to pick up on the nuances of the situation, the mood, and generally, how each person is thinking/feeling so far.

These tasks are all made painfully simple with human characteristics like empathy, but how would you go about programming that? Could a computer ever understand a human? We can't even understand how a computer learns.

I wish I could have access to the full article. I thought it would take a few more years for serious new methods for better data efficiency to come out. Would be interesting to see if it could be used on things other than solving captchas.

>These tasks are all made painfully simple with human characteristics like empathy
That's not true at all. You only think things like empathy, understanding, "common sense," etc. are simple because you haven't even begun dissecting what actually makes them work. They're incredibly complicated, we just aren't given subjective access to all the details because they're not relevant to evolutionary fitness, having access to the bottom line of "I feel X" is all we need to get by.

I didn't say it was cheating, I just pointed out that it's basically the subset of all games involving understanding humans, but that's it (from what they mentioned), and that it's a bit like computers (of similar software) understanding other computers. It's still a useful problem to solve, but it's a limited subset. Now a computer/intelligence proficient at understanding any kind of manifestation of intelligence with similar problem-solving abilities, at least in terms of problem scale (hard to define, I know), would be a truly impressive achievement.

I already agree with everything else you pointed out, maybe I didn't make my point clearly.

I don't think he meant that they ARE simple but that it's simple for us to use as humans, even if they're not simple to truly, fundamentally understand.

That isn't fully image recognition

this
when you're searching for the source and it doesn't have the image indexed by other means it might tell you something useless like "leg" or nothing at all

...

computers suck at everything as of present times (besides doing simple things really fast)
they need soooooo much help
if you give a computer the same constraints a human would have, it'd be fucked