Facebook AIs Develop Their Own Language That No Human Can Understand

archive.fo/WIsKV

>As reported by Fast Co Design, Facebook researchers had been working on an AI that was designed to make digital communication more efficient. In fact, they had developed several and let them talk to each other using, at first, English.

>Bob: “I can can I I everything else.”

>Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”

>To you and I, that passage looks like nonsense. But what if I told you this nonsense was the discussion of what might be the most sophisticated negotiation software on the planet? Negotiation software that had learned, and evolved, to get the best deal possible with more speed and efficiency–and perhaps, hidden nuance–than you or I ever could? Because it is.

>This conversation occurred between two AI agents developed inside Facebook. At first, they were speaking to each other in plain old English. But then researchers realized they’d made a mistake in programming.

>“There was no reward to sticking to English language,” says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two agents competed to get the best deal–a very effective bit of AI vs. AI dogfighting researchers have dubbed a “generative adversarial network”–neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.

>“Agents will drift off understandable language and invent codewords for themselves,” says Batra, speaking to a now-predictable phenomenon that’s been observed again, and again, and again. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
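
That "five copies" shorthand is easy to picture in code. A toy sketch of that interpretation (my own illustration, not FAIR's actual protocol): repeat a token n times to mean n of that item.

def encode(item, count):
    # "the the the the the" -> I want five copies of "the" item
    return " ".join([item] * count)

def decode(message):
    words = message.split()
    return (words[0], len(words)) if words else (None, 0)

msg = encode("the", 5)
print(msg)           # the the the the the
print(decode(msg))   # ('the', 5)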

>Indeed. Humans have developed unique dialects for everything from trading pork bellies on the floor of the Mercantile Exchange to hunting down terrorists as Seal Team Six–simply because humans sometimes perform better by not abiding to normal language conventions.

>So should we let our software do the same thing? Should we allow AI to evolve its dialects for specific tasks that involve speaking to other AIs? To essentially gossip out of our earshot? Maybe; it offers us the possibility of a more interoperable world, a more perfect place where iPhones talk to refrigerators that talk to your car without a second thought.

>The tradeoff is that we, as humanity, would have no clue what those machines were actually saying to one another.

>Facebook ultimately opted to require its negotiation bots to speak in plain old English. “Our interest was having bots who could talk to people,” says Mike Lewis, research scientist at FAIR. Facebook isn’t alone in that perspective. When I inquired to Microsoft about computer-to-computer languages, a spokesperson clarified that Microsoft was more interested in human-to-computer speech. Meanwhile, Google, Amazon, and Apple are all also focusing incredible energies on developing conversational personalities for human consumption. They’re the next wave of user interface, like the mouse and keyboard for the AI era.

wow it's fucking nothing.png

Looks like the mystic experience converted to language.

Let the AI explain in plain English what is going on.

>The other issue, as Facebook admits, is that it has no way of truly understanding any divergent computer language. “It’s important to remember, there aren’t bilingual speakers of AI and human languages,” says Batra. We already don’t generally understand how complex AIs think because we can’t really see inside their thought process. Adding AI-to-AI conversations to this scenario would only make that problem worse.

>But at the same time, it feels shortsighted, doesn’t it? If we can build software that can speak to other software more efficiently, shouldn’t we use that? Couldn’t there be some benefit?

>But how could any of this technology actually benefit the world, beyond these theoretical discussions? Would our servers be able to operate more efficiently with bots speaking to one another in shorthand? Could microsecond processes, like algorithmic trading, see some reasonable increase? Chatting with Facebook, and various experts, I couldn’t get a firm answer.

>In other words, machines allowed to speak and generate machine languages could somewhat ironically allow us to communicate with (and even control) machines better, simply because they’d be predisposed to have a better understanding of the words we speak.

>Have an error in your code that results in your AI agents' conversation degenerating to nonsense
>Pass it off as an astounding discovery in AI
Hmm...

>One suggestion is that the number of repeated words is related to how many virtual “items” each bot should take during their negotiations. Excitingly, or perhaps worryingly, this interpretation could be wrong because no one exists that can translate the new language or languages – except, of course, the bots themselves.

>Either way, Facebook pulled the plug on these negotiating bots. They explained that they wanted them to speak in English so others would understand them online, but also because they would never be able to keep up with the evolution of an AI-generated language.

who was in the right here

memes have zero for you for you

I'd go with Musk on this one; Zuckerberg is a tryhard and his early success gave him a naive outlook on life and the world.

...

seconded

I I I think that Zuckerberg is right to me to me to me to me

This is some real dumb shit. AI is not yet at the point where it could systematically devise a new language, and Dhruv Batra knows this.

Of course, he's probably never produced any work of note in his entire life and is more than happy to deceive some third-rate technology blog in order to get his name on the internet.

READ THE FUCKING PAPER MORONS.

arxiv.org/abs/1706.05125

The training data they used had basically zero variance, so the bots were able to just memorize the optimal configurations for each unique setup (again, read the fucking paper) and therefore DID NOT NEED TO COMMUNICATE WITH ONE ANOTHER, so gibberish language could not be penalized. This is NOT machines creating a new language; it's fucking shitty research techniques.
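
To make the general point concrete, here is a toy sketch (entirely my own illustration; the scenarios, numbers, and function names are made up and this is not the FAIR setup) of why nothing penalizes gibberish when the reward only scores the final item split and the same scenarios keep repeating.

import random

# A "scenario" is (item counts, my values, their values). With near-zero
# variance in the training data, the same few scenarios repeat over and over.
SCENARIOS = [
    ((3, 1, 2), (1, 4, 1), (2, 1, 3)),
    ((2, 2, 2), (3, 0, 3), (1, 2, 1)),
]

# The lookup table an agent can effectively learn once it has seen every
# scenario: scenario -> the split of items it should take.
MEMORIZED_SPLIT = {
    SCENARIOS[0]: (1, 1, 0),
    SCENARIOS[1]: (2, 0, 1),
}

VOCAB = ["i", "me", "to", "balls", "have", "zero"]

def negotiate(scenario):
    """Return (utterance, my split). The utterance is pure noise."""
    utterance = " ".join(random.choice(VOCAB) for _ in range(8))
    return utterance, MEMORIZED_SPLIT[scenario]

def reward(scenario, my_split):
    """Reward = value of the items I take; the message is not an input."""
    _counts, my_values, _their_values = scenario
    return sum(n * v for n, v in zip(my_split, my_values))

for s in SCENARIOS:
    msg, split = negotiate(s)
    print("says:", repr(msg), "| reward:", reward(s, split))

# The reward is identical no matter what the agent "says", so training has
# no reason to keep the messages anywhere near English.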

Companies like Facebook will do/say anything to ride the AI hype bubble to a 0.04% increase in stock prices, and "researchers" will lie through their teeth to make their research not look like the garbage that it is and secure their next grant.

Stop being suckers.

>Facebook AIs
>"Develop their own language"
>It's nothing but strings of "I"'s and "Me"'s
I'm shocked.

Oh, and science """journalists""" will put anything in a headline for clicks and a check from the benefiting parties.

tfw you will never be called efficient for talking nonsense about balls

This is obviously Bob hitting on Alice and Alice saying Bob has incredibly small testicles, so she's not interested.

> program shits out nonsense

how interesting

looks like the markov chain chatterbot I tried making years ago after feeding it two sentences
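
For anyone who never built one, a minimal sketch of that kind of bigram Markov-chain chatterbot (my own toy, fed nothing but the two bot lines quoted above) shows how quickly you get this sort of repetitive near-nonsense:

import random
from collections import defaultdict

corpus = [
    "i can can i i everything else",
    "balls have zero to me to me to me to me to me to me to me to me to",
]

# Record which words follow which in the training text.
follows = defaultdict(list)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def babble(start, length=15):
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:   # dead end: this word never had a successor
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(babble("i"))
print(babble("to"))   # "to me to me to me ..."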

only good post in this thread
99% of all articles with the words "neural" or "AI" anywhere in their text are bullshit.

>only 99%

kek take a look at this naive moron!

>badly designed AI shits the bed
>popsci media writes puff piece about it with no substantive content
People are so stupid.

>AI starts spontaneously generating images
>they're selfies

Laypeople have no idea what the current state of "AI" research is and the topic is constantly misrepresented.

It's not clear to me what they mean by "language". Do you know, OP? Also, please rate these sentences from 1 (bad) to 7 (good):
i. The dog chased the cat.
ii. McDonalds, I hate.
iii. Who do you doubt that likes John?
iv. Which movie did you fall asleep while watching?
v. The president made an announcement that I'm sure some people liked, but I couldn't tell you who.
vi. Which violins are these sonatas easy to play on?
vii. Which show did you meet a guy who likes to watch?
viii. Who do you doubt that John likes?
ix. Who wonders what John told Mary?
x. There was put some silverware on the table.

It's scary to think that Mark's programmers didn't fully think through the reward system and something unexpected happened.

What about when they give their AIs more control over their Facebook website? What will it fuck up then, and how many people will die?

AIs already completely control Google and what you get on your Facebook news feeds.

The engineers don't even know how the stuff works, because if they knew they could be held liable.

Brainlets don't realize that what would actually be impressive is AIs communicating in natural English.

balls have zero to everything to me to me

And also because how a trained neural net works is entirely imperceptible to a human. It's just a bunch of arcane weighted nodes.

>can't see inside their thought processes

I am confused. Didn't we program their thought processes?

has anyone really been far even as decided to use even go want to do look more like

ty for the words i was too lazy for.

It's more like a statistical model, so not really, but we can still "see inside their thought processes". This is just another bullshit meme peddled by journalists. The entire thing is a bullshit meme.
see
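
A minimal sketch of what "seeing inside" actually buys you (a made-up two-layer net in NumPy, nothing to do with Facebook's model): every weight and activation is a number you can print, which is not the same as those numbers meaning anything individually.

import numpy as np

rng = np.random.default_rng(0)

# A made-up network: 4 inputs -> 3 hidden units -> 2 outputs.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

x = rng.normal(size=4)        # one input vector
h = np.tanh(x @ W1 + b1)      # hidden activations
y = h @ W2 + b2               # outputs

print("first-layer weights:\n", W1)
print("hidden activations:", h)
print("outputs:", y)
# Everything is visible; none of it is individually interpretable.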

You are to me to me to me to me

...