Machine Translation Reaches Human Levels Ahead of Predictions

blogs.microsoft.com/ai/machine-translation-news-test-set-human-parity/

Researchers in the company’s Asia and U.S. labs said that their system achieved human parity on a commonly used test set of news stories, called newstest2017, which was developed by a group of industry and academic partners and released at a research conference called WMT17 last fall. To ensure the results were both accurate and on par with what people would have done, the team hired external bilingual human evaluators, who compared Microsoft’s results to two independently produced human reference translations.

Xuedong Huang, a technical fellow in charge of Microsoft’s speech, natural language and machine translation efforts, called it a major milestone in one of the most challenging natural language processing tasks.

“Hitting human parity in a machine translation task is a dream that all of us have had,” Huang said. “We just didn’t realize we’d be able to hit it so soon.”

Attached: c0b98f4246264fc0e2df225b338be898--freckles-pretty-people.jpg (461x700, 51K)
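
For anyone wondering what "human parity" means operationally in the article: here is a minimal sketch in Python, assuming evaluators assign each candidate translation an adequacy score and that parity means the machine's mean score is statistically indistinguishable from a human reference's. The 0-100 scale and the scores below are made-up illustrations, not Microsoft's actual protocol or data.

```python
# Minimal sketch of a human-parity check: bilingual evaluators score each
# candidate translation for adequacy, and "parity" means the machine output's
# mean score is statistically indistinguishable from the human references'.
# The 0-100 scale and the toy scores are illustrative assumptions only.
from statistics import mean, stdev
from math import sqrt

machine_scores = [78, 85, 90, 70, 88, 82, 91, 76]  # evaluator scores for MT output
human_scores = [80, 84, 87, 72, 90, 79, 93, 78]    # scores for a human reference

def mean_diff_ci(a, b, z=1.96):
    """Approximate 95% confidence interval for the difference in means."""
    diff = mean(a) - mean(b)
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return diff - z * se, diff + z * se

lo, hi = mean_diff_ci(machine_scores, human_scores)
print(f"MT mean = {mean(machine_scores):.1f}, human mean = {mean(human_scores):.1f}")
print("parity claim holds (CI for the difference straddles 0):", lo <= 0 <= hi)
```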

How does one actually train an ML model to translate? I thought you needed a clear "win" state to train one properly. How does it know when one sentence is translated more correctly than another when even humans will disagree on expressions and things?

I guess that's the point of natural language processing.

Pro tip
It doesn't even come close to actual semiotic translation that humans do, and never will, because it is based on a logic that can't do such things.
Processing is not translation.
Once again computer scientists fuck everything up with their autism.

>How does one actually train a ML model to translate?
You have a data set with known answers.
>How does it know when one sentence is translated more correctly than another when even humans will disagree on expressions and things?
They answer this in the article:
>To ensure the results were both accurate and on par with what people would have done, the team hired external bilingual human evaluators, who compared Microsoft’s results to two independently produced human reference translations.
So insofar as the machine translation might be something an evaluator wouldn't find ideal, the same situation could come up with one of the human reference translations being evaluated. The goal isn't to be perfect, it's to be rated equal to or greater than the human translations by the evaluators. In general it learns the same way we learn, which is through exposure to examples.

Also, if some way of translating a phrase is atypical, then by definition it will tend to be underrepresented in the known data set, and a more typical way of translating it will tend to be better represented.
So again, not that different from the general way we learn what sounds normal vs. what sounds stilted.
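
A toy sketch of the "more represented in the data set" point, assuming a tiny invented parallel corpus: count how often each target phrase appears opposite a given source phrase and prefer the most frequent one. Real systems like Microsoft's are neural sequence-to-sequence models that learn this distributionally rather than from a lookup table, so this is only an illustration of why atypical renderings lose out.

```python
# Count which target phrase most often appears opposite a given source phrase
# in a (tiny, made-up) parallel corpus, then pick the most frequent option.
# Atypical renderings are rarer in the data, so they are never chosen.
from collections import Counter, defaultdict

parallel_corpus = [
    ("guten morgen", "good morning"),
    ("guten morgen", "good morning"),
    ("guten morgen", "good morrow"),   # atypical rendering: rare in the data
    ("wie geht es dir", "how are you"),
    ("wie geht es dir", "how are you"),
]

counts = defaultdict(Counter)
for source, target in parallel_corpus:
    counts[source][target] += 1

def translate(source_phrase):
    """Return the most frequently observed translation, if any."""
    options = counts.get(source_phrase)
    return options.most_common(1)[0][0] if options else None

print(translate("guten morgen"))     # -> "good morning" (seen twice, not once)
print(translate("wie geht es dir"))  # -> "how are you"
```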

muh magical human brain

Must be a bitch to collect a dataset of correct translations. I'm surprised at the speed of their development. Exciting stuff though user.

>Computers understand what words mean
Oh sweetheart

Attached: PeirceStandingFistOnHip(2).png (737x1768, 990K)

muh chinese room

>Implying
Searle doesn't know what he is talking about either
>Muh mystery

>this whole thread
Oh, sweet honeys..

>More shitposts from the semiotics faggot
It performed the translation task at human parity. Your stance is poorly defined garbage and I guarantee it will continue to fall into obscurity as a historical footnote on misguided attempts to approach the AI enterprise as long as programs get results like this. People very similar to you were saying programs couldn't ever beat human players at chess not too long ago. You can cry all you want about how it doesn't count, but at the end of the day results are what will count, not cries of "machines can't know nuffin" or "m-muh meaning!"

Hey looks like I caught you right as you replied.
Personally I think this shit is cool. It's not translation, and it's nothing like how humans translate words or any signs. Actually it's pretty evident that genes also translate in the same way.
You probably don't even really appreciate what semiotic translation is.
You're stupid, don't understand what I am talking about, and should piss off.

I'm just glad the people like liberal women are losing their jobs. I think it is funny. Fuck em.

I just care about results. Trying to approach intelligence and learning in terms of symbolic representation was a dead end, that's why everything is ML now. Nobody really sits there and figures out exact symbolic representations in everyday life, so it makes sense that approach would be as shit as it was in trying to recreate tasks people can do artificially. Learning is best done by exposure, even outside of AI people know that when it comes to learning a new language for example. The old method of teaching kids all the grammar and syntax and trying to make them conceptualize the new language consistently nets you worse results than simple immersion would.
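
A toy illustration of that symbolic dead end: a word-for-word translator over a hand-written dictionary (the entries are invented for this example), which renders a German idiom literally instead of idiomatically because each word is mapped in isolation.

```python
# Toy hand-written ("symbolic") word-for-word translator. Idioms fall apart
# because each word is mapped in isolation. Dictionary entries are invented.
GERMAN_TO_ENGLISH = {
    "ich": "I", "verstehe": "understand", "nur": "only", "bahnhof": "station",
}

def word_for_word(sentence):
    """Translate each token independently with a fixed dictionary."""
    return " ".join(GERMAN_TO_ENGLISH.get(w, f"<{w}?>") for w in sentence.lower().split())

# "Ich verstehe nur Bahnhof" idiomatically means "it's all Greek to me",
# but the literal rule-based output is "I understand only station".
print(word_for_word("Ich verstehe nur Bahnhof"))
```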

Think about music. Our brains didn't develop music via evolution and specific genetics. It's simply brain pattern recognition of sound patterns.

A lot of intelligence "just works". It's not going to be some super complicated thing. Once the most basic general working algorithm for AGI exists, it will blast off.

I'm not trying to shit on AI (ALTHOUGH AI IS NOT INTELLIGENT AND NEVER WILL BE, BEYOND ITS SYMBIOTIC ROLE IN OUR EVOLUTION. AS TECHNOLOGY, AI IS INTELLIGENT ONLY TO THE EXTENT IT IS POSSESSED BY OUR INTELLIGENCE. IT IS BEYOND THE LIMITS OF SYMBOLIC LOGIC TO PRODUCE ACTUAL INTELLIGENCE)
But I agree machine "learning" is cool and surely useful.

>AS TECHNOLOGY, AI IS INTELLIGENT ONLY TO THE EXTENT IT IS POSSESSED BY OUR INTELLIGENCE
>AlphaGo can win games way beyond the capacity of any of its programmers
This is where you backpedal and say that go/chess have nothing to do with intelligence.

>translating gookanese into any non-garbage language
fuckin lies

Attached: b36.png (420x420, 7K)

>This is where you backpedal and say that go/chess have nothing to do with intelligence.
Yeah. This is really old news, chess grandmasters were laughing at how bad AI was and how it could never beat mighty organic human brains just a few decades back. Then of course it not only stopped being a joke and not only started playing on par with humans, but instead began dominating even the greatest of human chess players. Consistently too, over 40 different chess programs today are running a 3K+ ELO:
computerchess.org.uk/ccrl/4040/
Number of human chess players to rate a 3K+ ELO: 0.
Only about a dozen human chess players have ever registered a 2800+ rating. Bobby Fischer never even broke 2800 (his peak FIDE rating was 2785), nor have most of the best chess grandmasters historically.
Basically I will bet you all my life savings that any random game played today between the best ranked chess AI and a human of your choice will result in that chess AI winning. Would be easy fucking money, bio-brains cannot compete. If you look at the history of human / machine chess matches, humans have mostly given up. Last few major matches all involved handicapping the machine player because otherwise it would just be slaughter.
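
For scale, the standard Elo expected-score formula converts a rating gap into an expected share of points. The 3500 and 2850 ratings below are illustrative round numbers, not official figures for any particular engine or player.

```python
# Standard Elo expected-score formula: what a rating gap means in practice.
def expected_score(rating_a, rating_b):
    """Expected share of points for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

engine, human = 3500, 2850  # illustrative ratings, not official numbers
print(f"Expected score for the engine: {expected_score(engine, human):.3f}")
# ~0.977: the engine is expected to take roughly 97-98% of the points.
```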

Chess isn't that impressive, as brute forcing is essentially cheating.

Go, on the other hand, is a much more compelling argument.

didn't deepmind take #1 chess spot?

Okay. You continue to miss my point entirely.

Not when a computer plays them; it's just processing.
Show me one computer that knows it is playing chess, or knows anything at all. You literally cannot. Binary logic can't do that kind of relationship.

>Show me one computer that knows it is playing chess
>or knows anything at all
Provide a rigorous definition for "knows." No appeals to common sense or intuition; show exactly what that word you're using is meant to entail. I don't think you can, but if you hypothetically could, then you'd be able to show exactly what the program is lacking. And if you can't, then you have no business bringing it up as a complaint until/unless you figure out exactly what the criterion for success is here.

prove you know things faggot

still got the job done, and it's not plain brute-forcing, brainlet

AlphaZero doesn't brute-force, it's pattern recognition

NO
ABSOLUTELY NOT.
S.M.H

The Chinese room is the most retarded thought experiment I've ever heard of.
The human doing the translation does not understand Chinese, but that doesn't matter, because the human is the equivalent of a computer. The software he is running (manually), though, does in fact understand Chinese.

Exactly.

So ironically you've confirmed that while the program might or might not "know" something, you definitely don't "know" what the words you're using are supposed to mean.

Holy shit that's a great point, thanks user

yeah, whoever came up with this couldn't even think that far
fucking philosophers

actually this is even below philosopher tier, because it literally only takes a "read a bunch of sci-fi" level of understanding to see that it doesn't have a point

Actually I'm struggling to avoid wasting away in this shithole when I have important things to do with my life.
You of all people should understand, user.
Straining my mind to write a text wall explaining a very difficult topic that you seem to have almost no background in would be a waste for both of us and counterproductive. I get that this is what keeps me posting here anyway, but I'm sick of it; it's spring and I have a life to live.
I'm sure you are smart and more than capable of figuring out what I am talking about from the various name drops I made and their context, if you truly want to. If you only want to read what I have to say in order to argue against it, too bad; I refuse to take part in such things anymore.

Attached: Sd7AQsG.gif (720x404, 609K)

>being scared of ML

Ask me how I know you know nothing about ML. Big buzzword.

>freckles pretty people
>she has no freckles
what did op mean by this

And people translate things wrong; people miss euphemisms or allusions or figures of speech.
Real-time translation will be pretty nice and useful though.

Who's the slut?

If it's that much of a mental strain to even articulate what you believe in let alone why you believe it then you should probably be a lot less confident your beliefs make sense.

who are you quoting

Current ML techniques, sure.
That's like asking whether you're scared of 1% of the genome being mapped.

People with actual brains can see that the current ML techniques don't encompass all possibilities.

The better question is this: look at ML right now. No one has an "AGI" architecture that is merely hardware-constrained, which means the discovery of such an algorithm, or something closer to it, could pop up at any moment. And unlike hardware improvement, the timing of that development isn't predictable.

Meaning yes, reasonable people, aka not low-IQ animal fucks, take Elon Musk for example, understand the potential and that it could arrive at any moment. Given how big the potential is, even small chances should make you tremble as if before God. That word, God, does more to describe runaway AGI than any other word we have.

this

t. Translator about to be unemployed

Lemme know when it can scan a manga and produce passable English text.

>Xuedong Huang
But chinks already translate at machine level.

i buhlieb OP, i buhlieb!
trabsper me in muh minds ist reddit

Attached: AI TRANSFER mindfuck.png (1809x910, 2.15M)

>AGI: I'll watch anime instead of doing your shit

learning japanese was a mistake, lulz

No way; pretty much half the content is lost in translation, and no translation can fix that.