Google is increasingly adding machine learning to many of its core products. The latest to benefit is Translate, with a new Google Neural Machine Translation (GNMT) system that reduces errors by 55% to 85%. The system already handles 18 million translations per day for the notoriously difficult Chinese-to-English language pair.
Compared to the previous phrase-based production system, GNMT reduces translation errors by 55% to 85% on several major language pairs. Google tested the system by conducting side-by-side comparisons of sampled sentences from Wikipedia and news websites with the help of bilingual human raters.
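The reported 55% to 85% figure is a relative error reduction: the drop in errors compared to the old system's error count, not an absolute percentage-point change. A minimal sketch of the arithmetic, using hypothetical rater counts (the real evaluation used bilingual humans scoring sampled sentences; these numbers are illustrative only):

```python
def relative_error_reduction(old_errors: int, new_errors: int) -> float:
    """Percent reduction in errors relative to the old system's error count."""
    return 100.0 * (old_errors - new_errors) / old_errors

# Hypothetical example: raters flag 40 errors per 100 sentences from the
# phrase-based system but only 10 from the new system.
reduction = relative_error_reduction(old_errors=40, new_errors=10)
print(f"{reduction:.0f}% fewer errors")  # 75% fewer errors
```

A 75% reduction like this would sit within the 55% to 85% range Google reports across its major language pairs.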
While GNMT is a huge leap forward, machine translation in general still suffers from notable errors that a human translator would never make: dropping words, mistranslating proper names or rare terms, and translating sentences in isolation rather than considering the context of the surrounding paragraph or page.
Until recently, neural networks were not even fast enough for real-world deployment in products used by actual people. However, Google's machine learning toolkit TensorFlow and its Tensor Processing Unit (TPU) hardware made it possible to put the new system into production. Together, these advances provide enough computational power to serve the GNMT models while meeting the stringent latency requirements of Google Translate.
The Google Translate mobile and web apps now use GNMT for 100% of machine translations from Chinese to English. Over the coming months, Google plans to roll GNMT out to more of the roughly 10,000 other language pairs that Translate supports.