Machine translation (MT) technology has gone through several phases. It began with rule-based MT in the 1950s and moved on in the 1990s to statistical machine translation, which drew on the huge bodies of data that advances in data storage had made it possible to exploit. From 1990 to 2010, the focus was on developing and refining the capabilities of the data-driven statistical model. The latest trend, neural machine translation, has created considerable excitement in the MT industry because of the quality improvements it promises.
Neural MT translates text using neural networks, computational structures loosely modeled on the interconnected neurons of the human brain. Initial results indicate that languages such as Arabic, Korean and Japanese, which have proven particularly challenging for MT to date, show significantly greater performance improvements than languages that conventional MT systems handle more easily. Both Microsoft and Google translation services now use neural MT. While there will long remain a need for the quality of translation that only humans can produce, MT can fill the gap when huge amounts of text need to be translated quickly, or even in real time. For now, though, MT output is still largely of the “quick and dirty” level of quality.
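To give a sense of the encoder-decoder idea underlying neural MT, here is a deliberately tiny sketch in Python. Everything in it is invented for illustration: the vocabulary, the random stand-in weights, and the greedy decoding loop. Real systems learn their parameters from millions of sentence pairs and use far richer architectures (recurrent layers, attention, transformers); this toy only shows the overall shape of the computation.

```python
import math
import random

random.seed(0)

# Hypothetical mixed source/target vocabulary, purely for illustration.
VOCAB = ["<s>", "</s>", "hello", "world", "bonjour", "monde"]
DIM = 8

# Random vectors stand in for parameters a real system would learn.
embed = {w: [random.uniform(-1, 1) for _ in range(DIM)] for w in VOCAB}
W_out = {w: [random.uniform(-1, 1) for _ in range(DIM)] for w in VOCAB}

def encode(tokens):
    """Fold the source sentence into one fixed-size state vector."""
    state = [0.0] * DIM
    for tok in tokens:
        vec = embed.get(tok, embed["<s>"])
        state = [math.tanh(s + v) for s, v in zip(state, vec)]
    return state

def decode(state, max_len=5):
    """Greedily emit target tokens until </s> or max_len is reached."""
    out = []
    for _ in range(max_len):
        # Score every vocabulary item against the current state.
        scores = {w: sum(s * x for s, x in zip(state, W_out[w]))
                  for w in VOCAB}
        best = max(scores, key=scores.get)
        if best == "</s>":
            break
        out.append(best)
        # Feed the emitted token back in, as autoregressive decoders do.
        state = [math.tanh(s + v) for s, v in zip(state, embed[best])]
    return out

translation = decode(encode(["hello", "world"]))
print(translation)  # untrained weights, so the tokens are arbitrary
```

With untrained random weights the "translation" is meaningless, which is the point: the quality of neural MT comes entirely from training these parameters on large parallel corpora, not from the structure alone.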