Neural Machine Translation


It didn’t take too long for computers to learn how to beat humans at chess. By the late 1980s they were already biting and byting human players all around the board. The final victory of machine over mind came in 1997, when the supercomputer “Deep Blue” defeated world chess champion Garry Kasparov in a full match.

Extending that achievement into the area of automated translation has proven a much tougher proposition, however. The intricacies of chess strategies are mere tiddlywinks compared with the subtleties and nuances of language.

The first serious attempts to automate translation started in the 1950s, when a translator-free future was thought to be just around the corner. It didn’t work out.

Information technology made big leaps and bounds over the following three decades, and in the late 1980s statistical machine translation (SMT) was born. SMT uses large volumes of previously translated text to learn statistical models for specific language pairs, and it was the approach behind Google Translate when the service launched in the mid-2000s.
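To make the idea concrete, here is a minimal sketch of the phrase-based approach: translation is assembled piece by piece from a table of phrase pairs and probabilities learned from bilingual data. The phrases and probabilities below are invented for illustration, not taken from any real system.

```python
# Toy phrase table: source phrase -> list of (translation, probability).
# Real SMT systems learn millions of such entries from bilingual corpora;
# these values are made up for the example.
phrase_table = {
    "the house": [("la maison", 0.7), ("la casa", 0.3)],
    "is small": [("est petite", 0.8), ("es pequena", 0.2)],
}

def translate(phrases):
    """Greedily pick the most probable translation for each known phrase,
    passing unknown phrases through unchanged."""
    out = []
    for phrase in phrases:
        candidates = phrase_table.get(phrase, [(phrase, 1.0)])
        best, _prob = max(candidates, key=lambda c: c[1])
        out.append(best)
    return " ".join(out)

print(translate(["the house", "is small"]))  # la maison est petite
```

Even this tiny example shows the approach's limits: each phrase is translated in isolation, so agreement and word order across phrase boundaries are easy to get wrong.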

The technology has been refined and improved since then, but the hoped-for quantum leap to a fully automated and reliable translation system has yet to materialise. Human translators have still had their hands full “post editing” machine-translated text.

But there is a new kid on the block that might take humanity a step closer to automated translation.

Neural machine translation (NMT) starts from the same raw material as its predecessor, SMT – large volumes of previously translated text – but it builds on its experience as it goes along, using artificial neural networks that keep refining their internal model as they see more data. This “self-learning” makes the technology far more flexible and better able to cope with the inconsistencies and diversions of any language.

The output of NMT is said to be more natural-sounding and easier to edit than the sometimes clunky output of its predecessors. That is because it translates the meaning of entire sentences at once, rather than cobbling together individual words and phrases.

This is a big step forward, but there is a way to go yet. NMT still processes only one sentence at a time, so can’t yet take the wider context – let alone knowledge of the world – into account. And the system sometimes randomly adds or omits chunks of text for no apparent reason. In fact it’s not fully clear how NMT teaches itself, something that is simultaneously scary and exciting!

For now, the technology is in its infancy, but it definitely has its feet under the desk as the world’s biggest IT companies (Google, Microsoft, Facebook) start to work out ways to apply it.

It will not be replacing human translators any time soon, but we are watching developments carefully because NMT could bring some fundamental changes to what we translate and how we do it. The future of automated translation may well have arrived, but automatic translation requiring no human input definitely has not.

