In the news: Facebook is using new and faster translation methods, which also help catch violations

30th January 2018

Translation has been one of Facebook’s most important features. As the company says:

“Language translation is important to Facebook’s mission of making the world more open and connected, enabling everyone to consume posts or videos in their preferred language — all at the highest possible accuracy and speed.”

Last week Facebook announced a new language translation technique that works faster and more accurately than rival systems. It also helps Facebook catch problems across all languages more quickly.

The gains come from a novel convolutional neural network (CNN) approach to language translation that achieves state-of-the-art accuracy at nine times the speed of recurrent neural systems. Programs that detect posts violating Facebook’s policies will gain accuracy across more languages, Facebook says, supporting the platform’s ongoing work against hate speech, terrorism, and fake news.

AI-powered translation usually relies on recurrent neural networks (RNNs), the incumbent technology for text applications and the top choice for language translation because of their high accuracy. The new research uses convolutional neural networks (CNNs) instead. Though RNNs have historically outperformed CNNs at language translation tasks, their design has an inherent limitation, which can be understood by looking at how they process information.

RNNs translate text by reading a sentence in one language and predicting a sequence of words in another language with the same meaning. They operate in a strict left-to-right or right-to-left order, one word at a time.
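
A minimal PyTorch sketch (toy sizes and a GRU decoder chosen purely for illustration, not Facebook’s code) shows why this is hard to parallelise: each step of the loop needs the hidden state produced by the step before it.

```python
import torch
import torch.nn as nn

# Toy RNN decoder: each step consumes the state from the previous step,
# so the loop below cannot run its iterations in parallel.
embed = nn.Embedding(1000, 32)            # toy vocabulary of 1,000 words
rnn = nn.GRU(input_size=32, hidden_size=32)
proj = nn.Linear(32, 1000)                # maps hidden state back to words

hidden = torch.zeros(1, 1, 32)            # state carried strictly left to right
token = torch.tensor([0])                 # start-of-sentence token
translation = []
for _ in range(10):                       # one word per iteration, in order
    out, hidden = rnn(embed(token).view(1, 1, -1), hidden)
    token = proj(out).argmax(dim=-1).view(1)
    translation.append(token.item())
```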

CNNs, on the other hand, compute all elements of the data simultaneously, taking full advantage of GPU parallelism and making translation more efficient.

This style of computation is much better suited to the GPU hardware used to train most contemporary neural networks.
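
As a minimal illustration (a PyTorch sketch with toy sizes, not the actual architecture), a single convolution call produces features for every word in the sentence at once:

```python
import torch
import torch.nn as nn

# A 1-D convolution slides over every position of the sentence in one call,
# so the GPU computes features for all words simultaneously.
conv = nn.Conv1d(in_channels=32, out_channels=32, kernel_size=3, padding=1)
sentence = torch.randn(1, 32, 12)   # batch of 1, 32 features per word, 12 words
features = conv(sentence)           # all 12 positions processed at once
print(features.shape)               # torch.Size([1, 32, 12])
```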

Better translation with multi-hop attention and gating

A distinguishing component of this new architecture is multi-hop attention. An attention mechanism is similar to the way a person would break down a sentence when translating it. Instead of looking at the sentence only once and then writing down the full translation without checking it, the network takes repeated “glimpses” at the sentence to choose which words it will translate next, much like a human occasionally looks back at specific keywords when writing down a translation. Multi-hop attention is an enhanced version of this mechanism, which allows the network to make multiple such glimpses to produce better translations.
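
In rough code terms (a hypothetical sketch; the `glimpse` function, sizes, and three-hop count are invented for illustration, not fairseq’s implementation), each of several decoder layers takes its own look at the encoded source sentence:

```python
import torch

def glimpse(query, source):
    # Dot-product attention: score every source word against the query,
    # then return a weighted summary of the source sentence.
    scores = torch.softmax(query @ source.T, dim=-1)   # (1, source_length)
    return scores @ source                             # (1, features)

source = torch.randn(12, 32)   # encoded source sentence: 12 words, 32 features
state = torch.randn(1, 32)     # decoder state for the word being produced
for _ in range(3):             # three "hops": the source is re-examined per layer
    state = state + glimpse(state, source)   # each look refines the state
```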

Gating controls the information flow in the neural network. In every neural network, information flows through so-called hidden units. Facebook’s gating mechanism controls exactly which information should be passed on to the next unit, so that a good translation can be produced by taking into account the translation already produced so far.
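
A common form of such gating in convolutional sequence models is the gated linear unit (GLU), in which half of a layer’s output decides how much of the other half is passed on. A minimal PyTorch sketch (toy sizes, illustrative only):

```python
import torch
import torch.nn.functional as F

hidden = torch.randn(1, 64)                # output of one layer, 64 units
values, gates = hidden.chunk(2, dim=-1)    # split the units into two halves
passed_on = values * torch.sigmoid(gates)  # each gate in [0, 1] scales a value

# PyTorch ships this exact operation as F.glu:
assert torch.allclose(passed_on, F.glu(hidden, dim=-1))
```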

The work exists solely as research at the moment; it hasn’t yet been implemented in a Facebook product, but Facebook says that is likely to happen soon.

More information:

A novel approach to neural machine translation
