Xiaodong Liu, Kevin Duh, Liyuan Liu, Jianfeng Gao
We explore the application of very deep Transformer models to Neural Machine Translation (NMT). Using a simple yet effective initialization technique that stabilizes training, we show that it is feasible to build standard Transformer-based models with up to 60 encoder layers and 12 decoder layers. These deep models outperform their baseline 6-layer counterparts by as much as 2.5 BLEU, and achieve new state-of-the-art benchmark results on WMT14 English-French (43.8 BLEU and 46.4 BLEU with back-translation) and WMT14 English-German (30.1 BLEU). The code and trained models will be publicly available at: https://github.com/namisan/exdeep-nmt.
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Machine Translation | WMT2014 English-German | BLEU score | 30.1 | Transformer (ADMIN init) |
| Machine Translation | WMT2014 English-German | SacreBLEU | 29.5 | Transformer (ADMIN init) |
| Machine Translation | WMT2014 English-French | BLEU score | 46.4 | Transformer+BT (ADMIN init) |
| Machine Translation | WMT2014 English-French | SacreBLEU | 44.4 | Transformer+BT (ADMIN init) |
| Machine Translation | WMT2014 English-French | BLEU score | 43.8 | Transformer (ADMIN init) |
| Machine Translation | WMT2014 English-French | SacreBLEU | 41.8 | Transformer (ADMIN init) |
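The "ADMIN init" referenced in the results above is the initialization technique that stabilizes training of these very deep models: each residual connection is given a per-dimension scale that is set from the accumulated output variance of earlier sublayers during a short profiling pass. The sketch below illustrates the idea in PyTorch; the class and function names (`ScaledResidual`, `admin_profile`) are illustrative assumptions, not the actual API of the exdeep-nmt repository.

```python
# Minimal sketch of an ADMIN-style rescaled residual connection.
# Names and details here are assumptions for illustration; see the
# exdeep-nmt repository for the authors' implementation.
import torch
import torch.nn as nn


class ScaledResidual(nn.Module):
    """Residual connection  x_i = x_{i-1} * omega_i + f_i(x_{i-1}),
    where omega_i is a per-dimension scale fixed by a profiling pass
    and then trained jointly with the rest of the model."""

    def __init__(self, d_model: int):
        super().__init__()
        self.omega = nn.Parameter(torch.ones(d_model))

    def forward(self, x: torch.Tensor, branch_out: torch.Tensor) -> torch.Tensor:
        # x: residual stream, branch_out: output of the sublayer f_i(x)
        return x * self.omega + branch_out


@torch.no_grad()
def admin_profile(residuals, branch_outputs):
    """Profiling phase (run once on a batch before training):
    with omega = 1, record the output variance of each sublayer and set
    omega_i to the square root of the accumulated variance of all earlier
    sublayers, so the skip path grows with depth. Training then proceeds
    as usual with these initialized scales."""
    accumulated_var = 0.0
    for res, out in zip(residuals, branch_outputs):
        if accumulated_var > 0.0:
            res.omega.fill_(accumulated_var ** 0.5)
        accumulated_var += out.float().var().item()
```

Usage would be one forward pass in profiling mode to collect sublayer outputs, a call to the profiling routine, and then standard training; this is a sketch of the variance-based rescaling idea, not a drop-in replacement for the released code.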