Ben Krause, Emmanuel Kahembwe, Iain Murray, Steve Renals
We present a methodology for using dynamic evaluation to improve neural sequence models. Models are adapted to recent history via a gradient-descent-based mechanism, causing them to assign higher probabilities to recurring sequential patterns. In our comparisons, dynamic evaluation outperforms existing adaptation approaches. It improves the state-of-the-art word-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1 and 44.3 respectively, and the state-of-the-art character-level cross-entropies on the text8 and Hutter Prize datasets to 1.19 bits/char and 1.08 bits/char respectively.
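The core mechanism can be illustrated on a toy model. The paper applies gradient-descent adaptation to the parameters of trained LSTM language models during evaluation; the sketch below (function names and the unigram model are illustrative, not from the paper) applies the same idea to a categorical unigram model: after scoring each token, take one gradient step on that token's negative log-likelihood, so recently seen tokens gain probability.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_eval(logits, sequence, lr=0.5):
    """Toy dynamic evaluation: score each token, then adapt the
    unigram logits with one gradient-descent step on its NLL.
    Returns the average adapted NLL and the final logits."""
    logits = logits.copy()
    total_nll = 0.0
    for tok in sequence:
        p = softmax(logits)
        total_nll += -np.log(p[tok])
        # Gradient of -log p[tok] w.r.t. the logits is (p - onehot(tok))
        grad = p.copy()
        grad[tok] -= 1.0
        logits -= lr * grad
    return total_nll / len(sequence), logits
```

On a repetitive sequence, the adapted model's average NLL falls below the static model's, which is the effect the abstract describes: re-occurring patterns receive higher probability. In the real method the update is applied segment-by-segment to the full network weights, typically with a regularizer pulling parameters back toward the trained values.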
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Language Modelling | Penn Treebank (Word Level) | Test perplexity | 51.1 | AWD-LSTM + dynamic eval |
| Language Modelling | Penn Treebank (Word Level) | Validation perplexity | 51.6 | AWD-LSTM + dynamic eval |
| Language Modelling | Text8 | Bits per Character (BPC) | 1.19 | mLSTM + dynamic eval |
| Language Modelling | Hutter Prize | Bits per Character (BPC) | 1.08 | mLSTM + dynamic eval |
| Language Modelling | WikiText-2 | Test perplexity | 44.3 | AWD-LSTM + dynamic eval |
| Language Modelling | WikiText-2 | Validation perplexity | 46.4 | AWD-LSTM + dynamic eval |