Metric: ROUGE-L (higher is better)
| # | Model | ROUGE-L | Extra Data | Paper | Date | Code |
|---|---|---|---|---|---|---|
| 1 | RNES w/o coherence | 37.75 | No | Learning to Extract Coherent Summary via Deep Re... | 2018-04-19 | - |
| 2 | SWAP-NET | 37.7 | No | - | - | Code |
| 3 | HSSAS | 37.6 | No | A Hierarchical Structured Self-Attentive Model f... | 2018-05-20 | - |
| 4 | ML+RL ROUGE+Novel, with LM | 37.44 | No | Improving Abstraction in Text Summarization | 2018-08-23 | - |
| 5 | rnn-ext + abs + RL + rerank | 37.34 | No | Fast Abstractive Summarization with Reinforce-Se... | 2018-05-28 | Code |
| 6 | ML+RL, with intra-attention | 36.9 | No | A Deep Reinforced Model for Abstractive Summariz... | 2017-05-11 | Code |
| 7 | GAN | 36.71 | No | Generative Adversarial Network for Abstractive T... | 2017-11-26 | Code |
| 8 | Fastformer | 36.21 | No | Fastformer: Additive Attention Can Be All You Need | 2021-08-20 | Code |
| 9 | KIGN+Prediction-guide | 35.68 | No | - | - | - |
| 10 | Lead-3 baseline | 35.5 | No | SummaRuNNer: A Recurrent Neural Network based Se... | 2016-11-14 | Code |
| 11 | SummaRuNNer | 35.3 | No | SummaRuNNer: A Recurrent Neural Network based Se... | 2016-11-14 | Code |
| 12 | Tan et al. | 34 | No | - | - | - |
| 13 | words-lvt2k-temp-att | 32.65 | No | Abstractive Text Summarization Using Sequence-to... | 2016-02-19 | Code |
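ROUGE-L, the metric ranked above, scores a candidate summary by the length of its longest common subsequence (LCS) with the reference, combined into an F-measure. As a minimal sketch (using the common F1 variant with whitespace tokenization; published implementations may differ in tokenization, stemming, and the recall weight β):

```python
def lcs_len(a, b):
    # Standard dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    # F1 variant of ROUGE-L: precision and recall of the LCS length
    # against candidate and reference lengths, respectively.
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)
```

For example, `rouge_l_f1("the cat sat", "the cat sat on the mat")` gives LCS = 3, precision = 1.0, recall = 0.5, hence F1 ≈ 0.667. The table's scores are this quantity × 100, computed by each paper's own evaluation setup.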