Document Summarization on CNN / Daily Mail
Metric: ROUGE-2 (higher is better)
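ROUGE-2 measures bigram overlap between a generated summary and the reference, typically reported as an F1 score. A minimal sketch of that computation (an illustrative simplification; leaderboard numbers come from the official ROUGE toolkit, which adds stemming and bootstrap resampling):

```python
from collections import Counter

def rouge_2(candidate: str, reference: str) -> float:
    """Bigram-overlap F1 between candidate and reference.
    Simplified sketch: whitespace tokenization, no stemming."""
    def bigrams(text: str) -> Counter:
        tokens = text.lower().split()
        return Counter(zip(tokens, tokens[1:]))

    cand, ref = bigrams(candidate), bigrams(reference)
    if not cand or not ref:
        return 0.0
    # Counter intersection keeps the minimum count of each shared bigram.
    overlap = sum((cand & ref).values())
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A perfect match scores 1.0 and fully disjoint texts score 0.0; the leaderboard values below are these F1 scores scaled by 100.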
Results
| # | Model | ROUGE-2 | Extra Data | Paper | Date | Code |
|---|-------|---------|------------|-------|------|------|
| 1 | PEGASUS + SummaReranker | 22.55 | No | SummaReranker: A Multi-Task Mixture-of-Experts R... | 2022-03-13 | Code |
| 2 | Fourier Transformer | 21.55 | No | Fourier Transformer: Fast Long Range Modeling by... | 2023-05-24 | Code |
| 3 | T5-11B | 21.55 | Yes | Exploring the Limits of Transfer Learning with a... | 2019-10-23 | Code |
| 4 | GLM-XXLarge | 21.4 | No | GLM: General Language Model Pretraining with Aut... | 2021-03-18 | Code |
| 5 | Hie-BART | 21.37 | No | - | - | - |
| 6 | HAT-BART | 21.31 | No | Hierarchical Learning for Generation with Long S... | 2021-04-15 | - |
| 7 | BigBird-Pegasus | 21.11 | No | Big Bird: Transformers for Longer Sequences | 2020-07-28 | Code |
| 8 | MatchSum (RoBERTa-base) | 20.86 | No | Extractive Summarization as Text Matching | 2020-04-19 | Code |
| 9 | MatchSum (BERT-base) | 20.62 | No | Extractive Summarization as Text Matching | 2020-04-19 | Code |
| 10 | UniLM (Abstractive Summarization) | 20.43 | Yes | Unified Language Model Pre-training for Natural ... | 2019-05-08 | Code |
| 11 | BertSumExt | 20.34 | Yes | Text Summarization with Pretrained Encoders | 2019-08-22 | Code |
| 12 | BERTSUM+Transformer | 20.24 | Yes | Fine-tune BERT for Extractive Summarization | 2019-03-25 | Code |
| 13 | Scrambled code + broken (alter) | 19.84 | No | Universal Evasion Attacks on Summarization Scoring | 2022-10-25 | Code |
| 14 | NeuSUM | 19.01 | No | Neural Document Summarization by Jointly Learnin... | 2018-07-06 | Code |
| 15 | TaLK Convolutions (Deep) | 18.97 | No | Time-aware Large Kernel Convolutions | 2020-02-08 | Code |
| 16 | Selector+Pointer Generator | 18.74 | No | Mixture Content Selection for Diverse Sequence G... | 2019-09-04 | Code |
| 17 | Bottom-Up Sum | 18.68 | No | Bottom-Up Abstractive Summarization | 2018-08-31 | Code |
| 18 | TaLK Convolutions (Standard) | 18.45 | No | Time-aware Large Kernel Convolutions | 2020-02-08 | Code |
| 19 | Lead-3 | 17.7 | No | Get To The Point: Summarization with Pointer-Gen... | 2017-04-14 | Code |
| 20 | DynamicConv | 16.25 | No | Pay Less Attention with Lightweight and Dynamic ... | 2019-01-29 | Code |
| 21 | Synthesizer (R+V) | 16.24 | No | Synthesizer: Rethinking Self-Attention in Transf... | 2020-05-02 | Code |
| 22 | LightConv | 15.97 | No | Pay Less Attention with Lightweight and Dynamic ... | 2019-01-29 | Code |
| 23 | ML + RL (Paulus et al., 2017) | 15.82 | No | A Deep Reinforced Model for Abstractive Summariz... | 2017-05-11 | Code |
| 24 | C2F + ALTERNATE | 15.4 | No | - | - | - |
| 25 | ML + Intra-Attention (Paulus et al., 2017) | 14.81 | No | A Deep Reinforced Model for Abstractive Summariz... | 2017-05-11 | Code |
| 26 | GPT-2 | 8.27 | Yes | - | - | Code |