Text Summarization on GigaWord
Metric: ROUGE-2 (higher is better)
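ROUGE-2 measures bigram overlap between a generated summary and a reference summary. The scores below are computed with the standard ROUGE toolchain (which applies stemming and other normalization); as a rough illustration of the idea only, a minimal bigram-F1 sketch looks like this:

```python
from collections import Counter


def bigrams(tokens):
    """Multiset of adjacent token pairs."""
    return Counter(zip(tokens, tokens[1:]))


def rouge2_f1(candidate, reference):
    """Illustrative ROUGE-2 F1: bigram overlap between candidate and
    reference, after naive lowercasing and whitespace tokenization.
    (The official metric adds stemming and other preprocessing.)"""
    cand = bigrams(candidate.lower().split())
    ref = bigrams(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped bigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

A perfect match scores 1.0; summaries sharing no bigrams score 0.0, which is why leaderboard values are typically reported on a 0–100 scale (the fraction times 100).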
Leaderboard

| # | Model | ROUGE-2 | Extra Data | Paper | Date | Code |
|---|-------|---------|------------|-------|------|------|
| 1 | OpenAI/o3-mini | 54.22 | No | - | - | - |
| 2 | Riple/Saanvi-v0.1 | 45.58 | No | - | - | - |
| 3 | Pegasus+DotProd | 21 | Yes | - | - | - |
| 4 | BART-RXF | 20.69 | No | Better Fine-Tuning by Reducing Representational ... | 2020-08-06 | Yes |
| 5 | OFA | 20.66 | No | OFA: Unifying Architectures, Tasks, and Modaliti... | 2022-02-07 | Yes |
| 6 | MUPPET BART Large | 20.54 | Yes | Muppet: Massive Multi-task Representations with ... | 2021-01-26 | Yes |
| 7 | ControlCopying + SBWR | 20.47 | No | Controlling the Amount of Verbatim Copying in Ab... | 2019-11-23 | Yes |
| 8 | Transformer+Wdrop | 20.45 | Yes | Rethinking Perturbations in Encoder-Decoders for... | 2021-04-05 | Yes |
| 9 | ProphetNet | 20.42 | Yes | ProphetNet: Predicting Future N-gram for Sequenc... | 2020-01-13 | Yes |
| 10 | Transformer+Rep(Uni) | 20.4 | Yes | Rethinking Perturbations in Encoder-Decoders for... | 2021-04-05 | Yes |
| 11 | Best Summary Length | 20.4 | No | A New Approach to Overgenerating and Scoring Abs... | 2021-04-05 | Yes |
| 12 | ControlCopying + BPNorm | 20.38 | No | Controlling the Amount of Verbatim Copying in Ab... | 2019-11-23 | Yes |
| 13 | PALM | 20.37 | No | PALM: Pre-training an Autoencoding&Autoregressiv... | 2020-04-14 | Yes |
| 14 | ERNIE-GEN LARGE (large-scale text corpora) | 20.34 | Yes | ERNIE-GEN: An Enhanced Multi-Flow Pre-training a... | 2020-01-26 | Yes |
| 15 | ERNIE-GEN LARGE | 20.25 | Yes | ERNIE-GEN: An Enhanced Multi-Flow Pre-training a... | 2020-01-26 | Yes |
| 16 | UniLM | 20.05 | Yes | Unified Language Model Pre-training for Natural ... | 2019-05-08 | Yes |
| 17 | ERNIE-GEN BASE | 20.04 | Yes | ERNIE-GEN: An Enhanced Multi-Flow Pre-training a... | 2020-01-26 | Yes |
| 18 | PEGASUS | 19.86 | Yes | PEGASUS: Pre-training with Extracted Gap-sentenc... | 2019-12-18 | Yes |
| 19 | BiSET | 19.78 | No | BiSET: Bi-directional Selective Encoding with Te... | 2019-06-12 | Yes |
| 20 | MASS | 19.71 | Yes | MASS: Masked Sequence to Sequence Pre-training f... | 2019-05-07 | Yes |
| 21 | Mask Attention Network | 19.46 | No | Mask Attention Networks: Rethinking and Strength... | 2021-03-25 | Yes |
| 22 | Re^3Sum | 19.03 | No | - | - | - |
| 23 | Transformer | 18.9 | No | Attention Is All You Need | 2017-06-12 | Yes |
| 24 | JointParsing | 18.85 | No | Joint Parsing and Generation for Abstractive Sum... | 2019-11-23 | Yes |
| 25 | Reinforced-Topic-ConvS2S | 18.29 | No | A Reinforced Topic-Aware Convolutional Sequence-... | 2018-05-09 | - |
| 26 | CGU | 18 | No | Global Encoding for Abstractive Summarization | 2018-05-10 | Yes |
| 27 | Pointer + Coverage + EntailmentGen + QuestionGen | 17.76 | No | Soft Layer-Specific Multi-Task Summarization wit... | 2018-05-28 | - |
| 28 | words-lvt5k-1sent | 17.7 | No | Abstractive Text Summarization Using Sequence-to... | 2016-02-19 | Yes |
| 29 | Struct+2Way+Word | 17.66 | No | Structure-Infused Copy Mechanisms for Abstractiv... | 2018-06-14 | Yes |
| 30 | FTSum_g | 17.65 | No | Faithful to the Original: Fact Aware Neural Abst... | 2017-11-13 | - |
| 31 | DRGD | 17.57 | No | Deep Recurrent Generative Decoder for Abstractiv... | 2017-08-02 | Yes |
| 32 | SEASS | 17.54 | No | Selective Encoding for Abstractive Sentence Summ... | 2017-04-24 | Yes |
| 33 | EndDec+WFE | 17.31 | No | Cutting-off Redundant Repeating Generations for ... | 2016-12-31 | - |
| 34 | Seq2seq + selective + MTL + ERAM | 17.27 | No | - | - | - |
| 35 | Concept pointer+DS | 17.1 | No | Concept Pointer Network for Abstractive Summariz... | 2019-10-18 | Yes |
| 36 | Concept pointer+RL | 16.97 | No | Concept Pointer Network for Abstractive Summariz... | 2019-10-18 | Yes |
| 37 | Seq2seq + E2T_cnn | 16.66 | No | Entity Commonsense Representation for Neural Abs... | 2018-06-14 | Yes |
| 38 | RAS-Elman | 15.97 | No | - | - | - |
| 39 | Contextual Match | 10.05 | No | Simple Unsupervised Summarization by Contextual ... | 2019-07-31 | Yes |