Question Answering on SQuAD2.0 dev
Metric: F1 (higher is better)
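For reference, SQuAD-style F1 is a token-overlap score between the predicted and reference answer spans. Below is a minimal sketch of that computation; it simplifies the official evaluation script (no article/punctuation normalization, single reference), so `token_f1` and its exact behavior here are illustrative assumptions rather than the official implementation.

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Simplified SQuAD-style token-overlap F1.

    Tokenizes by whitespace after lowercasing; the official script
    additionally strips articles and punctuation.
    """
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        # SQuAD 2.0 convention for unanswerable questions:
        # both empty (correct no-answer) scores 1, otherwise 0.
        return float(pred_tokens == ref_tokens)
    # Multiset intersection counts each shared token at most
    # min(count_in_pred, count_in_ref) times.
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

The leaderboard numbers are the mean of this per-question score (taking the max over reference answers) across the dev set, expressed as a percentage.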
Results
| # | Model | F1 | Extra Data | Paper | Date | Code |
|---|-------|-----|------------|-------|------|------|
| 1 | XLNet (single model) | 90.6 | No | XLNet: Generalized Autoregressive Pretraining fo... | 2019-06-19 | Yes |
| 2 | XLNet+DSC | 89.51 | No | Dice Loss for Data-imbalanced NLP Tasks | 2019-11-07 | Yes |
| 3 | RoBERTa (no data aug) | 89.4 | Yes | RoBERTa: A Robustly Optimized BERT Pretraining A... | 2019-07-26 | Yes |
| 4 | ALBERT xxlarge | 88.1 | No | ALBERT: A Lite BERT for Self-supervised Learning... | 2019-09-26 | Yes |
| 5 | SG-Net | 87.9 | No | SG-Net: Syntax-Guided Machine Reading Comprehens... | 2019-08-14 | Yes |
| 6 | SpanBERT | 86.8 | No | SpanBERT: Improving Pre-training by Representing... | 2019-07-24 | Yes |
| 7 | ALBERT xlarge | 85.9 | No | ALBERT: A Lite BERT for Self-supervised Learning... | 2019-09-26 | Yes |
| 8 | SemBERT large | 83.6 | No | Semantics-aware BERT for Language Understanding | 2019-09-05 | Yes |
| 9 | ALBERT large | 82.1 | No | ALBERT: A Lite BERT for Self-supervised Learning... | 2019-09-26 | Yes |
| 10 | ALBERT base | 79.1 | No | ALBERT: A Lite BERT for Self-supervised Learning... | 2019-09-26 | Yes |
| 11 | RMR + ELMo (Model-III) | 74.8 | No | Read + Verify: Machine Reading Comprehension wit... | 2018-08-17 | - |
| 12 | U-Net | 74 | No | U-Net: Machine Reading Comprehension with Unansw... | 2018-10-12 | Yes |
| 13 | TinyBERT-6 67M | 73.4 | No | TinyBERT: Distilling BERT for Natural Language U... | 2019-09-23 | Yes |