Question Answering on SQuAD2.0 dev
Metric: EM (higher is better)
| # | Model | EM | Extra Data | Paper | Date | Code |
|---|-------|----|------------|-------|------|------|
| 1 | XLNet (single model) | 87.9 | No | XLNet: Generalized Autoregressive Pretraining for Language Understanding | 2019-06-19 | Yes |
| 2 | XLNet+DSC | 87.65 | No | Dice Loss for Data-imbalanced NLP Tasks | 2019-11-07 | Yes |
| 3 | RoBERTa (no data aug) | 86.5 | Yes | RoBERTa: A Robustly Optimized BERT Pretraining Approach | 2019-07-26 | Yes |
| 4 | ALBERT xxlarge | 85.1 | No | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | 2019-09-26 | Yes |
| 5 | SG-Net | 85.1 | No | SG-Net: Syntax-Guided Machine Reading Comprehension | 2019-08-14 | Yes |
| 6 | ALBERT xlarge | 83.1 | No | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | 2019-09-26 | Yes |
| 7 | SemBERT large | 80.9 | No | Semantics-aware BERT for Language Understanding | 2019-09-05 | Yes |
| 8 | ALBERT large | 79.0 | No | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | 2019-09-26 | Yes |
| 9 | ALBERT base | 76.1 | No | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | 2019-09-26 | Yes |
| 10 | RMR + ELMo (Model-III) | 72.3 | No | Read + Verify: Machine Reading Comprehension with Unanswerable Questions | 2018-08-17 | No |
| 11 | U-Net | 70.3 | No | U-Net: Machine Reading Comprehension with Unanswerable Questions | 2018-10-12 | Yes |
| 12 | TinyBERT-6 67M | 69.9 | No | TinyBERT: Distilling BERT for Natural Language Understanding | 2019-09-23 | Yes |
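For reference, the EM (exact match) metric used in this leaderboard can be sketched as below. This follows the standard SQuAD answer-normalization convention (lowercasing, stripping punctuation, articles, and extra whitespace); the function names are illustrative, not taken from the official evaluation script.

```python
import re
import string

def normalize_answer(s: str) -> str:
    """Normalize an answer string: lowercase, drop punctuation,
    remove the articles a/an/the, and collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold_answers: list[str]) -> int:
    """EM is 1 if the normalized prediction equals any normalized gold answer.
    SQuAD 2.0 adds unanswerable questions: with no gold answers, the model
    must predict the empty string (i.e. abstain) to score 1."""
    if not gold_answers:
        return int(normalize_answer(prediction) == "")
    return int(any(normalize_answer(prediction) == normalize_answer(g)
                   for g in gold_answers))
```

A model's dev-set EM score, as reported above, is simply the mean of this 0/1 score over all questions, expressed as a percentage.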