Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Reading Comprehension on RACE

Metric: Accuracy (higher is better)
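RACE is a multiple-choice benchmark, so the accuracy reported above is simply the fraction of questions answered correctly. A minimal sketch (the function name and inputs are illustrative, not part of any official evaluation script):

```python
def accuracy(predictions, gold):
    """Fraction of questions answered correctly (higher is better)."""
    assert len(predictions) == len(gold), "prediction/gold length mismatch"
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Example: 3 of 4 multiple-choice answers correct -> 0.75 (i.e. 75%)
print(accuracy(["A", "C", "B", "D"], ["A", "C", "D", "D"]))
```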


Results

| # | Model | Accuracy (%) | Extra Data | Paper | Date | Code |
|---|-------|--------------|------------|-------|------|------|
| 1 | ALBERT (ensemble) | 91.4 | No | Improving Machine Reading Comprehension with Sin... | 2020-11-06 | - |
| 2 | Megatron-BERT (ensemble) | 90.9 | No | Megatron-LM: Training Multi-Billion Parameter La... | 2019-09-17 | Code |
| 3 | ALBERT-xxlarge + DUMA (ensemble) | 89.8 | No | DUMA: Reading Comprehension with Transposition T... | 2020-01-26 | Code |
| 4 | Megatron-BERT | 89.5 | No | Megatron-LM: Training Multi-Billion Parameter La... | 2019-09-17 | Code |
| 5 | DeBERTa-large | 86.8 | No | DeBERTa: Decoding-enhanced BERT with Disentangle... | 2020-06-05 | Code |
| 6 | B10-10-10 | 85.7 | No | Funnel-Transformer: Filtering out Sequential Red... | 2020-06-05 | Code |
| 7 | RoBERTa | 83.2 | No | RoBERTa: A Robustly Optimized BERT Pretraining A... | 2019-07-26 | Code |
| 8 | Orca 2-13B | 82.87 | No | Orca 2: Teaching Small Language Models How to Re... | 2023-11-18 | - |
| 9 | Orca 2-7B | 80.79 | No | Orca 2: Teaching Small Language Models How to Re... | 2023-11-18 | - |
| 10 | HAT (Encoder) | 67.3 | No | Hierarchical Learning for Generation with Long S... | 2021-04-15 | - |