Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Reading Comprehension on MuSeRC

Metric: EM (higher is better)
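
For reference, EM here is exact match at the question level, as in MultiRC: a question counts as correct only if every one of its candidate answers is labeled correctly. Below is a minimal sketch of that computation; the em_score function name and the list-of-lists input format are illustrative, not the official MuSeRC evaluation script.

def em_score(gold, pred):
    # Exact match per question: a question scores 1 only if the predicted
    # labels for all of its candidate answers equal the gold labels.
    # gold, pred: lists of per-question label lists, e.g. [[1, 0, 1], [0, 1]].
    assert len(gold) == len(pred)
    exact = sum(1 for g, p in zip(gold, pred) if g == p)
    return exact / len(gold)

# Toy check: 2 of 3 questions match exactly -> EM = 0.667
print(round(em_score([[1, 0, 1], [0, 1], [1, 1, 0]],
                     [[1, 0, 1], [0, 0], [1, 1, 0]]), 3))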


Results

Rank | Model | EM | Extra Data | Paper | Date | Code
1 | Golden Transformer | 0.819 | No | - | - | -
2 | ruRoberta-large finetune | 0.561 | No | - | - | -
3 | RuGPT3XL few-shot | 0.546 | No | - | - | -
4 | MT5 Large | 0.543 | No | mT5: A massively multilingual pre-trained text-t... | 2020-10-22 | Code
5 | ruT5-large-finetune | 0.537 | No | - | - | -
6 | ruT5-base-finetune | 0.446 | No | - | - | -
7 | ruBert-large finetune | 0.427 | No | - | - | -
8 | Human Benchmark | 0.42 | No | RussianSuperGLUE: A Russian Language Understandi... | 2020-10-29 | Code
9 | ruBert-base finetune | 0.399 | No | - | - | -
10 | YaLM 1.0B few-shot | 0.364 | No | - | - | -
11 | RuGPT3Large | 0.333 | No | - | - | -
12 | SBERT_Large | 0.327 | No | - | - | -
13 | RuBERT plain | 0.324 | No | - | - | -
14 | SBERT_Large_mt_ru_finetuning | 0.319 | No | - | - | -
15 | RuGPT3Medium | 0.308 | No | - | - | -
16 | RuBERT conversational | 0.278 | No | - | - | -
17 | Baseline TF-IDF1.1 | 0.242 | No | RussianSuperGLUE: A Russian Language Understandi... | 2020-10-29 | Code
18 | Multilingual Bert | 0.239 | No | - | - | -
19 | heuristic majority | 0.237 | No | Unreasonable Effectiveness of Rule-Based Heurist... | 2021-05-03 | -
20 | RuGPT3Small | 0.221 | No | - | - | -
21 | Random weighted | 0.071 | No | Unreasonable Effectiveness of Rule-Based Heurist... | 2021-05-03 | -
22 | majority_class | 0 | No | - | - | -