Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Reading Comprehension on BIG-bench

Metric: Accuracy (higher is better)


Results

| # | Model | Accuracy | Extra Data | Paper | Date | Code |
|---|-------|----------|------------|-------|------|------|
| 1 | Chinchilla-70B (few-shot, k=5) | 78 | No | Training Compute-Optimal Large Language Models | 2022-03-29 | Code |
| 2 | Chinchilla-70B (few-shot, k=5) | 75 | No | Training Compute-Optimal Large Language Models | 2022-03-29 | Code |
| 3 | Gopher-280B (few-shot, k=5) | 71.6 | No | Scaling Language Models: Methods, Analysis & Ins... | 2021-12-08 | Code |
| 4 | Gopher-280B (few-shot, k=5) | 62 | No | Scaling Language Models: Methods, Analysis & Ins... | 2021-12-08 | Code |
| 5 | Gopher-280B (few-shot, k=5) | 61.4 | No | Scaling Language Models: Methods, Analysis & Ins... | 2021-12-08 | Code |
| 6 | Chinchilla-70B (few-shot, k=5) | 52.6 | No | Training Compute-Optimal Large Language Models | 2022-03-29 | Code |
| 7 | Gopher-280B (few-shot, k=5) | 41.4 | No | Scaling Language Models: Methods, Analysis & Ins... | 2021-12-08 | Code |
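For programmatic use, the rows above can be represented and re-ranked by the accuracy metric (higher is better). This is a minimal sketch, not a site API: the `Result` dataclass and the in-code list are transcribed by hand from the leaderboard table.

```python
from dataclasses import dataclass

@dataclass
class Result:
    model: str
    accuracy: float  # accuracy in percent; higher is better
    date: str        # paper publication date, YYYY-MM-DD

# Rows transcribed from the leaderboard table above.
results = [
    Result("Chinchilla-70B (few-shot, k=5)", 78.0, "2022-03-29"),
    Result("Chinchilla-70B (few-shot, k=5)", 75.0, "2022-03-29"),
    Result("Gopher-280B (few-shot, k=5)", 71.6, "2021-12-08"),
    Result("Gopher-280B (few-shot, k=5)", 62.0, "2021-12-08"),
    Result("Gopher-280B (few-shot, k=5)", 61.4, "2021-12-08"),
    Result("Chinchilla-70B (few-shot, k=5)", 52.6, "2022-03-29"),
    Result("Gopher-280B (few-shot, k=5)", 41.4, "2021-12-08"),
]

# Rank by accuracy, descending -- the same ordering the table uses.
ranked = sorted(results, key=lambda r: r.accuracy, reverse=True)
best = ranked[0]
print(best.model, best.accuracy)  # → Chinchilla-70B (few-shot, k=5) 78.0
```

Note that the same model name appears in several rows with different scores; each row is a separate submitted result, so any aggregation (e.g. best score per model) has to be done explicitly.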