Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Question Answering on TyDiQA-GoldP

Metric: EM (higher is better)
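EM here is exact-match accuracy: the percentage of predictions that string-match a gold answer after normalization. A minimal sketch of the metric, assuming SQuAD-style normalization (lowercasing, stripping punctuation and articles); the official TyDiQA scoring script may differ in detail:

```python
import re
import string

def normalize(text):
    # SQuAD-style normalization: lowercase, drop punctuation,
    # remove articles, collapse whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(predictions, references):
    # Percentage of predictions that exactly match their reference
    # after normalization (higher is better).
    hits = sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references))
    return 100.0 * hits / len(predictions)

print(exact_match(["The cat", "dog"], ["cat", "bird"]))  # 50.0
```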


Results

| # | Model | EM | Extra Data | Paper | Date | Code |
|---|-------|----|------------|-------|------|------|
| 1 | ByT5 (fine-tuned) | 81.9 | No | ByT5: Towards a token-free future with pre-train... | 2021-05-28 | Code |
| 2 | U-PaLM 62B (fine-tuned) | 78.4 | No | Transcending Scaling Laws with 0.1% Extra Compute | 2022-10-20 | - |
| 3 | Flan-U-PaLM 540B (direct-prompting) | 68.3 | No | Scaling Instruction-Finetuned Language Models | 2022-10-20 | Code |
| 4 | Flan-PaLM 540B (direct-prompting) | 67.8 | No | Scaling Instruction-Finetuned Language Models | 2022-10-20 | Code |
| 5 | ByT5 XXL | 60 | No | ByT5: Towards a token-free future with pre-train... | 2021-05-28 | Code |
| 6 | U-PaLM-540B (CoT) | 54.6 | No | Transcending Scaling Laws with 0.1% Extra Compute | 2022-10-20 | - |
| 7 | PaLM-540B (CoT) | 52.9 | No | PaLM: Scaling Language Modeling with Pathways | 2022-04-05 | Code |
| 8 | Decoupled | 42.8 | No | Rethinking embedding coupling in pre-trained lan... | 2020-10-24 | Code |