Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Natural Language Inference on MultiNLI Dev

Metric: Matched (accuracy on the matched dev split; higher is better)


Results

| # | Model | Matched | Extra Data | Paper | Date | Code |
|---|-------|---------|------------|-------|------|------|
| 1 | TinyBERT-6 67M | 84.5 | No | TinyBERT: Distilling BERT for Natural Language U... | 2019-09-23 | Code |
| 2 | BERT-Large-uncased-PruneOFA (90% unstruct sparse) | 83.74 | No | Prune Once for All: Sparse Pre-Trained Language ... | 2021-11-10 | Code |
| 3 | BERT-Large-uncased-PruneOFA (90% unstruct sparse, QAT Int8) | 83.47 | No | Prune Once for All: Sparse Pre-Trained Language ... | 2021-11-10 | Code |
| 4 | BERT-Base-uncased-PruneOFA (85% unstruct sparse) | 82.71 | No | Prune Once for All: Sparse Pre-Trained Language ... | 2021-11-10 | Code |
| 5 | BERT-Base-uncased-PruneOFA (90% unstruct sparse) | 81.45 | No | Prune Once for All: Sparse Pre-Trained Language ... | 2021-11-10 | Code |
| 6 | BERT-Base-uncased-PruneOFA (85% unstruct sparse, QAT Int8) | 81.4 | No | Prune Once for All: Sparse Pre-Trained Language ... | 2021-11-10 | Code |
| 7 | DistilBERT-uncased-PruneOFA (85% unstruct sparse) | 81.35 | No | Prune Once for All: Sparse Pre-Trained Language ... | 2021-11-10 | Code |
| 8 | DistilBERT-uncased-PruneOFA (90% unstruct sparse) | 80.68 | No | Prune Once for All: Sparse Pre-Trained Language ... | 2021-11-10 | Code |
| 9 | DistilBERT-uncased-PruneOFA (85% unstruct sparse, QAT Int8) | 80.66 | No | Prune Once for All: Sparse Pre-Trained Language ... | 2021-11-10 | Code |
| 10 | DistilBERT-uncased-PruneOFA (90% unstruct sparse, QAT Int8) | 78.8 | No | Prune Once for All: Sparse Pre-Trained Language ... | 2021-11-10 | Code |
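The Matched scores above are plain classification accuracy on the matched portion of the MultiNLI dev set (genres also seen in training). A minimal sketch of how such a score is computed; the label strings and example predictions below are illustrative placeholders, not data from any listed model:

```python
def matched_accuracy(predictions, gold_labels):
    """Fraction of examples where the predicted NLI label equals the gold label."""
    assert len(predictions) == len(gold_labels) and len(gold_labels) > 0
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)

# Hypothetical three-way NLI labels: entailment / neutral / contradiction.
preds = ["entailment", "neutral", "contradiction", "neutral"]
golds = ["entailment", "neutral", "neutral", "neutral"]
print(f"Matched accuracy: {matched_accuracy(preds, golds) * 100:.1f}")  # 75.0
```

Leaderboard entries report this value as a percentage, so e.g. a score of 84.5 means 84.5% of matched dev pairs were labeled correctly.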