Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Natural Language Inference on MultiNLI Dev

Metric: Mismatched (accuracy on the mismatched dev split; higher is better)
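
The mismatched development set covers genres that do not appear in MultiNLI's training data, so this column tracks out-of-domain accuracy. Below is a minimal sketch of how such a number can be reproduced, assuming the Hugging Face `datasets` package; `predict` is a hypothetical stand-in for whichever model is being evaluated, not the site's own evaluation code.

```python
# Sketch: "Mismatched" = accuracy on MultiNLI's validation_mismatched split.
# Assumes the Hugging Face `datasets` package; `predict` is a hypothetical
# stand-in for the model under evaluation, not part of any listed entry.
from datasets import load_dataset

dev_mm = load_dataset("multi_nli", split="validation_mismatched")

def predict(premise: str, hypothesis: str) -> int:
    """Return 0 (entailment), 1 (neutral), or 2 (contradiction) for a sentence pair."""
    raise NotImplementedError("plug in the model being evaluated")

correct = sum(
    predict(ex["premise"], ex["hypothesis"]) == ex["label"] for ex in dev_mm
)
print(f"Mismatched accuracy: {100 * correct / len(dev_mm):.2f}")
```

Matched accuracy is computed the same way on the validation_matched split, whose genres overlap with the training data.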

Results

#  | Model                                                        | Mismatched | Extra Data | Paper                                                         | Date       | Code
1  | TinyBERT-6 67M                                               | 84.5       | No         | TinyBERT: Distilling BERT for Natural Language Understanding  | 2019-09-23 | Yes
2  | BERT-Large-uncased-PruneOFA (90% unstruct sparse)            | 84.2       | No         | Prune Once for All: Sparse Pre-Trained Language Models        | 2021-11-10 | Yes
3  | BERT-Large-uncased-PruneOFA (90% unstruct sparse, QAT Int8)  | 84.08      | No         | Prune Once for All: Sparse Pre-Trained Language Models        | 2021-11-10 | Yes
4  | BERT-Base-uncased-PruneOFA (85% unstruct sparse)             | 83.67      | No         | Prune Once for All: Sparse Pre-Trained Language Models        | 2021-11-10 | Yes
5  | BERT-Base-uncased-PruneOFA (85% unstruct sparse, QAT Int8)   | 82.51      | No         | Prune Once for All: Sparse Pre-Trained Language Models        | 2021-11-10 | Yes
6  | BERT-Base-uncased-PruneOFA (90% unstruct sparse)             | 82.43      | No         | Prune Once for All: Sparse Pre-Trained Language Models        | 2021-11-10 | Yes
7  | DistilBERT-uncased-PruneOFA (85% unstruct sparse)            | 82.03      | No         | Prune Once for All: Sparse Pre-Trained Language Models        | 2021-11-10 | Yes
8  | DistilBERT-uncased-PruneOFA (90% unstruct sparse)            | 81.47      | No         | Prune Once for All: Sparse Pre-Trained Language Models        | 2021-11-10 | Yes
9  | DistilBERT-uncased-PruneOFA (85% unstruct sparse, QAT Int8)  | 81.14      | No         | Prune Once for All: Sparse Pre-Trained Language Models        | 2021-11-10 | Yes
10 | DistilBERT-uncased-PruneOFA (90% unstruct sparse, QAT Int8)  | 80.4       | No         | Prune Once for All: Sparse Pre-Trained Language Models        | 2021-11-10 | Yes