Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Knowledge Distillation on QNLI

Metric: Accuracy (higher is better)
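Accuracy here is the fraction of question–sentence pairs for which the model's entailment prediction matches the gold label (papers typically report it on the QNLI validation split, since test labels are hidden). A minimal sketch of the computation; the variable names and values are illustrative only, not taken from the leaderboard:

```python
def accuracy(predictions, labels):
    """Fraction of examples where the predicted label matches the gold label."""
    assert len(predictions) == len(labels)
    correct = sum(p == g for p, g in zip(predictions, labels))
    return correct / len(labels)

# Illustrative values only (not from the leaderboard).
preds = [0, 1, 1, 0]   # 0 = entailment, 1 = not_entailment
golds = [0, 1, 0, 0]
print(f"Accuracy: {accuracy(preds, golds) * 100:.1f}")  # 75.0
```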

Results

| # | Model | Accuracy | Extra Data | Paper | Date | Code |
|---|-------|----------|------------|-------|------|------|
| 1 | GOLD (T5-base) | 91.7 | No | GOLD: Generalized Knowledge Distillation via Out... | 2024-03-28 | - |
| 2 | ZeroGen (T5-base) | 88.5 | No | ZeroGen: Efficient Zero-shot Learning via Datase... | 2022-02-16 | Code |
| 3 | ProGen (T5-base) | 85.9 | No | ProGen: Progressive Zero-shot Dataset Generation... | 2022-10-22 | Code |
| 4 | Prompt2Model (T5-base) | 62.2 | No | Prompt2Model: Generating Deployable Models from ... | 2023-08-23 | Code |
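For context on how a number like these is produced, the sketch below evaluates a fine-tuned classifier on the QNLI validation split using the Hugging Face `datasets` and `transformers` libraries. The checkpoint name `your/qnli-model` is a placeholder, not one of the models in the table, and none of the listed methods' distillation pipelines are reproduced here:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# QNLI ships as part of GLUE; labels: 0 = entailment, 1 = not_entailment.
dataset = load_dataset("glue", "qnli", split="validation")

model_name = "your/qnli-model"  # placeholder checkpoint, not from the leaderboard
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

correct = 0
for example in dataset:
    inputs = tokenizer(example["question"], example["sentence"],
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    correct += int(logits.argmax(dim=-1).item() == example["label"])

print(f"Accuracy: {100 * correct / len(dataset):.1f}")
```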