Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback

Jiacheng Ye, Jiahui Gao, Jiangtao Feng, Zhiyong Wu, Tao Yu, Lingpeng Kong

2022-10-22 · Text Classification · Data-free Knowledge Distillation · Informativeness · Zero-Shot Learning
Paper · PDF · Code · Code (official)

Abstract

Recently, dataset-generation-based zero-shot learning has shown promising results by training a task-specific model on a dataset synthesized by large pre-trained language models (PLMs). The final task-specific model often achieves comparable or even better performance than the PLM itself under the zero-shot setting, with orders of magnitude fewer parameters. However, synthetic datasets have their drawbacks: they have long suffered from quality issues such as low informativeness and redundancy. This explains why massive synthetic data does not lead to better performance, as we would expect with human-labeled data. To improve the quality of dataset synthesis, we propose a progressive zero-shot dataset generation framework, ProGen, which leverages feedback from the task-specific model to guide the generation of new training data via in-context examples. Extensive experiments on five text classification datasets demonstrate the effectiveness of the proposed approach. We also show that ProGen achieves on-par or superior performance with only 1% of the synthetic dataset size used by baseline methods without in-context feedback.
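To make the "progressive generation with in-context feedback" loop concrete, here is a minimal sketch of how such a round-based pipeline could be structured for a text classification task. All names here (generate_with_plm, train_task_model, the score method, ROUNDS, PER_ROUND, K) are hypothetical stand-ins for illustration, not the authors' implementation; the stubs return dummy values so the loop structure itself is runnable.

```python
import random

def generate_with_plm(label, in_context_examples, n):
    # Stand-in for prompting a large PLM to synthesize n texts for `label`,
    # conditioned on high-quality examples selected in earlier rounds.
    return [f"synthetic text for {label} #{i}" for i in range(n)]

def train_task_model(dataset):
    # Stand-in for training the small task-specific model on all
    # synthetic data generated so far.
    class Model:
        def score(self, text, label):
            # Stand-in for a model-derived quality signal, e.g. how
            # informative or label-consistent the example appears.
            return random.random()
    return Model()

LABELS = ["positive", "negative"]   # hypothetical binary task
ROUNDS, PER_ROUND, K = 3, 100, 8    # hypothetical generation budget

dataset = []
feedback = {label: [] for label in LABELS}
for _ in range(ROUNDS):
    # 1. Generate a new batch, steered by the current feedback examples.
    for label in LABELS:
        for text in generate_with_plm(label, feedback[label], PER_ROUND):
            dataset.append((text, label))
    # 2. Train the task-specific model on everything generated so far.
    model = train_task_model(dataset)
    # 3. Feedback: keep the top-K highest-scoring examples per label as
    #    in-context demonstrations for the next generation round.
    for label in LABELS:
        scored = [(model.score(t, l), t) for t, l in dataset if l == label]
        feedback[label] = [t for _, t in sorted(scored, reverse=True)[:K]]
```

The key design point the sketch illustrates is the closed loop: the task-specific model trained on synthetic data is itself used to rank that data, and the best examples steer the PLM's next round of generation.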

Results

Task                             | Dataset | Metric      | Value | Model
Knowledge Distillation           | SQuAD   | Exact Match | 68.1  | ProGen (T5-base)
Knowledge Distillation           | QNLI    | Accuracy    | 85.9  | ProGen (T5-base)
Data-free Knowledge Distillation | SQuAD   | Exact Match | 68.1  | ProGen (T5-base)
Data-free Knowledge Distillation | QNLI    | Accuracy    | 85.9  | ProGen (T5-base)

Related Papers

Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
DEARLi: Decoupled Enhancement of Recognition and Localization for Semi-supervised Panoptic Segmentation (2025-07-14)
GNN-CNN: An Efficient Hybrid Model of Convolutional and Graph Neural Networks for Text Representation (2025-07-10)
Multi-Agent Retrieval-Augmented Framework for Evidence-Based Counterspeech Against Health Misinformation (2025-07-09)
LumiCRS: Asymmetric Contrastive Prototype Learning for Long-Tail Conversational Movie Recommendation (2025-07-07)
The Trilemma of Truth in Large Language Models (2025-06-30)
Robustness of Misinformation Classification Systems to Adversarial Examples Through BeamAttack (2025-06-30)