

Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?

Zheng Li, YuXuan Li, Penghai Zhao, RenJie Song, Xiang Li, Jian Yang

2023-05-22 · Few-Shot Learning · Mitigating Contextual Bias · Data-free Knowledge Distillation · Knowledge Distillation
Paper · PDF · Code (official)

Abstract

Diffusion models have recently achieved astonishing performance in generating high-fidelity photo-realistic images. Despite this success, it remains unclear whether synthetic images are applicable to knowledge distillation when real images are unavailable. In this paper, we extensively study whether and how synthetic images produced by state-of-the-art diffusion models can be used for knowledge distillation without access to real images, and reach three key conclusions: (1) synthetic data from diffusion models can readily lead to state-of-the-art performance among existing synthesis-based distillation methods, (2) low-fidelity synthetic images are better teaching materials, and (3) relatively weak classifiers are better teachers. Code is available at https://github.com/zhengli97/DM-KD.
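The recipe the abstract describes is straightforward to prototype. Below is a minimal sketch of the idea, not the authors' pipeline (see the official repository above for that): it assumes the diffusers StableDiffusionPipeline as the generator and off-the-shelf torchvision ResNets, and the prompts, step count, temperature, and teacher/student pairing are illustrative placeholders. A low denoising-step count yields lower-fidelity images (conclusion 2), and the deliberately modest teacher reflects conclusion 3.

```python
# Hedged sketch of distilling from diffusion-generated images only.
# Not the official DM-KD implementation (https://github.com/zhengli97/DM-KD);
# models, prompts, and hyperparameters below are illustrative assumptions.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from torchvision import transforms
from torchvision.models import resnet18, resnet34
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) Synthesize training images from class-name prompts. A low step count
#    gives lower-fidelity images, which the paper finds to be better
#    teaching materials (conclusion 2).
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
class_names = ["airliner", "sports car", "honeycombed texture"]  # illustrative
images = pipe([f"a photo of a {c}" for c in class_names],
              num_inference_steps=15).images  # list of PIL images

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
loader = DataLoader(TensorDataset(torch.stack([prep(im) for im in images])),
                    batch_size=16, shuffle=True)

# 2) Distill on synthetic images only: a relatively weak ImageNet teacher
#    (conclusion 3) supervises a smaller student via softened KL divergence.
teacher = resnet34(weights="IMAGENET1K_V1").eval().to(device)
student = resnet18(num_classes=1000).to(device)
opt = torch.optim.SGD(student.parameters(), lr=0.05, momentum=0.9)
T = 4.0  # softening temperature, an illustrative value

for (batch,) in loader:
    batch = batch.to(device)
    with torch.no_grad():
        t_logits = teacher(batch)
    s_logits = student(batch)
    # Standard Hinton-style distillation loss on temperature-softened logits.
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that in this no-real-data setting there are no ground-truth labels for a cross-entropy term, so the softened KL term alone drives the student's training.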

Results

Task | Dataset | Metric | Value | Model
Few-Shot Learning | Stanford Cars | 12-shot Accuracy | 83.9 | Real-Guidance + CAL
Few-Shot Learning | Stanford Cars | 16-shot Accuracy | 88.3 | Real-Guidance + CAL
Few-Shot Learning | Stanford Cars | 4-shot Accuracy | 44.3 | Real-Guidance + CAL
Few-Shot Learning | Stanford Cars | 8-shot Accuracy | 73.1 | Real-Guidance + CAL
Few-Shot Learning | FGVC Aircraft | 12-shot Accuracy | 65.8 | Real-Guidance + CAL
Few-Shot Learning | FGVC Aircraft | 16-shot Accuracy | 72.5 | Real-Guidance + CAL
Few-Shot Learning | FGVC Aircraft | 4-shot Accuracy | 34.5 | Real-Guidance + CAL
Few-Shot Learning | FGVC Aircraft | 8-shot Accuracy | 54.6 | Real-Guidance + CAL
Few-Shot Learning | FGVC Aircraft | Harmonic mean | 34.5 | Real-Guidance + CAL
Few-Shot Learning | DTD | 12-shot Accuracy | 54.5 | Real-Guidance + CAL
Few-Shot Learning | DTD | 16-shot Accuracy | 57.4 | Real-Guidance + CAL
Few-Shot Learning | DTD | 4-shot Accuracy | 41.5 | Real-Guidance + CAL
Few-Shot Learning | DTD | 8-shot Accuracy | 50.6 | Real-Guidance + CAL
Meta-Learning | Stanford Cars | 12-shot Accuracy | 83.9 | Real-Guidance + CAL
Meta-Learning | Stanford Cars | 16-shot Accuracy | 88.3 | Real-Guidance + CAL
Meta-Learning | Stanford Cars | 4-shot Accuracy | 44.3 | Real-Guidance + CAL
Meta-Learning | Stanford Cars | 8-shot Accuracy | 73.1 | Real-Guidance + CAL
Meta-Learning | FGVC Aircraft | 12-shot Accuracy | 65.8 | Real-Guidance + CAL
Meta-Learning | FGVC Aircraft | 16-shot Accuracy | 72.5 | Real-Guidance + CAL
Meta-Learning | FGVC Aircraft | 4-shot Accuracy | 34.5 | Real-Guidance + CAL
Meta-Learning | FGVC Aircraft | 8-shot Accuracy | 54.6 | Real-Guidance + CAL
Meta-Learning | FGVC Aircraft | Harmonic mean | 34.5 | Real-Guidance + CAL
Meta-Learning | DTD | 12-shot Accuracy | 54.5 | Real-Guidance + CAL
Meta-Learning | DTD | 16-shot Accuracy | 57.4 | Real-Guidance + CAL
Meta-Learning | DTD | 4-shot Accuracy | 41.5 | Real-Guidance + CAL
Meta-Learning | DTD | 8-shot Accuracy | 50.6 | Real-Guidance + CAL
Classification | FGVC Aircraft | OOD Accuracy (%) | 17.7 | CAL + Real-Guidance
Classification | FGVC Aircraft | Top-1 Accuracy (%) | 71.7 | CAL + Real-Guidance
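For reference, the "n-shot Accuracy" rows report accuracy after adapting a model with only n labeled images per class. A minimal sketch of carving such a subset out of a torchvision-style dataset (the helper name and seeding are illustrative):

```python
# Hedged sketch: build an n-shot training split (n labeled images per class)
# from any torchvision-style dataset of (image, label) pairs.
import random
from collections import defaultdict
from torch.utils.data import Subset

def n_shot_subset(dataset, n, seed=0):
    """Return a Subset containing exactly n examples of each class."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, (_, label) in enumerate(dataset):  # group indices by class
        by_class[label].append(idx)
    picked = []
    for indices in by_class.values():
        picked.extend(rng.sample(indices, n))  # n random indices per class
    return Subset(dataset, picked)

# e.g. train_16shot = n_shot_subset(some_stanford_cars_train_set, 16)
```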

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)
Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning (2025-07-14)
KAT-V1: Kwai-AutoThink Technical Report (2025-07-11)
Towards Collaborative Fairness in Federated Learning Under Imbalanced Covariate Shift (2025-07-11)