

Distilling Virtual Examples for Long-tailed Recognition

Yin-Yin He, Jianxin Wu, Xiu-Shen Wei

2021-03-28 · ICCV 2021 · Long-tail Learning · Knowledge Distillation

Abstract

We tackle the long-tailed visual recognition problem from the knowledge distillation perspective by proposing a Distill the Virtual Examples (DiVE) method. Specifically, by treating the predictions of a teacher model as virtual examples, we prove that distilling from these virtual examples is equivalent to label distribution learning under certain constraints. We show that when the virtual example distribution becomes flatter than the original input distribution, the under-represented tail classes will receive significant improvements, which is crucial in long-tailed recognition. The proposed DiVE method can explicitly tune the virtual example distribution to become flat. Extensive experiments on three benchmark datasets, including the large-scale iNaturalist ones, justify that the proposed DiVE method can significantly outperform state-of-the-art methods. Furthermore, additional analyses and experiments verify the virtual example interpretation, and demonstrate the effectiveness of tailored designs in DiVE for long-tailed problems.
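
The mechanism the abstract describes can be made concrete with a short sketch: train the student against a teacher distribution that has been deliberately flattened, so that tail classes receive more probability mass. The snippet below is a minimal PyTorch illustration using a generic temperature-scaled KL distillation loss, which flattens the teacher's outputs in the same spirit; the function name is illustrative, and this is not the authors' released DiVE implementation (DiVE tunes the flattening explicitly rather than through temperature alone).

```python
import torch
import torch.nn.functional as F

def flattened_distillation_loss(student_logits, teacher_logits, temperature=3.0):
    # A temperature > 1 flattens the teacher's predicted distribution,
    # shifting probability mass toward under-represented (tail) classes.
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # The usual T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2

# Usage with dummy logits for a 10-class problem:
student_logits = torch.randn(4, 10)
teacher_logits = torch.randn(4, 10)
loss = flattened_distillation_loss(student_logits, teacher_logits)
```

As the temperature grows, the teacher's distribution over classes approaches uniform, which is the "flatter than the original input distribution" regime the abstract argues benefits tail classes.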

Results

Task | Dataset | Metric | Value | Model
Image Classification | ImageNet-LT | Top-1 Accuracy | 57.12 | RIDE-DiVE
Image Classification | ImageNet-LT | Top-1 Accuracy | 53.1 | DiVE
Few-Shot Image Classification | ImageNet-LT | Top-1 Accuracy | 57.12 | RIDE-DiVE
Few-Shot Image Classification | ImageNet-LT | Top-1 Accuracy | 53.1 | DiVE
Generalized Few-Shot Classification | ImageNet-LT | Top-1 Accuracy | 57.12 | RIDE-DiVE
Generalized Few-Shot Classification | ImageNet-LT | Top-1 Accuracy | 53.1 | DiVE
Long-tail Learning | ImageNet-LT | Top-1 Accuracy | 57.12 | RIDE-DiVE
Long-tail Learning | ImageNet-LT | Top-1 Accuracy | 53.1 | DiVE
Generalized Few-Shot Learning | ImageNet-LT | Top-1 Accuracy | 57.12 | RIDE-DiVE
Generalized Few-Shot Learning | ImageNet-LT | Top-1 Accuracy | 53.1 | DiVE

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)
Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning (2025-07-14)
KAT-V1: Kwai-AutoThink Technical Report (2025-07-11)
Towards Collaborative Fairness in Federated Learning Under Imbalanced Covariate Shift (2025-07-11)
SFedKD: Sequential Federated Learning with Discrepancy-Aware Multi-Teacher Knowledge Distillation (2025-07-11)