Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition

QiHao Zhao, Chen Jiang, Wei Hu, Fan Zhang, Jun Liu

2023-08-19 · ICCV 2023 · Long-tail Learning
Paper · PDF · Code (official)

Abstract

Recently, multi-expert methods have led to significant improvements in long-tail recognition (LTR). We identify two aspects that need further enhancement to boost LTR: (1) more diverse experts; (2) lower model variance. However, previous methods did not handle these well. To this end, we propose More Diverse experts with Consistency Self-distillation (MDCS) to bridge the gap left by earlier methods. Our MDCS approach consists of two core components: Diversity Loss (DL) and Consistency Self-distillation (CS). In detail, DL promotes diversity among experts by controlling their focus on different categories. To reduce model variance, we employ KL divergence to distill the richer knowledge of weakly augmented instances for the experts' self-distillation. In particular, we design Confident Instance Sampling (CIS) to select correctly classified instances for CS, avoiding biased/noisy knowledge. In our analysis and ablation study, we demonstrate that, compared with previous work, our method effectively increases the diversity of experts, significantly reduces model variance, and improves recognition accuracy. Moreover, the roles of DL and CS are mutually reinforcing and coupled: the diversity of experts benefits from CS, and CS cannot achieve remarkable results without DL. Experiments show our MDCS outperforms the state of the art by 1% to 2% on five popular long-tailed benchmarks, including CIFAR10-LT, CIFAR100-LT, ImageNet-LT, Places-LT, and iNaturalist 2018. The code is available at https://github.com/fistyee/MDCS.
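The consistency self-distillation step described above can be sketched as follows. This is a minimal NumPy illustration of the idea, not the authors' implementation: predictions on a weakly augmented view act as the teacher distribution, a KL divergence is taken against predictions on a strongly augmented view, and Confident Instance Sampling keeps only instances the weak view classifies correctly. The function name, the temperature `tau`, and the masking details are assumptions for illustration; see the official repository for the real training code.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def consistency_self_distillation(logits_weak, logits_strong, labels, tau=2.0):
    """Hypothetical sketch of CS with Confident Instance Sampling (CIS).

    KL(p_weak || p_strong) is averaged only over instances whose
    weak-view prediction matches the ground-truth label, so biased or
    noisy teacher signals are filtered out.
    """
    p_weak = softmax(logits_weak / tau)
    p_strong = softmax(logits_strong / tau)
    # CIS: keep only instances the weak view classifies correctly.
    confident = p_weak.argmax(axis=1) == labels
    if not confident.any():
        return 0.0
    kl = (p_weak * (np.log(p_weak + 1e-12) - np.log(p_strong + 1e-12))).sum(axis=1)
    return float(kl[confident].mean())
```

In a real multi-expert setup this loss would be computed per expert and added to the classification objective; the sketch above only shows the per-batch distillation term.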

Results

Task | Dataset | Metric | Value | Model
Long-tail Learning | CIFAR-10-LT (ρ=50) | Error Rate | 11.7 | MDCS
Long-tail Learning | CIFAR-100-LT (ρ=50) | Error Rate | 39.9 | MDCS
Long-tail Learning | CIFAR-100-LT (ρ=100) | Error Rate | 43.9 | MDCS
Long-tail Learning | ImageNet-LT | Top-1 Accuracy | 61.8 | MDCS (ResNeXt-50)

Related Papers

Mitigating Spurious Correlations with Causal Logit Perturbation (2025-05-21)
LIFT+: Lightweight Fine-Tuning for Long-Tail Learning (2025-04-17)
Improving Visual Prompt Tuning by Gaussian Neighborhood Minimization for Long-Tailed Visual Recognition (2024-10-28)
Learning from Neighbors: Category Extrapolation for Long-Tail Learning (2024-10-21)
Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition (2024-10-08)
AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation (2024-09-30)
Representation Norm Amplification for Out-of-Distribution Detection in Long-Tail Learning (2024-08-20)
LTRL: Boosting Long-tail Recognition via Reflective Learning (2024-07-17)