Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Rethinking the Value of Labels for Improving Class-Imbalanced Learning

Yuzhe Yang, Zhi Xu

2020-06-13 · NeurIPS 2020 · Long-tail Learning
Paper · PDF · Code (official)

Abstract

Real-world data often exhibits long-tailed distributions with heavy class imbalance, posing great challenges for deep recognition models. We identify a persisting dilemma on the value of labels in the context of imbalanced learning: on the one hand, supervision from labels typically leads to better results than its unsupervised counterparts; on the other hand, heavily imbalanced data naturally incurs "label bias" in the classifier, where the decision boundary can be drastically altered by the majority classes. In this work, we systematically investigate these two facets of labels. We demonstrate, theoretically and empirically, that class-imbalanced learning can significantly benefit in both semi-supervised and self-supervised manners. Specifically, we confirm that (1) positively, imbalanced labels are valuable: given more unlabeled data, the original labels can be leveraged with the extra data to reduce label bias in a semi-supervised manner, which greatly improves the final classifier; (2) negatively however, we argue that imbalanced labels are not always useful: classifiers that are first pre-trained in a self-supervised manner consistently outperform their corresponding baselines. Extensive experiments on large-scale imbalanced datasets verify our theoretically grounded strategies, showing superior performance over previous state-of-the-arts. Our intriguing findings highlight the need to rethink the usage of imbalanced labels in realistic long-tailed tasks. Code is available at https://github.com/YyzHarry/imbalanced-semi-self.
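Strategy (1) in the abstract is, at its core, pseudo-labeling: fit a classifier on the imbalanced labeled set, use it to label an extra unlabeled pool, and retrain on the union. A minimal sketch of that loop, using a toy 1-D nearest-centroid classifier on synthetic data in place of a deep network (the classifier, data, and all names here are illustrative, not from the paper's code):

```python
import random

# Sketch of semi-supervised strategy (1): train on imbalanced labels,
# pseudo-label extra unlabeled data, then retrain on the union.
# A 1-D nearest-centroid classifier stands in for a deep model.

random.seed(0)

def sample(center, n):
    return [random.gauss(center, 1.0) for _ in range(n)]

# Imbalanced labeled set: 500 majority (class 0), 25 minority (class 1).
X_lab = sample(-3.0, 500) + sample(3.0, 25)
y_lab = [0] * 500 + [1] * 25

# Extra unlabeled pool drawn from both classes.
X_unl = sample(-3.0, 500) + sample(3.0, 500)

def fit_centroids(X, y):
    # Per-class mean of the (1-D) features.
    means = []
    for c in (0, 1):
        vals = [x for x, lab in zip(X, y) if lab == c]
        means.append(sum(vals) / len(vals))
    return means

def predict(centroids, X):
    # Assign each point to the nearest class centroid.
    return [min((0, 1), key=lambda c: abs(x - centroids[c])) for x in X]

# Step 1: fit on the imbalanced labels alone.
centroids = fit_centroids(X_lab, y_lab)

# Step 2: pseudo-label the pool, then retrain on labeled + pseudo-labeled.
y_pseudo = predict(centroids, X_unl)
centroids = fit_centroids(X_lab + X_unl, y_lab + y_pseudo)

# Evaluate on a balanced test set.
X_test = sample(-3.0, 200) + sample(3.0, 200)
y_test = [0] * 200 + [1] * 200
acc = sum(p == t for p, t in zip(predict(centroids, X_test), y_test)) / len(y_test)
print(f"balanced test accuracy: {acc:.2f}")
```

The pseudo-labeled pool pulls the minority-class statistics toward their true values, which is the "reducing label bias" effect the abstract describes; the paper's actual experiments do this with deep networks on CIFAR-LT and ImageNet-LT.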

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Image Classification | CIFAR-10-LT (ρ=10) | Error Rate | 11.47 | LDAM-DRW + SSP |
| Image Classification | CIFAR-10-LT (ρ=100) | Error Rate | 22.17 | LDAM-DRW + SSP |
| Image Classification | CIFAR-100-LT (ρ=10) | Error Rate | 41.09 | LDAM-DRW + SSP |
| Image Classification | CIFAR-100-LT (ρ=50) | Error Rate | 52.89 | LDAM-DRW + SSP |
| Image Classification | CIFAR-100-LT (ρ=100) | Error Rate | 56.57 | LDAM-DRW + SSP |
| Image Classification | ImageNet-LT | Top-1 Accuracy | 51.3 | cRT + SSP |

The same six results are also cross-listed on the Long-tail Learning, Few-Shot Image Classification, Generalized Few-Shot Classification, and Generalized Few-Shot Learning leaderboards.
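The ρ in the dataset names is the imbalance ratio: the sample count of the most frequent class divided by that of the least frequent. Long-tailed CIFAR variants are commonly built with an exponentially decaying per-class profile, n_i = n_max · ρ^(−i/(C−1)); a sketch under that assumption (the paper's exact construction may differ in detail):

```python
# Per-class sample counts for a long-tailed split with imbalance ratio rho:
# counts decay exponentially from n_max (head class) to n_max / rho (tail class).

def long_tail_counts(n_max, num_classes, rho):
    """n_i = n_max * rho ** (-i / (C - 1)) for class index i = 0..C-1."""
    return [round(n_max * rho ** (-i / (num_classes - 1)))
            for i in range(num_classes)]

counts = long_tail_counts(n_max=5000, num_classes=10, rho=100)
print(counts)
print(counts[0] / counts[-1])  # head/tail ratio: 100.0
```

With n_max = 5000 and ρ = 100, the tail class keeps only 50 of its 5000 training images, which is what makes the CIFAR-10-LT (ρ=100) rows above so much harder than the ρ=10 ones.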

Related Papers

- Mitigating Spurious Correlations with Causal Logit Perturbation (2025-05-21)
- LIFT+: Lightweight Fine-Tuning for Long-Tail Learning (2025-04-17)
- Improving Visual Prompt Tuning by Gaussian Neighborhood Minimization for Long-Tailed Visual Recognition (2024-10-28)
- Learning from Neighbors: Category Extrapolation for Long-Tail Learning (2024-10-21)
- Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition (2024-10-08)
- AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation (2024-09-30)
- Representation Norm Amplification for Out-of-Distribution Detection in Long-Tail Learning (2024-08-20)
- LTRL: Boosting Long-tail Recognition via Reflective Learning (2024-07-17)