Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Class-Wise Difficulty-Balanced Loss for Solving Class-Imbalance

Saptarshi Sinha, Hiroki Ohashi, Katsuyuki Nakamura

2020-10-05 · Long-tail Learning
Paper · PDF · Code (official)

Abstract

Class-imbalance is one of the major challenges in real-world datasets, where a few classes (called majority classes) contain far more data samples than the rest (called minority classes). Learning deep neural networks on such datasets leads to performance that is typically biased towards the majority classes. Most prior works try to solve class-imbalance by assigning more weight to the minority classes in various manners (e.g., data re-sampling, cost-sensitive learning). However, we argue that the number of available training samples may not always be a good clue for determining the weighting strategy, because some minority classes might be sufficiently represented even by a small number of training samples. Overweighting samples of such classes can lead to a drop in the model's overall performance. We claim that the 'difficulty' of a class as perceived by the model is more important for determining the weighting. In this light, we propose a novel loss function named Class-wise Difficulty-Balanced loss, or CDB loss, which dynamically distributes weights to each sample according to the difficulty of the class that the sample belongs to. Note that the assigned weights change dynamically, as the 'difficulty' for the model may change with the learning progress. Extensive experiments are conducted on both image (artificially class-imbalanced MNIST, long-tailed CIFAR and ImageNet-LT) and video (EGTEA) datasets. The results show that CDB loss consistently outperforms recently proposed loss functions on class-imbalanced datasets irrespective of the data type (i.e., video or image).
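The weighting scheme described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' official implementation: the difficulty measure (1 minus per-class accuracy), the focusing parameter `tau`, the weight normalization, and all function names are assumptions for the sketch.

```python
import numpy as np

def cdb_weights(class_accuracy, tau=1.0):
    """Class-wise difficulty-balanced weights (sketch).

    Difficulty of class c is taken as 1 - accuracy on class c, so classes
    the model currently finds hard get larger weights. In training these
    accuracies would be re-measured periodically, making the weights dynamic.
    """
    difficulty = 1.0 - np.asarray(class_accuracy, dtype=float)
    w = difficulty ** tau          # tau controls how sharply weights focus on hard classes
    return w * len(w) / w.sum()    # normalize so weights average to 1 across classes

def cdb_cross_entropy(logits, labels, class_accuracy, tau=1.0):
    """Softmax cross-entropy, weighted per sample by its class's difficulty."""
    logits = np.asarray(logits, dtype=float)
    z = logits - logits.max(axis=1, keepdims=True)            # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(labels)), labels]          # per-sample cross-entropy
    w = cdb_weights(class_accuracy, tau)[labels]              # weight by class difficulty
    return (w * nll).mean()
```

With two classes at 90% and 50% accuracy and `tau=1`, samples of the harder class receive five times the weight of the easier one, since the difficulties are 0.5 versus 0.1.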

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Classification | CIFAR-100-LT (ρ=10) | Error Rate | 41.26 | CDB-loss |
| Image Classification | ImageNet-LT | Top-1 Accuracy | 38.5 | CDB-loss (ResNet-10) |
| Image Classification | CIFAR-100-LT (ρ=100) | Error Rate | 57.43 | CDB-loss |
| Image Classification | EGTEA | Average Precision | 63.86 | CDB-loss (3D-ResNeXt101) |
| Image Classification | EGTEA | Average Recall | 66.24 | CDB-loss (3D-ResNeXt101) |
| Few-Shot Image Classification | CIFAR-100-LT (ρ=10) | Error Rate | 41.26 | CDB-loss |
| Few-Shot Image Classification | ImageNet-LT | Top-1 Accuracy | 38.5 | CDB-loss (ResNet-10) |
| Few-Shot Image Classification | CIFAR-100-LT (ρ=100) | Error Rate | 57.43 | CDB-loss |
| Few-Shot Image Classification | EGTEA | Average Precision | 63.86 | CDB-loss (3D-ResNeXt101) |
| Few-Shot Image Classification | EGTEA | Average Recall | 66.24 | CDB-loss (3D-ResNeXt101) |
| Generalized Few-Shot Classification | CIFAR-100-LT (ρ=10) | Error Rate | 41.26 | CDB-loss |
| Generalized Few-Shot Classification | ImageNet-LT | Top-1 Accuracy | 38.5 | CDB-loss (ResNet-10) |
| Generalized Few-Shot Classification | CIFAR-100-LT (ρ=100) | Error Rate | 57.43 | CDB-loss |
| Generalized Few-Shot Classification | EGTEA | Average Precision | 63.86 | CDB-loss (3D-ResNeXt101) |
| Generalized Few-Shot Classification | EGTEA | Average Recall | 66.24 | CDB-loss (3D-ResNeXt101) |
| Long-tail Learning | CIFAR-100-LT (ρ=10) | Error Rate | 41.26 | CDB-loss |
| Long-tail Learning | ImageNet-LT | Top-1 Accuracy | 38.5 | CDB-loss (ResNet-10) |
| Long-tail Learning | CIFAR-100-LT (ρ=100) | Error Rate | 57.43 | CDB-loss |
| Long-tail Learning | EGTEA | Average Precision | 63.86 | CDB-loss (3D-ResNeXt101) |
| Long-tail Learning | EGTEA | Average Recall | 66.24 | CDB-loss (3D-ResNeXt101) |
| Generalized Few-Shot Learning | CIFAR-100-LT (ρ=10) | Error Rate | 41.26 | CDB-loss |
| Generalized Few-Shot Learning | ImageNet-LT | Top-1 Accuracy | 38.5 | CDB-loss (ResNet-10) |
| Generalized Few-Shot Learning | CIFAR-100-LT (ρ=100) | Error Rate | 57.43 | CDB-loss |
| Generalized Few-Shot Learning | EGTEA | Average Precision | 63.86 | CDB-loss (3D-ResNeXt101) |
| Generalized Few-Shot Learning | EGTEA | Average Recall | 66.24 | CDB-loss (3D-ResNeXt101) |

Related Papers

Mitigating Spurious Correlations with Causal Logit Perturbation (2025-05-21)
LIFT+: Lightweight Fine-Tuning for Long-Tail Learning (2025-04-17)
Improving Visual Prompt Tuning by Gaussian Neighborhood Minimization for Long-Tailed Visual Recognition (2024-10-28)
Learning from Neighbors: Category Extrapolation for Long-Tail Learning (2024-10-21)
Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition (2024-10-08)
AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation (2024-09-30)
Representation Norm Amplification for Out-of-Distribution Detection in Long-Tail Learning (2024-08-20)
LTRL: Boosting Long-tail Recognition via Reflective Learning (2024-07-17)