Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment

Mengke Li, Yiu-ming Cheung, Yang Lu

2023-05-19 · CVPR 2022
Paper · PDF · Code (official)

Abstract

Long-tailed data remains a major challenge for deep neural networks, even though they have achieved great success on balanced data. We observe that vanilla training on long-tailed data with cross-entropy loss makes the instance-rich head classes severely squeeze the spatial distribution of the tail classes, which makes tail-class samples difficult to classify. Furthermore, the original cross-entropy loss can only propagate gradients briefly, because the gradient in softmax form rapidly approaches zero as the logit difference increases. This phenomenon is called softmax saturation. It is unfavorable for training on balanced data, but can be exploited to adjust the validity of samples in long-tailed data, thereby correcting the distorted embedding space that long-tailed problems induce. To this end, this paper proposes Gaussian clouded logit adjustment, which applies Gaussian perturbations of varied amplitude to the logits of different classes. We define the amplitude of the perturbation as the cloud size and assign relatively large cloud sizes to tail classes. A large cloud size reduces softmax saturation, thereby making tail-class samples more active and enlarging their embedding space. To alleviate the classifier's bias, we further propose a class-based effective number sampling strategy with classifier re-training. Extensive experiments on benchmark datasets validate the superior performance of the proposed method. Source code is available at https://github.com/Keke921/GCLLoss.
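The two ideas in the abstract can be sketched in a few lines of NumPy. This is an illustrative approximation, not the paper's exact method: the cloud-size formula (log-ratio of head-class count to each class count) and the effective-number weighting (in the style of Cui et al., with an assumed β = 0.999) are assumptions for demonstration; the official implementation is in the linked repository.

```python
import numpy as np

def cloud_sizes(class_counts):
    # Larger "cloud" (perturbation amplitude) for rarer tail classes.
    # Assumed form: log(n_head / n_j), normalized to [0, 1].
    counts = np.asarray(class_counts, dtype=float)
    sizes = np.log(counts.max() / counts)
    m = sizes.max()
    return sizes / m if m > 0 else sizes

def gcl_logits(logits, sizes, rng):
    # Perturb each class logit with half-Gaussian noise scaled by its
    # cloud size; tail classes get larger perturbations, which keeps
    # their gradients alive despite softmax saturation.
    noise = np.abs(rng.standard_normal(logits.shape))
    return logits - sizes * noise

def softmax_xent(logits, target):
    # Standard softmax cross-entropy on the perturbed logits.
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[target])

def sampling_weights(class_counts, beta=0.999):
    # Class-balanced sampling via the effective number of samples
    # E_n = (1 - beta^n) / (1 - beta); sample inversely to E_n.
    counts = np.asarray(class_counts, dtype=float)
    eff = (1.0 - beta ** counts) / (1.0 - beta)
    w = 1.0 / eff
    return w / w.sum()
```

For example, with class counts `[100, 50, 10]`, `cloud_sizes` assigns 0 to the head class and 1 to the rarest tail class, and `sampling_weights` up-weights the tail class so the re-trained classifier sees a more balanced sample stream.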

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Image Classification | CIFAR-10-LT (ρ=10) | Error Rate | 10.77 | GCLLoss |
| Image Classification | CIFAR-100-LT (ρ=50) | Error Rate | 46.4 | GCL |
| Image Classification | CIFAR-100-LT (ρ=100) | Error Rate | 51.29 | GCL |
| Image Classification | CIFAR-10-LT (ρ=100) | Error Rate | 17.32 | GCL |
| Few-Shot Image Classification | CIFAR-10-LT (ρ=10) | Error Rate | 10.77 | GCLLoss |
| Few-Shot Image Classification | CIFAR-100-LT (ρ=50) | Error Rate | 46.4 | GCL |
| Few-Shot Image Classification | CIFAR-100-LT (ρ=100) | Error Rate | 51.29 | GCL |
| Few-Shot Image Classification | CIFAR-10-LT (ρ=100) | Error Rate | 17.32 | GCL |
| Generalized Few-Shot Classification | CIFAR-10-LT (ρ=10) | Error Rate | 10.77 | GCLLoss |
| Generalized Few-Shot Classification | CIFAR-100-LT (ρ=50) | Error Rate | 46.4 | GCL |
| Generalized Few-Shot Classification | CIFAR-100-LT (ρ=100) | Error Rate | 51.29 | GCL |
| Generalized Few-Shot Classification | CIFAR-10-LT (ρ=100) | Error Rate | 17.32 | GCL |
| Long-tail Learning | CIFAR-10-LT (ρ=10) | Error Rate | 10.77 | GCLLoss |
| Long-tail Learning | CIFAR-100-LT (ρ=50) | Error Rate | 46.4 | GCL |
| Long-tail Learning | CIFAR-100-LT (ρ=100) | Error Rate | 51.29 | GCL |
| Long-tail Learning | CIFAR-10-LT (ρ=100) | Error Rate | 17.32 | GCL |
| Generalized Few-Shot Learning | CIFAR-10-LT (ρ=10) | Error Rate | 10.77 | GCLLoss |
| Generalized Few-Shot Learning | CIFAR-100-LT (ρ=50) | Error Rate | 46.4 | GCL |
| Generalized Few-Shot Learning | CIFAR-100-LT (ρ=100) | Error Rate | 51.29 | GCL |
| Generalized Few-Shot Learning | CIFAR-10-LT (ρ=100) | Error Rate | 17.32 | GCL |