Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ACE: Ally Complementary Experts for Solving Long-Tailed Recognition in One-Shot

Jiarui Cai, Yizhou Wang, Jenq-Neng Hwang

2021-08-05 · ICCV 2021 · Long-tail Learning

Abstract

One-stage long-tailed recognition methods improve the overall performance in a "seesaw" manner, i.e., they either sacrifice the head's accuracy for better tail classification or elevate the head's accuracy even higher while ignoring the tail. Existing algorithms bypass this trade-off with a multi-stage training process: pre-training on the imbalanced set and fine-tuning on a balanced set. Though they achieve promising performance, not only are they sensitive to the generalizability of the pre-trained model, but they are also not easily integrated into other computer vision tasks like detection and segmentation, where pre-training a classifier alone is not applicable. In this paper, we propose a one-stage long-tailed recognition scheme, ally complementary experts (ACE), where each expert is the most knowledgeable specialist in the sub-set that dominates its training, and is complementary to the other experts in the less-seen categories without being disturbed by what it has never seen. We design a distribution-adaptive optimizer to adjust the learning pace of each expert to avoid over-fitting. Without special bells and whistles, the vanilla ACE outperforms the current one-stage SOTA method by 3-10% on the CIFAR10-LT, CIFAR100-LT, ImageNet-LT and iNaturalist datasets. It is also shown to be the first method to break the "seesaw" trade-off by improving the accuracy of the majority and minority categories simultaneously in only one stage. Code and trained models are at https://github.com/jrcai/ACE.
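The core idea sketched in the abstract — experts that each specialize in a frequency-ordered subset of classes, whose predictions are combined only over the classes each expert has actually seen, with per-expert learning paces scaled by subset size — can be illustrated with a minimal pure-Python sketch. The function names, split points, averaging rule, and learning-rate scaling below are illustrative assumptions for exposition, not the authors' implementation (see the linked repository for that).

```python
def assign_expert_subsets(class_counts, splits):
    """Assign each expert a subset of classes, ordered by training frequency.

    class_counts: per-class sample counts (index = class id).
    splits: start offsets into the frequency-sorted class list; expert k
            is trained on the classes from offset splits[k] onward, so
            later experts specialize in rarer (tail) classes.
    """
    order = sorted(range(len(class_counts)), key=lambda c: -class_counts[c])
    return [set(order[s:]) for s in splits]


def aggregate_logits(per_expert_logits, subsets):
    """Combine expert outputs complementarily: for each class, average
    only the logits of experts that were trained on that class, so an
    expert is never "disturbed by what it has never seen"."""
    num_classes = len(per_expert_logits[0])
    out = []
    for c in range(num_classes):
        vals = [lg[c] for lg, sub in zip(per_expert_logits, subsets) if c in sub]
        out.append(sum(vals) / len(vals))
    return out


def expert_lr_scales(class_counts, subsets):
    """One plausible distribution-adaptive rule (an assumption here):
    scale each expert's learning rate by the fraction of training
    samples its subset covers, slowing down small-subset experts."""
    total = sum(class_counts)
    return [sum(class_counts[c] for c in sub) / total for sub in subsets]
```

For example, with class counts [500, 100, 10] and splits [0, 1, 2], expert 0 covers all three classes, expert 1 the two rarer ones, and expert 2 only the rarest; a class's score then averages only the experts that cover it.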

Results

Task                                | Dataset               | Metric     | Value | Model
Image Classification                | CIFAR-100-LT (ρ=100)  | Error Rate | 50.4  | ACE (4 experts)
Image Classification                | CIFAR-10-LT (ρ=100)   | Error Rate | 18.6  | ACE (4 experts)
Few-Shot Image Classification       | CIFAR-100-LT (ρ=100)  | Error Rate | 50.4  | ACE (4 experts)
Few-Shot Image Classification       | CIFAR-10-LT (ρ=100)   | Error Rate | 18.6  | ACE (4 experts)
Generalized Few-Shot Classification | CIFAR-100-LT (ρ=100)  | Error Rate | 50.4  | ACE (4 experts)
Generalized Few-Shot Classification | CIFAR-10-LT (ρ=100)   | Error Rate | 18.6  | ACE (4 experts)
Long-tail Learning                  | CIFAR-100-LT (ρ=100)  | Error Rate | 50.4  | ACE (4 experts)
Long-tail Learning                  | CIFAR-10-LT (ρ=100)   | Error Rate | 18.6  | ACE (4 experts)
Generalized Few-Shot Learning       | CIFAR-100-LT (ρ=100)  | Error Rate | 50.4  | ACE (4 experts)
Generalized Few-Shot Learning       | CIFAR-10-LT (ρ=100)   | Error Rate | 18.6  | ACE (4 experts)

Related Papers

Mitigating Spurious Correlations with Causal Logit Perturbation (2025-05-21)
LIFT+: Lightweight Fine-Tuning for Long-Tail Learning (2025-04-17)
Improving Visual Prompt Tuning by Gaussian Neighborhood Minimization for Long-Tailed Visual Recognition (2024-10-28)
Learning from Neighbors: Category Extrapolation for Long-Tail Learning (2024-10-21)
Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition (2024-10-08)
AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation (2024-09-30)
Representation Norm Amplification for Out-of-Distribution Detection in Long-Tail Learning (2024-08-20)
LTRL: Boosting Long-tail Recognition via Reflective Learning (2024-07-17)