Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Inducing Neural Collapse in Imbalanced Learning: Do We Really Need a Learnable Classifier at the End of Deep Neural Network?

Yibo Yang, Shixiang Chen, Xiangtai Li, Liang Xie, Zhouchen Lin, Dacheng Tao

2022-03-17 · Image Classification · Long-tail Learning · Classification
Paper · PDF · Code (official)

Abstract

Modern deep neural networks for classification usually jointly learn a backbone for representation and a linear classifier that outputs the logit of each class. A recent study has shown a phenomenon called neural collapse: at the terminal phase of training on a balanced dataset, the within-class means of features and the classifier vectors converge to the vertices of a simplex equiangular tight frame (ETF). Since the ETF geometric structure maximally separates the pair-wise angles of all classes in the classifier, it is natural to ask: why spend effort learning a classifier when its optimal geometric structure is known in advance? In this paper, we study the potential of learning a neural network for classification with the classifier randomly initialized as an ETF and fixed during training. Our analytical work based on the layer-peeled model indicates that feature learning with a fixed ETF classifier naturally leads to the neural collapse state, even when the dataset is imbalanced among classes. We further show that in this case the cross-entropy (CE) loss is not necessary and can be replaced by a simple squared loss that shares the same global optimality but enjoys a better convergence property. Our experimental results show that our method brings significant improvements with faster convergence on multiple imbalanced datasets.
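The fixed classifier described in the abstract is easy to reproduce. Below is a minimal PyTorch sketch, not the authors' released code: simplex_etf builds the standard K-vertex simplex ETF from a random partial orthogonal matrix, ETFClassifier keeps it frozen as a buffer, and squared_loss is a simplified stand-in for the paper's squared (dot-regression) loss. All three names are ours, and the exact loss scaling used in the paper may differ.

    import torch
    import torch.nn.functional as F

    def simplex_etf(num_classes: int, feat_dim: int) -> torch.Tensor:
        """d x K simplex ETF: K unit vectors with equal pairwise cosine -1/(K-1).

        Requires feat_dim >= num_classes.
        """
        K = num_classes
        # Random partial orthogonal matrix U (d x K) with U^T U = I_K.
        U, _ = torch.linalg.qr(torch.randn(feat_dim, K))
        # Standard ETF form: sqrt(K/(K-1)) * U @ (I_K - (1/K) * 11^T).
        return (K / (K - 1)) ** 0.5 * U @ (torch.eye(K) - torch.ones(K, K) / K)

    class ETFClassifier(torch.nn.Module):
        """Linear classifier fixed as a random simplex ETF (nothing to learn)."""

        def __init__(self, feat_dim: int, num_classes: int):
            super().__init__()
            # A buffer, not a Parameter, so the optimizer never updates it.
            self.register_buffer("weight", simplex_etf(num_classes, feat_dim))

        def forward(self, features: torch.Tensor) -> torch.Tensor:
            # l2-normalized features dotted with the fixed ETF directions.
            return F.normalize(features, dim=1) @ self.weight

    def squared_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Simplified squared loss on the true-class logit only (an assumption;
        # the paper's dot-regression loss may scale terms differently).
        true_logit = logits.gather(1, target.unsqueeze(1)).squeeze(1)
        return 0.5 * (true_logit - 1.0).pow(2).mean()

During training only the backbone is optimized; since the classifier's geometry is already the optimum that neural collapse converges to, the feature extractor is the only part that needs to move, which is what the paper's layer-peeled analysis exploits.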

Results

Task                                | Dataset              | Metric     | Value | Model
Image Classification                | CIFAR-100-LT (ρ=100) | Error Rate | 54.7  | ETF Classifier + DR (ResNet)
Image Classification                | CIFAR-10-LT (ρ=100)  | Error Rate | 23.5  | ETF Classifier + DR (ResNet)
Few-Shot Image Classification       | CIFAR-100-LT (ρ=100) | Error Rate | 54.7  | ETF Classifier + DR (ResNet)
Few-Shot Image Classification       | CIFAR-10-LT (ρ=100)  | Error Rate | 23.5  | ETF Classifier + DR (ResNet)
Generalized Few-Shot Classification | CIFAR-100-LT (ρ=100) | Error Rate | 54.7  | ETF Classifier + DR (ResNet)
Generalized Few-Shot Classification | CIFAR-10-LT (ρ=100)  | Error Rate | 23.5  | ETF Classifier + DR (ResNet)
Long-tail Learning                  | CIFAR-100-LT (ρ=100) | Error Rate | 54.7  | ETF Classifier + DR (ResNet)
Long-tail Learning                  | CIFAR-10-LT (ρ=100)  | Error Rate | 23.5  | ETF Classifier + DR (ResNet)
Generalized Few-Shot Learning       | CIFAR-100-LT (ρ=100) | Error Rate | 54.7  | ETF Classifier + DR (ResNet)
Generalized Few-Shot Learning       | CIFAR-10-LT (ρ=100)  | Error Rate | 23.5  | ETF Classifier + DR (ResNet)

Related Papers

Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
Safeguarding Federated Learning-based Road Condition Classification (2025-07-16)
Hashed Watermark as a Filter: Defeating Forging and Overwriting Attacks in Weight-based Neural Network Watermarking (2025-07-15)