Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Few-Shot Incremental Learning with Continually Evolved Classifiers

Chi Zhang, Nan Song, Guosheng Lin, Yun Zheng, Pan Pan, Yinghui Xu

2021-04-07 · CVPR 2021
Tasks: Few-Shot Class-Incremental Learning · Class-Incremental Learning · Incremental Learning
Paper · PDF · Code

Abstract

Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points, without forgetting knowledge of old classes. The difficulty is that limited data from new classes not only lead to significant overfitting but also exacerbate the notorious catastrophic forgetting problem. Moreover, as training data arrive in sequence in FSCIL, the learned classifier can only provide discriminative information within individual sessions, while FSCIL requires all classes to be involved at evaluation. In this paper, we address the FSCIL problem from two aspects. First, we adopt a simple but effective decoupled learning strategy for representations and classifiers, in which only the classifiers are updated in each incremental session; this avoids knowledge forgetting in the representations. By doing so, we demonstrate that a pre-trained backbone plus a non-parametric class-mean classifier can beat state-of-the-art methods. Second, to make the classifiers learned in individual sessions applicable to all classes, we propose a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation. To enable the learning of CEC, we design a pseudo incremental learning paradigm that episodically constructs a pseudo incremental learning task to optimize the graph parameters by sampling data from the base dataset. Experiments on three popular benchmark datasets, including CIFAR100, miniImageNet, and Caltech-UCSD Birds-200-2011 (CUB200), show that our method significantly outperforms the baselines and sets new state-of-the-art results with remarkable advantages.
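The decoupled strategy in the abstract can be sketched in a few lines: with the backbone frozen, each class is represented by the mean of its embeddings, and an incremental session simply appends prototypes for the new classes. This is a minimal illustration under assumed inputs (pre-extracted embedding arrays); the function names are ours, not the paper's, and the graph-based CEC adaptation is omitted.

```python
import numpy as np

def class_mean_classifier(features, labels):
    """Build a non-parametric classifier from per-class mean embeddings.
    features: (N, D) backbone embeddings (the backbone itself stays frozen).
    labels:   (N,) integer class ids."""
    classes = np.unique(labels)
    prototypes = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # L2-normalize so prediction reduces to cosine similarity
    prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)
    return classes, prototypes

def predict(query, classes, prototypes):
    """Assign each query embedding to its nearest class mean."""
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    sims = q @ prototypes.T  # cosine similarity to every prototype
    return classes[sims.argmax(axis=1)]

# In an incremental session, new classes only append new prototypes;
# existing prototypes and the backbone are never updated, so the old
# classes' knowledge cannot be overwritten.
```

Because nothing learned earlier is modified, catastrophic forgetting is avoided by construction; CEC then refines these per-session classifiers by propagating context between them with a graph model.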

Results

Task                        | Dataset       | Metric           | Value | Model
Continual Learning          | CIFAR-100     | Average Accuracy | 59.53 | CEC
Continual Learning          | CIFAR-100     | Last Accuracy    | 49.14 | CEC
Continual Learning          | mini-ImageNet | Average Accuracy | 57.74 | CEC
Continual Learning          | mini-ImageNet | Last Accuracy    | 47.63 | CEC
Class-Incremental Learning  | CIFAR-100     | Average Accuracy | 59.53 | CEC
Class-Incremental Learning  | CIFAR-100     | Last Accuracy    | 49.14 | CEC
Class-Incremental Learning  | mini-ImageNet | Average Accuracy | 57.74 | CEC
Class-Incremental Learning  | mini-ImageNet | Last Accuracy    | 47.63 | CEC

Related Papers

The Bayesian Approach to Continual Learning: An Overview (2025-07-11)
Balancing the Past and Present: A Coordinated Replay Framework for Federated Class-Incremental Learning (2025-07-10)
Addressing Imbalanced Domain-Incremental Learning through Dual-Balance Collaborative Experts (2025-07-09)
DuET: Dual Incremental Object Detection via Exemplar-Free Task Arithmetic (2025-06-26)
Class-Incremental Learning for Honey Botanical Origin Classification with Hyperspectral Images: A Study with Continual Backpropagation (2025-06-12)
Hyperbolic Dual Feature Augmentation for Open-Environment (2025-06-10)
L3A: Label-Augmented Analytic Adaptation for Multi-Label Class Incremental Learning (2025-06-01)
CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning (2025-05-30)