
Forward Compatible Few-Shot Class-Incremental Learning

Da-Wei Zhou, Fu-Yun Wang, Han-Jia Ye, Liang Ma, Shiliang Pu, De-Chuan Zhan

2022-03-14 · CVPR 2022 · Tasks: Few-Shot Class-Incremental Learning, Class-Incremental Learning, Incremental Learning
Paper · PDF · Code (official)

Abstract

Novel classes frequently arise in our dynamically changing world, e.g., new users in an authentication system, and a machine learning model should recognize new classes without forgetting old ones. This scenario becomes more challenging when instances of the new classes are scarce, a setting called few-shot class-incremental learning (FSCIL). Current methods handle incremental learning retrospectively by making the updated model similar to the old one. By contrast, we suggest learning prospectively to prepare for future updates, and propose ForwArd Compatible Training (FACT) for FSCIL. Forward compatibility requires future new classes to be easily incorporated into the current model based on current-stage data, and we seek to realize it by reserving embedding space for future new classes. In detail, we assign virtual prototypes to squeeze the embeddings of known classes and reserve space for new ones. Besides, we forecast possible new classes and prepare for the updating process. The virtual prototypes, acting as proxies scattered across the embedding space, allow the model to accept possible future updates and help build a stronger classifier during inference. FACT efficiently incorporates new classes with forward compatibility and meanwhile resists forgetting of old ones. Extensive experiments validate FACT's state-of-the-art performance. Code is available at: https://github.com/zhoudw-zdw/CVPR22-Fact
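
The core idea described in the abstract (reserving embedding space through virtual prototypes) can be sketched in a few lines. The snippet below is an illustrative simplification, not the authors' implementation: the classifier head, the cosine scaling factor, the pseudo-target reserving term, and the loss weight are all assumptions made for this sketch; see the official repository linked above for the actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VirtualPrototypeClassifier(nn.Module):
    """Cosine classifier holding prototypes for the currently known classes
    plus extra "virtual" prototypes that reserve space for future classes."""

    def __init__(self, feat_dim: int, num_known: int, num_virtual: int):
        super().__init__()
        self.known = nn.Parameter(torch.randn(num_known, feat_dim))
        self.virtual = nn.Parameter(torch.randn(num_virtual, feat_dim))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        protos = torch.cat([self.known, self.virtual], dim=0)
        # Scaled cosine-similarity logits over known + virtual prototypes.
        return 16.0 * F.normalize(feats, dim=-1) @ F.normalize(protos, dim=-1).T


def forward_compatible_loss(logits: torch.Tensor, targets: torch.Tensor,
                            num_known: int, reserve_weight: float = 0.5) -> torch.Tensor:
    # Cross-entropy over ALL prototypes: softmax competition with the virtual
    # prototypes squeezes known-class embeddings into tighter regions.
    ce = F.cross_entropy(logits, targets)
    # Simplified reserving term: mask out the ground-truth class and push each
    # instance toward its best-matching virtual prototype, so the reserved
    # regions stay reachable for classes that arrive later.
    pseudo = logits[:, num_known:].argmax(dim=1) + num_known
    masked = logits.masked_fill(F.one_hot(targets, logits.size(1)).bool(), -1e9)
    return ce + reserve_weight * F.cross_entropy(masked, pseudo)


# Toy usage: 64-dim features, 60 base classes, 40 reserved virtual prototypes.
head = VirtualPrototypeClassifier(feat_dim=64, num_known=60, num_virtual=40)
feats = torch.randn(8, 64)                # stand-in for backbone features
labels = torch.randint(0, 60, (8,))
loss = forward_compatible_loss(head(feats), labels, num_known=60)
loss.backward()
```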

Results

Task                       | Dataset       | Metric           | Value | Model
Continual Learning         | CIFAR-100     | Average Accuracy | 62.24 | FACT
Continual Learning         | CIFAR-100     | Last Accuracy    | 52.1  | FACT
Continual Learning         | mini-ImageNet | Average Accuracy | 60.7  | FACT
Continual Learning         | mini-ImageNet | Last Accuracy    | 50.49 | FACT
Class-Incremental Learning | CIFAR-100     | Average Accuracy | 62.24 | FACT
Class-Incremental Learning | CIFAR-100     | Last Accuracy    | 52.1  | FACT
Class-Incremental Learning | mini-ImageNet | Average Accuracy | 60.7  | FACT
Class-Incremental Learning | mini-ImageNet | Last Accuracy    | 50.49 | FACT
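
The two metrics follow the usual (few-shot) class-incremental protocol: after each incremental session, the model is evaluated on all classes seen so far; "Average Accuracy" is the mean of these per-session accuracies, and "Last Accuracy" is the accuracy after the final session. A minimal sketch of that convention, using made-up numbers rather than the reported results:

```python
# Assuming acc[i] is the test accuracy over all classes seen so far after
# incremental session i (session 0 = base session). Illustrative values only.
acc = [74.6, 72.1, 69.8, 67.0, 64.6, 62.2, 59.7, 57.0, 52.1]

average_accuracy = sum(acc) / len(acc)  # "Average Accuracy": mean over all sessions
last_accuracy = acc[-1]                 # "Last Accuracy": accuracy after the final session
```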

Related Papers

The Bayesian Approach to Continual Learning: An Overview (2025-07-11)
Balancing the Past and Present: A Coordinated Replay Framework for Federated Class-Incremental Learning (2025-07-10)
Addressing Imbalanced Domain-Incremental Learning through Dual-Balance Collaborative Experts (2025-07-09)
DuET: Dual Incremental Object Detection via Exemplar-Free Task Arithmetic (2025-06-26)
Class-Incremental Learning for Honey Botanical Origin Classification with Hyperspectral Images: A Study with Continual Backpropagation (2025-06-12)
Hyperbolic Dual Feature Augmentation for Open-Environment (2025-06-10)
L3A: Label-Augmented Analytic Adaptation for Multi-Label Class Incremental Learning (2025-06-01)
CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning (2025-05-30)