Few-shot Tuning of Foundation Models for Class-incremental Learning

Shuvendu Roy, Elham Dolatabadi, Arash Afkanpour, Ali Etemad

2024-05-26 · Continual Learning · Few-Shot Class-Incremental Learning · Class Incremental Learning · class-incremental learning · Incremental Learning
Paper · PDF · Code (official)

Abstract

For the first time, we explore few-shot tuning of vision foundation models for class-incremental learning. Unlike existing few-shot class-incremental learning (FSCIL) methods, which train an encoder on a base session to ensure forward compatibility for future continual learning, foundation models are generally trained on large unlabelled data without such considerations. This renders prior methods from traditional FSCIL incompatible with FSCIL on foundation models. To this end, we propose Consistency-guided Asynchronous Contrastive Tuning (CoACT), a new approach to continually tune foundation models for new classes in few-shot settings. CoACT comprises three components: (i) asynchronous contrastive tuning, which learns new classes by including LoRA modules in the pre-trained encoder, while enforcing consistency between two asynchronous encoders; (ii) controlled fine-tuning, which facilitates effective tuning of a subset of the foundation model; and (iii) consistency-guided incremental tuning, which enforces additional regularization during later sessions to reduce forgetting of the learned classes. We perform an extensive study on 16 diverse datasets and demonstrate the effectiveness of CoACT, which outperforms the best baseline method by 2.47% on average and by up to 12.52% on individual datasets. Additionally, CoACT shows reduced forgetting and improved robustness in low-shot experiments. As an added bonus, CoACT shows up to 13.5% improvement in standard FSCIL over the current SOTA on benchmark evaluations. We make our code publicly available at https://github.com/ShuvenduRoy/CoACT-FSCIL.
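The abstract describes two concrete mechanisms that a short sketch can make more tangible: LoRA modules inserted into a frozen pre-trained encoder, and a consistency objective between two asynchronously updated encoders. The PyTorch snippet below is a minimal illustration of those two ideas only; the class and function names, the EMA-style asynchronous teacher, and all hyperparameters are assumptions made for this page, not the authors' implementation (see the linked repository for the official code).

```python
# Minimal sketch, assuming a LoRA-wrapped linear layer and an EMA teacher as the
# "asynchronous" second encoder. Illustrative only; not the CoACT implementation.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRALinear(nn.Module):
    """A frozen pre-trained linear layer plus a trainable low-rank (LoRA) update."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False              # keep the foundation weights frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)       # start as a zero update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, momentum: float = 0.999):
    """Update the slowly moving ("asynchronous") teacher as an EMA of the student."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)


def consistency_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between the two encoders' normalized features."""
    s = F.normalize(student_feat, dim=-1)
    t = F.normalize(teacher_feat, dim=-1)
    return (2.0 - 2.0 * (s * t).sum(dim=-1)).mean()


if __name__ == "__main__":
    # Toy usage: a flattening encoder whose projection is wrapped with LoRA,
    # plus a lagging copy of it acting as the asynchronous teacher.
    encoder = nn.Sequential(nn.Flatten(), LoRALinear(nn.Linear(3 * 32 * 32, 256)))
    teacher = copy.deepcopy(encoder)

    images = torch.randn(8, 3, 32, 32)
    loss = consistency_loss(encoder(images), teacher(images).detach())
    loss.backward()                              # gradients flow only into the LoRA parameters
    ema_update(teacher, encoder)                 # the teacher trails the student
```

Zero-initializing lora_b keeps the wrapped layer identical to the frozen pre-trained layer at the start of tuning, so adaptation begins from the foundation model's original behavior.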

Results

Task | Dataset | Metric | Value | Model
Continual Learning | CUB-200-2011 | Last Accuracy (%) | 81.19 | CoACT
Continual Learning | CIFAR-100 | Last Accuracy (%) | 84.63 | CoACT
Continual Learning | mini-ImageNet | Last Accuracy (%) | 96.24 | CoACT
Class Incremental Learning | CUB-200-2011 | Last Accuracy (%) | 81.19 | CoACT
Class Incremental Learning | CIFAR-100 | Last Accuracy (%) | 84.63 | CoACT
Class Incremental Learning | mini-ImageNet | Last Accuracy (%) | 96.24 | CoACT
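For context, "Last Accuracy" in (few-shot) class-incremental learning benchmarks typically means accuracy over all classes seen so far, measured after the final incremental session. The helper below is a hypothetical sketch of that reading, not the paper's evaluation code.

```python
from typing import Sequence


def last_accuracy(predictions: Sequence[int], labels: Sequence[int]) -> float:
    """Accuracy (%) on the test set covering all seen classes after the last session.

    Hypothetical helper; the defining detail of the metric is *when* it is
    measured (after the final session, over every class seen so far), not the
    arithmetic itself.
    """
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return 100.0 * correct / len(labels)
```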

Related Papers

RegCL: Continual Adaptation of Segment Anything Model via Model Merging (2025-07-16)
Information-Theoretic Generalization Bounds of Replay-based Continual Learning (2025-07-16)
PROL: Rehearsal Free Continual Learning in Streaming Data via Prompt Online Learning (2025-07-16)
Fast Last-Iterate Convergence of SGD in the Smooth Interpolation Regime (2025-07-15)
A Neural Network Model of Complementary Learning Systems: Pattern Separation and Completion for Continual Learning (2025-07-15)
LifelongPR: Lifelong knowledge fusion for point cloud place recognition based on replay and prompt learning (2025-07-14)
Overcoming catastrophic forgetting in neural networks (2025-07-14)
Continual Reinforcement Learning by Planning with Online World Models (2025-07-12)