Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners

Keon-Hee Park, Kyungwoo Song, Gyeong-Moon Park

2024-04-02 · CVPR 2024
Tasks: Few-Shot Class-Incremental Learning, Class Incremental Learning, Transfer Learning, Incremental Learning, Knowledge Distillation
Paper · PDF · Code (official)

Abstract

Few-Shot Class Incremental Learning (FSCIL) is a task that requires a model to learn new classes incrementally without forgetting when only a few samples for each class are given. FSCIL encounters two significant challenges: catastrophic forgetting and overfitting, and these challenges have driven prior studies to primarily rely on shallow models, such as ResNet-18. Even though their limited capacity can mitigate both forgetting and overfitting issues, it leads to inadequate knowledge transfer during few-shot incremental sessions. In this paper, we argue that large models such as vision and language transformers pre-trained on large datasets can be excellent few-shot incremental learners. To this end, we propose a novel FSCIL framework called PriViLege, Pre-trained Vision and Language transformers with prompting functions and knowledge distillation. Our framework effectively addresses the challenges of catastrophic forgetting and overfitting in large models through new pre-trained knowledge tuning (PKT) and two losses: entropy-based divergence loss and semantic knowledge distillation loss. Experimental results show that the proposed PriViLege significantly outperforms the existing state-of-the-art methods by a large margin, e.g., +9.38% in CUB200, +20.58% in CIFAR-100, and +13.36% in miniImageNet. Our implementation code is available at https://github.com/KHU-AGI/PriViLege.
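The exact pre-trained knowledge tuning (PKT), entropy-based divergence, and semantic knowledge distillation losses are defined in the official repository linked above. As a minimal sketch of the general distillation idea such frameworks build on (not the paper's formulation; the temperature and scaling are assumptions), a standard soft-label distillation term in PyTorch looks like this:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Generic soft-label knowledge distillation (Hinton-style):
    KL divergence between temperature-softened teacher and student
    predictions, used to keep the current model close to knowledge
    learned in earlier sessions. PriViLege's semantic variant additionally
    involves language embeddings; see the official code for details."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes are comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2
```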

Results

Task                       | Dataset       | Metric           | Value | Model
Continual Learning         | CUB-200-2011  | Average Accuracy | 79.2  | PriViLege (ViT-L)
Continual Learning         | CUB-200-2011  | Last Accuracy    | 76.43 | PriViLege (ViT-L)
Continual Learning         | CUB-200-2011  | Average Accuracy | 77.5  | PriViLege
Continual Learning         | CUB-200-2011  | Last Accuracy    | 75.08 | PriViLege
Continual Learning         | CIFAR-100     | Average Accuracy | 88.08 | PriViLege
Continual Learning         | CIFAR-100     | Last Accuracy    | 86.06 | PriViLege
Continual Learning         | mini-ImageNet | Average Accuracy | 95.27 | PriViLege
Continual Learning         | mini-ImageNet | Last Accuracy    | 94.1  | PriViLege
Class Incremental Learning | CUB-200-2011  | Average Accuracy | 79.2  | PriViLege (ViT-L)
Class Incremental Learning | CUB-200-2011  | Last Accuracy    | 76.43 | PriViLege (ViT-L)
Class Incremental Learning | CUB-200-2011  | Average Accuracy | 77.5  | PriViLege
Class Incremental Learning | CUB-200-2011  | Last Accuracy    | 75.08 | PriViLege
Class Incremental Learning | CIFAR-100     | Average Accuracy | 88.08 | PriViLege
Class Incremental Learning | CIFAR-100     | Last Accuracy    | 86.06 | PriViLege
Class Incremental Learning | mini-ImageNet | Average Accuracy | 95.27 | PriViLege
Class Incremental Learning | mini-ImageNet | Last Accuracy    | 94.1  | PriViLege
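Average Accuracy and Last Accuracy above are the usual FSCIL summary metrics: the mean test accuracy over the base session plus all few-shot incremental sessions, and the test accuracy after the final session. A minimal sketch of how they are computed (the function name and the example values are hypothetical, not results from the paper):

```python
def fscil_metrics(session_accuracies):
    """Summarize per-session test accuracies from an FSCIL run.

    session_accuracies: accuracies (in %) after the base session and after
    each few-shot incremental session, in order.
    Returns (Average Accuracy over all sessions, Last-session Accuracy).
    """
    average_accuracy = sum(session_accuracies) / len(session_accuracies)
    last_accuracy = session_accuracies[-1]
    return average_accuracy, last_accuracy

# Hypothetical per-session accuracies, for illustration only:
avg_acc, last_acc = fscil_metrics([82.0, 80.1, 78.9, 77.5, 76.4])
print(f"Average Accuracy: {avg_acc:.2f}, Last Accuracy: {last_acc:.2f}")
```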

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
Best Practices for Large-Scale, Pixel-Wise Crop Mapping and Transfer Learning Workflows (2025-07-16)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Robust-Multi-Task Gradient Boosting (2025-07-15)
HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)