
Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks

Da-Wei Zhou, Han-Jia Ye, Liang Ma, Di Xie, Shiliang Pu, De-Chuan Zhan

2022-03-31 · Meta-Learning · Few-Shot Class-Incremental Learning · Class-Incremental Learning · Incremental Learning

Paper · PDF · Code (official)

Abstract

New classes arise frequently in our ever-changing world, e.g., emerging topics in social media and new types of products in e-commerce. A model should recognize new classes while maintaining discriminability over old classes. Under severe circumstances, only limited novel instances are available to incrementally update the model. The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL). In this work, we propose a new paradigm for FSCIL based on meta-learning, LearnIng Multi-phase Incremental Tasks (LIMIT), which synthesizes fake FSCIL tasks from the base dataset. The data format of the fake tasks is consistent with that of the 'real' incremental tasks, so a generalizable feature space for unseen tasks can be built through meta-learning. In addition, LIMIT constructs a transformer-based calibration module, which calibrates the old-class classifiers and new-class prototypes into the same scale and fills in the semantic gap. The calibration module also adaptively contextualizes instance-specific embeddings with a set-to-set function. LIMIT efficiently adapts to new classes while resisting forgetting of old classes. Experiments on three benchmark datasets (CIFAR100, miniImageNet, and CUB200) and a large-scale dataset, ImageNet ILSVRC2012, validate that LIMIT achieves state-of-the-art performance.
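The core idea of synthesizing fake FSCIL tasks can be sketched as episodic sampling from the base dataset: pick one group of classes to play the role of the base session and further groups to play the role of few-shot incremental sessions. The sketch below is a minimal, hypothetical illustration of that sampling scheme (all names and parameters are assumptions, not the authors' implementation):

```python
import random

def sample_fake_incremental_task(base_data, n_base_classes=5, n_new_classes=5,
                                 shots=5, n_phases=2, seed=None):
    """Sample one fake multi-phase incremental task (episode).

    base_data: dict mapping class label -> list of instances.
    Phase 0 mimics the base session; later phases mimic few-shot
    incremental sessions, each introducing disjoint new classes.
    Returns a list of phases, each a dict {class: support instances}.
    """
    rng = random.Random(seed)
    n_total = n_base_classes + n_new_classes * (n_phases - 1)
    classes = rng.sample(sorted(base_data), n_total)
    phases, cursor = [], 0
    for phase in range(n_phases):
        k = n_base_classes if phase == 0 else n_new_classes
        phase_classes = classes[cursor:cursor + k]
        cursor += k
        # Few-shot support set: `shots` instances per class in this phase.
        support = {c: rng.sample(base_data[c], shots) for c in phase_classes}
        phases.append(support)
    return phases
```

Because each sampled episode has the same format as a real incremental task (a base session followed by disjoint few-shot sessions), a model meta-trained over many such episodes can learn a feature space that transfers to the unseen incremental sessions at test time.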

Results

Task                        | Dataset       | Metric           | Value | Model
Continual Learning          | CIFAR-100     | Average Accuracy | 61.85 | LIMIT
Continual Learning          | CIFAR-100     | Last Accuracy    | 51.23 | LIMIT
Continual Learning          | mini-ImageNet | Average Accuracy | 59.06 | LIMIT
Continual Learning          | mini-ImageNet | Last Accuracy    | 49.19 | LIMIT
Class-Incremental Learning  | CIFAR-100     | Average Accuracy | 61.85 | LIMIT
Class-Incremental Learning  | CIFAR-100     | Last Accuracy    | 51.23 | LIMIT
Class-Incremental Learning  | mini-ImageNet | Average Accuracy | 59.06 | LIMIT
Class-Incremental Learning  | mini-ImageNet | Last Accuracy    | 49.19 | LIMIT
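A common convention in FSCIL benchmarks (assumed here, since the table does not define its metrics) is that Average Accuracy is the mean of the test accuracies measured after every incremental session, while Last Accuracy is the accuracy after the final session. A minimal sketch of that summary:

```python
def summarize_sessions(session_accs):
    """Summarize one run over incremental sessions.

    session_accs: test accuracy after each session, in order.
    Returns the mean over all sessions ("Average Accuracy")
    and the final session's accuracy ("Last Accuracy").
    """
    average = round(sum(session_accs) / len(session_accs), 2)
    return {"average": average, "last": session_accs[-1]}
```

Reporting both numbers matters because a method can hold a high average by excelling on early sessions while still forgetting badly by the end; the gap between the two columns above (e.g., 61.85 vs. 51.23 on CIFAR-100) reflects that accumulated degradation.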

Related Papers

Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
Imbalanced Regression Pipeline Recommendation (2025-07-16)
CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)
Mixture of Experts in Large Language Models (2025-07-15)
Iceberg: Enhancing HLS Modeling with Synthetic Data (2025-07-14)
Meta-Reinforcement Learning for Fast and Data-Efficient Spectrum Allocation in Dynamic Wireless Networks (2025-07-13)
Geo-ORBIT: A Federated Digital Twin Framework for Scene-Adaptive Lane Geometry Detection (2025-07-11)
The Bayesian Approach to Continual Learning: An Overview (2025-07-11)