
Resolving Task Confusion in Dynamic Expansion Architectures for Class Incremental Learning

Bingchen Huang, Zhineng Chen, Peng Zhou, Jiayin Chen, Zuxuan Wu

2022-12-29 · Class Incremental Learning · Incremental Learning · Knowledge Distillation
Paper · PDF · Code (official)

Abstract

The dynamic expansion architecture is becoming popular in class incremental learning, mainly due to its advantages in alleviating catastrophic forgetting. However, task confusion is not well assessed within this framework: the discrepancy between classes of different tasks is not well learned (inter-task confusion, ITC), and a certain priority is still given to the latest class batch (old-new confusion, ONC). We empirically validate the side effects of these two types of confusion. We then propose a novel solution, Task Correlated Incremental Learning (TCIL), to encourage discriminative and fair feature utilization across tasks. TCIL performs multi-level knowledge distillation to propagate knowledge learned from old tasks to the new one. It establishes information flow paths at both the feature and logit levels, making the learning aware of old classes. In addition, an attention mechanism and classifier re-scoring are applied to generate fairer classification scores. We conduct extensive experiments on the CIFAR100 and ImageNet100 datasets. The results demonstrate that TCIL consistently achieves state-of-the-art accuracy. It mitigates both ITC and ONC while showing advantages in combating catastrophic forgetting, even when no rehearsal memory is reserved.
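
The multi-level distillation described in the abstract can be pictured as two loss terms: one matching intermediate features of the frozen old model, and one matching its softened logits on the old classes. Below is a minimal PyTorch sketch of generic feature- and logit-level distillation; the function name, loss weights, and temperature are illustrative assumptions, not TCIL's exact formulation, and in the dynamic-expansion setting the old model is frozen so its outputs come from a no-grad forward pass.

```python
# Sketch of feature- and logit-level knowledge distillation, as used in
# class incremental learning. Illustrates the general two-level idea only;
# TCIL's actual losses and information-flow paths differ in the details.
import torch
import torch.nn.functional as F

def two_level_kd_loss(old_feats, new_feats, old_logits, new_logits,
                      temperature=2.0, feat_weight=1.0, logit_weight=1.0):
    # Feature-level term: pull the new model's intermediate features
    # toward the frozen old model's features (L2 distance).
    feat_loss = F.mse_loss(new_feats, old_feats.detach())

    # Logit-level term: match softened class distributions on the old
    # classes (standard Hinton-style KD with temperature scaling).
    n_old = old_logits.shape[1]
    soft_targets = F.softmax(old_logits.detach() / temperature, dim=1)
    log_probs = F.log_softmax(new_logits[:, :n_old] / temperature, dim=1)
    logit_loss = F.kl_div(log_probs, soft_targets,
                          reduction="batchmean") * temperature ** 2

    return feat_weight * feat_loss + logit_weight * logit_loss
```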

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Incremental Learning | CIFAR-100 (50 classes + 10 steps of 5 classes) | Average Incremental Accuracy | 73.72 | TCIL |
| Incremental Learning | CIFAR-100 (50 classes + 10 steps of 5 classes) | Average Incremental Accuracy | 73.5 | TCIL-Lite |
| Incremental Learning | CIFAR-100 (50 classes + 5 steps of 10 classes) | Average Incremental Accuracy | 74.88 | TCIL |
| Incremental Learning | CIFAR-100 (50 classes + 5 steps of 10 classes) | Average Incremental Accuracy | 74.3 | TCIL-Lite |
| Incremental Learning | CIFAR-100-B0 (10 steps of 10 classes) | Average Incremental Accuracy | 77.3 | TCIL |
| Incremental Learning | CIFAR-100-B0 (10 steps of 10 classes) | Average Incremental Accuracy | 76.74 | TCIL-Lite |
| Incremental Learning | CIFAR-100-B0 (5 steps of 20 classes) | Average Incremental Accuracy | 77.72 | TCIL |
| Incremental Learning | CIFAR-100-B0 (5 steps of 20 classes) | Average Incremental Accuracy | 76.96 | TCIL-Lite |
| Incremental Learning | ImageNet100 (10 steps) | # Params (M) | 116.54 | TCIL |
| Incremental Learning | ImageNet100 (10 steps) | Average Incremental Accuracy | 77.66 | TCIL |
| Incremental Learning | ImageNet100 (10 steps) | Average Incremental Accuracy (Top-5) | 94.17 | TCIL |
| Incremental Learning | ImageNet100 (10 steps) | Final Accuracy | 67.34 | TCIL |
| Incremental Learning | ImageNet100 (10 steps) | Final Accuracy (Top-5) | 88.84 | TCIL |
| Incremental Learning | ImageNet100 (10 steps) | # Params (M) | 26.36 | TCIL-Lite |
| Incremental Learning | ImageNet100 (10 steps) | Average Incremental Accuracy | 77.5 | TCIL-Lite |
| Incremental Learning | ImageNet100 (10 steps) | Average Incremental Accuracy (Top-5) | 93.6 | TCIL-Lite |
| Incremental Learning | ImageNet100 (10 steps) | Final Accuracy | 67.3 | TCIL-Lite |
| Incremental Learning | ImageNet100 (10 steps) | Final Accuracy (Top-5) | 87.94 | TCIL-Lite |
| Incremental Learning | CIFAR-100 (50 classes + 2 steps of 25 classes) | Average Incremental Accuracy | 76.42 | TCIL |
| Incremental Learning | CIFAR-100 (50 classes + 2 steps of 25 classes) | Average Incremental Accuracy | 74.95 | TCIL-Lite |
| Incremental Learning | CIFAR-100-B0 (20 steps of 5 classes) | Average Incremental Accuracy | 75.47 | TCIL-Lite |
| Incremental Learning | CIFAR-100-B0 (20 steps of 5 classes) | Average Incremental Accuracy | 75.11 | TCIL |
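
For context, "Average Incremental Accuracy" in the table above is conventionally the mean of the test accuracies evaluated after each incremental step (the protocol popularized by iCaRL). A minimal sketch under that assumption:

```python
def average_incremental_accuracy(step_accuracies):
    """Mean of the accuracies evaluated after each incremental step.

    Example: a 10-step run yields 10 per-step accuracies; their mean
    is the single summary number reported in the results table.
    Assumes the conventional iCaRL-style definition of the metric.
    """
    assert step_accuracies, "need at least one step accuracy"
    return sum(step_accuracies) / len(step_accuracies)
```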

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)
Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning (2025-07-14)
The Bayesian Approach to Continual Learning: An Overview (2025-07-11)
KAT-V1: Kwai-AutoThink Technical Report (2025-07-11)
Towards Collaborative Fairness in Federated Learning Under Imbalanced Covariate Shift (2025-07-11)