Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


D3Former: Debiased Dual Distilled Transformer for Incremental Learning

Abdelrahman Mohamed, Rushali Grandhe, K J Joseph, Salman Khan, Fahad Khan

2022-07-25 · Continual Learning · class-incremental learning · Incremental Learning
Paper · PDF · Code (official)

Abstract

In the class-incremental learning (CIL) setting, groups of classes are introduced to a model in each learning phase. The goal is to learn a unified model that performs well on all the classes observed so far. Given the recent popularity of Vision Transformers (ViTs) in conventional classification settings, an interesting question is to study their continual learning behaviour. In this work, we develop a Debiased Dual Distilled Transformer for CIL, dubbed $\textrm{D}^3\textrm{Former}$. The proposed model leverages a hybrid nested ViT design to ensure data efficiency and scalability to small as well as large datasets. In contrast to a recent ViT-based CIL approach, our $\textrm{D}^3\textrm{Former}$ does not dynamically expand its architecture when new tasks are learned and remains suitable for a large number of incremental tasks. The improved CIL behaviour of $\textrm{D}^3\textrm{Former}$ owes to two fundamental changes to the ViT design. First, we treat incremental learning as a long-tail classification problem, where the majority samples from new classes vastly outnumber the limited exemplars available for old classes. To avoid bias against the minority old classes, we propose to dynamically adjust logits to emphasize retaining the representations relevant to old tasks. Second, we propose to preserve the configuration of spatial attention maps as learning progresses across tasks. This helps reduce catastrophic forgetting by constraining the model to retain its attention on the most discriminative regions. $\textrm{D}^3\textrm{Former}$ obtains favorable results on incremental versions of the CIFAR-100, MNIST, SVHN, and ImageNet datasets. Code is available at https://tinyurl.com/d3former
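The two ingredients described above (logit adjustment to counter the bias toward abundant new classes, and distillation of spatial attention maps across phases) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the function names, tensor shapes, and hyperparameters (`class_counts`, `tau`, `lambda_attn`) are assumptions chosen for illustration only.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits, targets, class_counts, tau=1.0):
    """Cross-entropy with a logit-adjustment offset for class imbalance.

    Adds log class-prior offsets to the logits so that the abundant new
    classes do not overwhelm the few stored exemplars of old classes
    (a standard long-tail recipe; hyperparameters are illustrative).
    """
    priors = class_counts.float() / class_counts.sum()       # (num_classes,)
    adjusted = logits + tau * torch.log(priors + 1e-12)      # broadcast over batch
    return F.cross_entropy(adjusted, targets)

def attention_distillation(attn_new, attn_old):
    """Penalize drift in spatial attention maps between phases.

    `attn_new` / `attn_old` are attention tensors from the current model and
    the frozen previous-phase model, e.g. shape (batch, heads, tokens, tokens).
    Matching the normalized maps encourages the model to keep attending to the
    same discriminative regions, reducing forgetting.
    """
    new = F.normalize(attn_new.flatten(1), dim=1)
    old = F.normalize(attn_old.flatten(1), dim=1)
    return (new - old).pow(2).sum(dim=1).mean()

def cil_loss(logits, targets, class_counts, attn_new, attn_old, lambda_attn=0.1):
    """Combined objective for one incremental phase (weighting is an assumption)."""
    return logit_adjusted_ce(logits, targets, class_counts) + \
           lambda_attn * attention_distillation(attn_new, attn_old)
```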

Results

Task                 | Dataset                                        | Metric                       | Value | Model
Incremental Learning | CIFAR-100 - 50 classes + 10 steps of 5 classes | Average Incremental Accuracy | 70.94 | D3Former
Incremental Learning | CIFAR-100 - 50 classes + 5 steps of 10 classes | Average Incremental Accuracy | 72.23 | D3Former
Incremental Learning | CIFAR-100 - 50 classes + 25 steps of 2 classes | Average Incremental Accuracy | 68.68 | D3Former
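For context, Average Incremental Accuracy is conventionally the mean of the top-1 accuracy on all classes seen so far, evaluated after each learning phase (including the initial phase). A minimal sketch under that convention; the numbers in the usage line are placeholders, not results from the paper.

```python
def average_incremental_accuracy(per_phase_acc):
    """Mean of the accuracy on all seen classes, measured after every phase."""
    return sum(per_phase_acc) / len(per_phase_acc)

# e.g. 50 base classes followed by 5 steps of 10 classes -> 6 evaluations
# (placeholder accuracy values, for illustration only)
print(average_incremental_accuracy([78.1, 75.0, 73.2, 71.5, 69.8, 66.9]))
```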

Related Papers

RegCL: Continual Adaptation of Segment Anything Model via Model Merging (2025-07-16)
Information-Theoretic Generalization Bounds of Replay-based Continual Learning (2025-07-16)
PROL: Rehearsal Free Continual Learning in Streaming Data via Prompt Online Learning (2025-07-16)
Fast Last-Iterate Convergence of SGD in the Smooth Interpolation Regime (2025-07-15)
A Neural Network Model of Complementary Learning Systems: Pattern Separation and Completion for Continual Learning (2025-07-15)
LifelongPR: Lifelong knowledge fusion for point cloud place recognition based on replay and prompt learning (2025-07-14)
Overcoming catastrophic forgetting in neural networks (2025-07-14)
Continual Reinforcement Learning by Planning with Online World Models (2025-07-12)