
CODE-CL: Conceptor-Based Gradient Projection for Deep Continual Learning

Marco Paul E. Apolinario, Sakshi Choudhary, Kaushik Roy

2024-11-21 · Continual Learning · Image Classification · Transfer Learning
Paper · PDF · Code (official)

Abstract

Continual learning (CL) - the ability to progressively acquire and integrate new concepts - is essential for intelligent systems to adapt to dynamic environments. However, deep neural networks struggle with catastrophic forgetting (CF) when learning tasks sequentially, as training for new tasks often overwrites previously learned knowledge. To address this, recent approaches constrain updates to orthogonal subspaces using gradient projection, effectively preserving important gradient directions for previous tasks. While effective at reducing forgetting, these approaches inadvertently hinder forward knowledge transfer (FWT), particularly when tasks are highly correlated. In this work, we propose Conceptor-based gradient projection for Deep Continual Learning (CODE-CL), a novel method that leverages conceptor matrix representations, a form of regularized reconstruction, to adaptively handle highly correlated tasks. CODE-CL mitigates CF by projecting gradients onto pseudo-orthogonal subspaces of previous task feature spaces while simultaneously promoting FWT. It achieves this by learning a linear combination of shared basis directions, allowing an efficient balance between stability and plasticity and enabling knowledge transfer across overlapping input feature representations. Extensive experiments on continual learning benchmarks validate CODE-CL's efficacy, demonstrating superior performance, reduced forgetting, and improved FWT compared to state-of-the-art methods.
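
To make the projection idea concrete, below is a minimal NumPy sketch of how a conceptor can gate gradient updates. It is not the authors' released implementation: the aperture value, the per-layer (I - C) projection rule, and the function names are illustrative assumptions based on the standard conceptor definition C = R (R + alpha^-2 I)^-1; CODE-CL additionally learns a linear combination of shared basis directions, which is omitted here.

```python
# Illustrative sketch (not the official CODE-CL code): using a conceptor of
# previous-task features to project gradients away from directions those
# tasks rely on. `alpha` (the aperture) and the projection rule are assumptions.
import numpy as np

def conceptor(features: np.ndarray, alpha: float = 4.0) -> np.ndarray:
    """Conceptor of a feature matrix with shape (n_samples, dim)."""
    n, dim = features.shape
    R = features.T @ features / n                      # input correlation matrix
    return R @ np.linalg.inv(R + alpha ** (-2) * np.eye(dim))

def project_gradient(grad_W: np.ndarray, C_prev: np.ndarray) -> np.ndarray:
    """Multiply a layer gradient (out_dim, in_dim) by (I - C_prev) on the
    input side, so updates avoid directions important to previous tasks."""
    dim = C_prev.shape[0]
    return grad_W @ (np.eye(dim) - C_prev)

# Toy usage: features observed during task 1, gradient computed on task 2.
X_task1 = np.random.randn(512, 64)
C1 = conceptor(X_task1)
grad = np.random.randn(10, 64)
grad_safe = project_gradient(grad, C1)
```

In this sketch, directions heavily used by task 1 (where C is close to the identity) are suppressed in later updates, while unused directions pass through almost unchanged; that is the stability side of the stability/plasticity trade-off the abstract describes.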

Results

Task                  Dataset           Metric             Value    Model
Continual Learning    Permuted MNIST    Average Accuracy   96.56    CODE-CL
Continual Learning    Permuted MNIST    BWT                -0.24    CODE-CL
Continual Learning    Split CIFAR-100   Average Accuracy   77.21    CODE-CL
Continual Learning    Split CIFAR-100   BWT                -1.1     CODE-CL
Continual Learning    miniImageNet      Average Accuracy   68.83    CODE-CL
Continual Learning    miniImageNet      BWT                -1.1     CODE-CL
Continual Learning    5-Datasets        Average Accuracy   93.32    CODE-CL
Continual Learning    5-Datasets        BWT                -0.25    CODE-CL
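
BWT is backward transfer, reported alongside average final accuracy. Assuming the common definition from the continual learning literature (the average change in accuracy on earlier tasks after training on all tasks), values near zero such as -0.24 indicate little forgetting. A small sketch of that computation, with a hypothetical accuracy matrix:

```python
# Minimal sketch, assuming the usual BWT definition:
# mean over tasks i < T of (accuracy on task i after all tasks
# minus accuracy on task i right after it was learned).
import numpy as np

def backward_transfer(acc: np.ndarray) -> float:
    """acc[t, i] = accuracy on task i after training through task t (T x T)."""
    T = acc.shape[0]
    final = acc[T - 1, : T - 1]            # accuracy on earlier tasks at the end
    just_learned = np.diag(acc)[: T - 1]   # accuracy right after each task was learned
    return float(np.mean(final - just_learned))

# Toy example with 3 tasks; slightly negative output means mild forgetting.
acc = np.array([[0.97, 0.00, 0.00],
                [0.96, 0.95, 0.00],
                [0.96, 0.94, 0.93]])
print(backward_transfer(acc))
```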

Related Papers

Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
RegCL: Continual Adaptation of Segment Anything Model via Model Merging (2025-07-16)