Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


NAPA-VQ: Neighborhood Aware Prototype Augmentation with Vector Quantization for Continual Learning

Tamasha Malepathirana, Damith Senanayake, Saman Halgamuge

2023-08-18 · Continual Learning · Quantization · Class Incremental Learning · Non-exemplar-based Class Incremental Learning · class-incremental learning · Incremental Learning

Paper · PDF · Code (official)

Abstract

Catastrophic forgetting, the loss of old knowledge upon acquiring new knowledge, is a pitfall faced by deep neural networks in real-world applications. Many prevailing solutions to this problem rely on storing exemplars (previously encountered data), which may not be feasible in applications with memory limitations or privacy constraints. Therefore, the recent focus has been on Non-Exemplar based Class Incremental Learning (NECIL), where a model incrementally learns about new classes without using any past exemplars. However, due to the lack of old data, NECIL methods struggle to discriminate between old and new classes, causing their feature representations to overlap. We propose NAPA-VQ: Neighborhood Aware Prototype Augmentation with Vector Quantization, a framework that reduces this class overlap in NECIL. We draw inspiration from Neural Gas to learn the topological relationships in the feature space, identifying the neighboring classes that are most likely to get confused with each other. This neighborhood information is utilized to enforce strong separation between the neighboring classes as well as to generate old-class representative prototypes that can better aid in obtaining a discriminative decision boundary between old and new classes. Our comprehensive experiments on CIFAR-100, TinyImageNet, and ImageNet-Subset demonstrate that NAPA-VQ outperforms the state-of-the-art NECIL methods by an average improvement of 5%, 2%, and 4% in accuracy and 10%, 3%, and 9% in forgetting, respectively. Our code can be found at https://github.com/TamashaM/NAPA-VQ.git.
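The abstract's key idea, learning which classes neighbor each other in feature space via Neural Gas, can be illustrated with a minimal sketch of the classic Neural Gas algorithm with competitive Hebbian learning. This is not the authors' implementation; the function name and parameters are illustrative, and it operates on raw points rather than deep features. Units adapt with a rank-weighted step, and an edge is recorded between the two units nearest each sample, yielding a topology graph of neighboring regions.

```python
import numpy as np

def neural_gas_topology(X, n_units=4, n_iters=500, eps=0.1, lam=1.0, seed=0):
    """Illustrative Neural Gas sketch: learn unit positions and a
    neighborhood edge set from data X via rank-based adaptation plus
    competitive Hebbian learning (not the NAPA-VQ implementation)."""
    rng = np.random.default_rng(seed)
    # initialize units at randomly chosen data points
    units = X[rng.choice(len(X), n_units, replace=False)].astype(float)
    edges = set()
    for _ in range(n_iters):
        x = X[rng.integers(len(X))]
        d = np.linalg.norm(units - x, axis=1)
        ranks = np.argsort(np.argsort(d))  # 0 = closest unit
        # rank-weighted update: closer units move more toward the sample
        units += eps * np.exp(-ranks / lam)[:, None] * (x - units)
        # competitive Hebbian learning: connect the two nearest units
        i, j = np.argsort(d)[:2]
        edges.add((int(min(i, j)), int(max(i, j))))
    return units, edges
```

Run on two well-separated clusters, the learned edges indicate which units (and hence which regions of the space) are adjacent, which is the kind of neighborhood information NAPA-VQ exploits to separate confusable classes.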

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Continual Learning | ImageNet-Subset | Average accuracy - 5 tasks | 69.15 | NAPA-VQ |
| Continual Learning | ImageNet-Subset | Average accuracy - 10 tasks | 68.83 | NAPA-VQ |
| Continual Learning | ImageNet-Subset | Average accuracy - 20 tasks | 63.09 | NAPA-VQ |
| Continual Learning | CIFAR-100 | Average accuracy - 5 tasks | 70.44 | NAPA-VQ |
| Continual Learning | CIFAR-100 | Average accuracy - 10 tasks | 69.04 | NAPA-VQ |
| Continual Learning | CIFAR-100 | Average accuracy - 20 tasks | 67.42 | NAPA-VQ |
| Continual Learning | TinyImageNet | Average accuracy - 5 tasks | 52.77 | NAPA-VQ |
| Continual Learning | TinyImageNet | Average accuracy - 10 tasks | 51.78 | NAPA-VQ |
| Continual Learning | TinyImageNet | Average accuracy - 20 tasks | 49.51 | NAPA-VQ |
| Class Incremental Learning | ImageNet-Subset | Average accuracy - 5 tasks | 69.15 | NAPA-VQ |
| Class Incremental Learning | ImageNet-Subset | Average accuracy - 10 tasks | 68.83 | NAPA-VQ |
| Class Incremental Learning | ImageNet-Subset | Average accuracy - 20 tasks | 63.09 | NAPA-VQ |
| Class Incremental Learning | CIFAR-100 | Average accuracy - 5 tasks | 70.44 | NAPA-VQ |
| Class Incremental Learning | CIFAR-100 | Average accuracy - 10 tasks | 69.04 | NAPA-VQ |
| Class Incremental Learning | CIFAR-100 | Average accuracy - 20 tasks | 67.42 | NAPA-VQ |
| Class Incremental Learning | TinyImageNet | Average accuracy - 5 tasks | 52.77 | NAPA-VQ |
| Class Incremental Learning | TinyImageNet | Average accuracy - 10 tasks | 51.78 | NAPA-VQ |
| Class Incremental Learning | TinyImageNet | Average accuracy - 20 tasks | 49.51 | NAPA-VQ |

Related Papers

- Efficient Deployment of Spiking Neural Networks on SpiNNaker2 for DVS Gesture Recognition Using Neuromorphic Intermediate Representation (2025-09-04)
- An End-to-End DNN Inference Framework for the SpiNNaker2 Neuromorphic MPSoC (2025-07-18)
- Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
- Angle Estimation of a Single Source with Massive Uniform Circular Arrays (2025-07-17)
- RegCL: Continual Adaptation of Segment Anything Model via Model Merging (2025-07-16)
- Information-Theoretic Generalization Bounds of Replay-based Continual Learning (2025-07-16)
- PROL: Rehearsal Free Continual Learning in Streaming Data via Prompt Online Learning (2025-07-16)
- Fast Last-Iterate Convergence of SGD in the Smooth Interpolation Regime (2025-07-15)