Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning

Arun Mallya, Svetlana Lazebnik

Published 2017-11-15 · CVPR 2018
Tasks: Continual Learning, Network Pruning
Links: Paper · PDF · Code (official)

Abstract

This paper presents a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting. Inspired by network pruning techniques, we exploit redundancies in large deep networks to free up parameters that can then be employed to learn new tasks. By performing iterative pruning and network re-training, we are able to sequentially "pack" multiple tasks into a single network while ensuring minimal drop in performance and minimal storage overhead. Unlike prior work that uses proxy losses to maintain accuracy on older tasks, we always optimize for the task at hand. We perform extensive experiments on a variety of network architectures and large-scale datasets, and observe much better robustness against catastrophic forgetting than prior work. In particular, we are able to add three fine-grained classification tasks to a single ImageNet-trained VGG-16 network and achieve accuracies close to those of separately trained networks for each task. Code available at https://github.com/arunmallya/packnet
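The core mechanism the abstract describes — magnitude-based pruning that releases low-importance weights so a later task can claim them, while weights owned by earlier tasks stay frozen — can be sketched in a few lines. The snippet below is an illustrative simplification, not the authors' implementation: the function name `prune_free_parameters`, the flat 1-D weight array, and the integer ownership mask (0 = free, k = owned by task k) are assumptions for clarity; the real PackNet applies this layer-wise to a deep network and re-trains between pruning rounds.

```python
import numpy as np

def prune_free_parameters(weights, owner_mask, task_id, prune_frac=0.5):
    """PackNet-style pruning sketch (hypothetical helper).

    Among weights currently owned by `task_id`, release the
    smallest-magnitude fraction back to the free pool (owner 0)
    so a subsequent task can train on them. Weights owned by
    other (earlier) tasks are never touched.
    """
    owned = np.flatnonzero(owner_mask == task_id)
    n_prune = int(len(owned) * prune_frac)
    if n_prune == 0:
        return owner_mask
    # Rank this task's weights by magnitude; the smallest are freed.
    order = owned[np.argsort(np.abs(weights[owned]))]
    freed = order[:n_prune]
    new_mask = owner_mask.copy()
    new_mask[freed] = 0      # 0 = free / claimable by the next task
    weights[freed] = 0.0     # zero out freed weights
    return new_mask
```

After a call like this, the next task would train only the positions where the mask is 0, then claim them (set them to its own task id), prune again, and so on — "packing" tasks into one network. In this toy setting, pruning a 4-weight layer owned by task 1 with `prune_frac=0.5` frees the two smallest-magnitude weights and leaves the two largest frozen under task 1's ownership.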

Results

Task                 Dataset                                Metric             Value   Model
Continual Learning   Sketch (Fine-grained 6 Tasks)          Accuracy           76.17   PackNet
Continual Learning   Stanford Cars (Fine-grained 6 Tasks)   Accuracy           86.11   PackNet
Continual Learning   CUBS (Fine-grained 6 Tasks)            Accuracy           80.41   PackNet
Continual Learning   Wikiart (Fine-grained 6 Tasks)         Accuracy           69.4    PackNet
Continual Learning   Cifar100 (20 tasks)                    Average Accuracy   67.5    PackNet
Continual Learning   ImageNet (Fine-grained 6 Tasks)        Accuracy           75.71   PackNet
Continual Learning   Flowers (Fine-grained 6 Tasks)         Accuracy           93.04   PackNet

Related Papers

RegCL: Continual Adaptation of Segment Anything Model via Model Merging (2025-07-16)
Information-Theoretic Generalization Bounds of Replay-based Continual Learning (2025-07-16)
PROL: Rehearsal Free Continual Learning in Streaming Data via Prompt Online Learning (2025-07-16)
Fast Last-Iterate Convergence of SGD in the Smooth Interpolation Regime (2025-07-15)
A Neural Network Model of Complementary Learning Systems: Pattern Separation and Completion for Continual Learning (2025-07-15)
LifelongPR: Lifelong knowledge fusion for point cloud place recognition based on replay and prompt learning (2025-07-14)
Overcoming catastrophic forgetting in neural networks (2025-07-14)
Continual Reinforcement Learning by Planning with Online World Models (2025-07-12)