Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights

Arun Mallya, Dillon Davis, Svetlana Lazebnik

2018-01-19 · ECCV 2018 · Continual Learning · Quantization
Paper · PDF · Code (official)

Abstract

This work presents a method for adapting a single, fixed deep neural network to multiple tasks without affecting performance on already learned tasks. By building upon ideas from network quantization and pruning, we learn binary masks that piggyback on an existing network, or are applied to unmodified weights of that network to provide good performance on a new task. These masks are learned in an end-to-end differentiable fashion, and incur a low overhead of 1 bit per network parameter, per task. Even though the underlying network is fixed, the ability to mask individual weights allows for the learning of a large number of filters. We show performance comparable to dedicated fine-tuned networks for a variety of classification tasks, including those with large domain shifts from the initial task (ImageNet), and a variety of network architectures. Unlike prior work, we do not suffer from catastrophic forgetting or competition between tasks, and our performance is agnostic to task ordering. Code available at https://github.com/arunmallya/piggyback.
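The core mechanism in the abstract — a frozen backbone whose weights are gated by a learned binary mask, trained end-to-end by backpropagating through the thresholding step — can be sketched in a few lines. This is an illustrative NumPy sketch, not the authors' code: the threshold value, function names, and the use of a straight-through estimator for the gradient are assumptions made for exposition.

```python
import numpy as np

THRESHOLD = 5e-3  # binarization threshold for mask scores (assumed value)

def binarize(scores, threshold=THRESHOLD):
    """Hard-threshold real-valued mask scores into a {0, 1} binary mask."""
    return (scores >= threshold).astype(scores.dtype)

def forward(x, W, scores):
    """Task-specific forward pass: x @ (W * mask). W itself stays frozen."""
    mask = binarize(scores)
    return x @ (W * mask)

def grad_scores(x, W, grad_out):
    """Straight-through estimator: treat binarize() as identity in the
    backward pass, so d(loss)/d(scores) = (x^T @ grad_out) * W."""
    return (x.T @ grad_out) * W

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))        # frozen backbone weights
scores = rng.uniform(0, 0.01, (4, 3))  # trainable per-task mask scores
x = rng.standard_normal((2, 4))

y = forward(x, W, scores)
g = grad_scores(x, W, np.ones_like(y))
print(y.shape, g.shape)  # (2, 3) (4, 3)
```

Only the binary mask needs to be stored per task — 1 bit per weight, matching the "1 bit per network parameter, per task" overhead quoted in the abstract; the real-valued scores are discarded after training.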

Results

Task               | Dataset                              | Metric                      | Value | Model
Continual Learning | Visual Domain Decathlon (10 tasks)   | Avg. Accuracy               | 76.6  | Piggyback
Continual Learning | Visual Domain Decathlon (10 tasks)   | Decathlon Discipline (Score) | 2838 | Piggyback
Continual Learning | Sketch (Fine-grained 6 Tasks)        | Accuracy                    | 79.91 | Piggyback
Continual Learning | Stanford Cars (Fine-grained 6 Tasks) | Accuracy                    | 89.62 | Piggyback
Continual Learning | CUBS (Fine-grained 6 Tasks)          | Accuracy                    | 80.5  | Piggyback
Continual Learning | WikiArt (Fine-grained 6 Tasks)       | Accuracy                    | 71.33 | Piggyback
Continual Learning | ImageNet (Fine-grained 6 Tasks)      | Accuracy                    | 76.16 | Piggyback
Continual Learning | Flowers (Fine-grained 6 Tasks)       | Accuracy                    | 94.77 | Piggyback

Related Papers

Efficient Deployment of Spiking Neural Networks on SpiNNaker2 for DVS Gesture Recognition Using Neuromorphic Intermediate Representation (2025-09-04)
An End-to-End DNN Inference Framework for the SpiNNaker2 Neuromorphic MPSoC (2025-07-18)
Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
Angle Estimation of a Single Source with Massive Uniform Circular Arrays (2025-07-17)
RegCL: Continual Adaptation of Segment Anything Model via Model Merging (2025-07-16)
Information-Theoretic Generalization Bounds of Replay-based Continual Learning (2025-07-16)
PROL: Rehearsal Free Continual Learning in Streaming Data via Prompt Online Learning (2025-07-16)
Fast Last-Iterate Convergence of SGD in the Smooth Interpolation Regime (2025-07-15)