
An Adaptive Random Path Selection Approach for Incremental Learning

Jathushan Rajasegaran, Munawar Hayat, Salman Khan, Fahad Shahbaz Khan, Ling Shao, Ming-Hsuan Yang

2019-06-03 · Transfer Learning · Incremental Learning · Knowledge Distillation
Paper · PDF · Code (official)

Abstract

In a conventional supervised learning setting, a machine learning model has access to examples of all the object classes it must recognize at inference time. This results in a fixed model that lacks the flexibility to adapt to new learning tasks. In practical settings, learning tasks often arrive in a sequence, and models must continually build on their previously acquired knowledge. Existing incremental learning approaches fall well short of state-of-the-art cumulative models that train on all classes at once. In this paper, we propose a random path selection algorithm, called Adaptive RPS-Net, that progressively chooses optimal paths for new tasks while encouraging parameter sharing between tasks. We introduce a new network capacity measure that lets us automatically switch paths when the resources already in use are saturated. Since the proposed path-reuse strategy ensures forward knowledge transfer, our approach is efficient and incurs considerably less computational overhead. As an added novelty, the proposed model integrates knowledge distillation and retrospection with the path selection strategy to overcome catastrophic forgetting. To maintain an equilibrium between previous and newly acquired knowledge, we propose a simple controller that dynamically balances model plasticity. Through extensive experiments, we demonstrate that Adaptive RPS-Net surpasses state-of-the-art performance on incremental learning and, by exploiting parallel computation, runs in constant time with nearly the same efficiency as a conventional deep convolutional neural network.
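
The abstract describes three moving parts: a path selector that routes each new task through a subset of reusable modules, a distillation term that anchors the updated model to its predecessor's predictions on previously seen classes, and a controller coefficient that balances plasticity against retained knowledge. The PyTorch sketch below illustrates how such parts could compose; it is not the authors' released implementation (see the official code linked above), and all names here (ParallelBlock, sample_path, incremental_loss, phi) and the exact module layout are illustrative assumptions.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParallelBlock(nn.Module):
    """One layer holding several candidate modules; a binary mask
    decides which modules are active on the current task's path."""
    def __init__(self, channels, num_modules):
        super().__init__()
        self.candidates = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(),
            )
            for _ in range(num_modules)
        )

    def forward(self, x, mask):
        # Sum the outputs of the active modules on top of a skip
        # connection, so unused modules cost nothing at this step.
        out = x
        for module, active in zip(self.candidates, mask):
            if active:
                out = out + module(x)
        return out

def sample_path(num_layers, num_modules, modules_per_layer=1):
    """Randomly pick which modules are active in each layer: a
    stand-in for the paper's adaptive path selection, which would
    also reuse saturated paths and consult a capacity measure."""
    path = []
    for _ in range(num_layers):
        mask = [False] * num_modules
        for idx in random.sample(range(num_modules), modules_per_layer):
            mask[idx] = True
        path.append(mask)
    return path

def incremental_loss(logits, targets, old_logits, seen_classes, phi, T=2.0):
    """Cross-entropy on the current task plus a distillation term that
    anchors predictions on the classes the previous model knew about;
    phi (set by the paper's controller) balances the two objectives."""
    ce = F.cross_entropy(logits, targets)
    if old_logits is None:          # first task: nothing to distill from
        return ce
    distill = F.kl_div(
        F.log_softmax(logits[:, :seen_classes] / T, dim=1),
        F.softmax(old_logits[:, :seen_classes] / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return ce + phi * distill

# Example: route a batch through one block along a sampled path.
block = ParallelBlock(channels=16, num_modules=4)
mask = sample_path(num_layers=1, num_modules=4)[0]
x = torch.randn(8, 16, 32, 32)
y = block(x, mask)                  # shape: (8, 16, 32, 32)
```

The sketch leaves phi as an explicit argument; per the abstract, the "simple controller" would set it dynamically so that later tasks lean more heavily on previously acquired knowledge, while the path mask is what enables parameter sharing and forward transfer across tasks.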

Results

Task                 | Dataset                              | Metric                              | Value | Model
Incremental Learning | CIFAR-100-B0 (5 steps of 20 classes) | Average Incremental Accuracy        | 70.5  | RPSNet
Incremental Learning | ImageNet100 (10 steps)               | Average Incremental Accuracy Top-5  | 87.9  | RPSNet
Incremental Learning | ImageNet100 (10 steps)               | Final Accuracy Top-5                | 74    | RPSNet

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
Best Practices for Large-Scale, Pixel-Wise Crop Mapping and Transfer Learning Workflows (2025-07-16)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Robust-Multi-Task Gradient Boosting (2025-07-15)
HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)