Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Compacting, Picking and Growing for Unforgetting Continual Learning

Steven C. Y. Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, Chu-Song Chen

2019-10-15 · NeurIPS 2019
Tasks: Continual Learning · Face Verification · Model Compression · Age And Gender Classification · Facial Expression Recognition (FER) · Incremental Learning
Links: Paper · PDF · Code (official) · Code

Abstract

Continual lifelong learning is essential to many applications. In this paper, we propose a simple but effective approach to continual deep learning. Our approach leverages the principles of deep model compression, critical weight selection, and progressive network expansion. By enforcing their integration in an iterative manner, we introduce an incremental learning method that is scalable to the number of sequential tasks in a continual learning process. Our approach is easy to implement and has several favorable characteristics. First, it can avoid forgetting (i.e., learn new tasks while remembering all previous tasks). Second, it allows model expansion but can maintain model compactness when handling sequential tasks. Moreover, through our compaction and selection/expansion mechanism, we show that the knowledge accumulated by learning previous tasks helps build a better model for new tasks than training models independently on each task. Experimental results show that our approach can incrementally learn a deep model that tackles multiple tasks without forgetting, while maintaining model compactness and achieving more satisfactory performance than individual task training.
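The three stages named in the title can be illustrated as bookkeeping over weight slots. The sketch below is a hypothetical, greatly simplified model of the compact/pick/grow cycle described in the abstract (the class name `CPGModel`, the slot representation, and the `keep_ratio` parameter are illustrative assumptions, not the paper's actual implementation): pruning a trained task releases slots, a new task "picks" (reads but never modifies) all frozen weights, and the model "grows" only when released capacity is insufficient. Freezing each task's surviving weights is what prevents forgetting.

```python
# Hypothetical sketch of Compacting-Picking-Growing (CPG) bookkeeping.
# Training and weight values are omitted; only the ownership logic that
# guarantees "no forgetting" is modeled.

class CPGModel:
    def __init__(self, capacity):
        self.capacity = capacity          # total number of weight slots
        self.owner = [None] * capacity    # task id that froze each slot
        self.picked = {}                  # task id -> frozen slots it reuses

    def free_slots(self):
        return [i for i, o in enumerate(self.owner) if o is None]

    def learn_task(self, task_id, needed, keep_ratio=0.5):
        # Picking: the new task may reuse every frozen weight read-only.
        self.picked[task_id] = {i for i, o in enumerate(self.owner)
                                if o is not None}
        free = self.free_slots()
        # Growing: expand capacity only if released slots are insufficient.
        if len(free) < needed:
            grow = needed - len(free)
            self.owner += [None] * grow
            self.capacity += grow
            free = self.free_slots()
        used = free[:needed]
        # Compacting: after (simulated) training, prune so that only a
        # keep_ratio fraction is frozen; the rest stays free for later tasks.
        kept = used[:max(1, int(len(used) * keep_ratio))]
        for i in kept:
            self.owner[i] = task_id       # frozen from now on
        return kept
```

Because slots owned by earlier tasks are never reassigned, every previous task's weights remain intact no matter how many tasks follow, while the model only grows when pruning alone cannot supply enough capacity.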

Results

Task | Dataset | Metric | Value | Model
---- | ------- | ------ | ----- | -----
Facial Recognition and Modelling | AffectNet | Accuracy (7 emotion) | 63.57 | CPG
Facial Recognition and Modelling | Adience Gender | Accuracy (5-fold) | 89.66 | CPG (single crop, pytorch)
Facial Recognition and Modelling | Adience Age | Accuracy (5-fold) | 57.66 | CPG (single crop, pytorch)
Face Reconstruction | AffectNet | Accuracy (7 emotion) | 63.57 | CPG
Face Reconstruction | Adience Gender | Accuracy (5-fold) | 89.66 | CPG (single crop, pytorch)
Face Reconstruction | Adience Age | Accuracy (5-fold) | 57.66 | CPG (single crop, pytorch)
Facial Expression Recognition (FER) | AffectNet | Accuracy (7 emotion) | 63.57 | CPG
3D | AffectNet | Accuracy (7 emotion) | 63.57 | CPG
3D | Adience Gender | Accuracy (5-fold) | 89.66 | CPG (single crop, pytorch)
3D | Adience Age | Accuracy (5-fold) | 57.66 | CPG (single crop, pytorch)
3D Face Modelling | AffectNet | Accuracy (7 emotion) | 63.57 | CPG
3D Face Modelling | Adience Gender | Accuracy (5-fold) | 89.66 | CPG (single crop, pytorch)
3D Face Modelling | Adience Age | Accuracy (5-fold) | 57.66 | CPG (single crop, pytorch)
Continual Learning | Sketch (Fine-grained 6 Tasks) | Accuracy | 80.33 | CPG
Continual Learning | Stanford Cars (Fine-grained 6 Tasks) | Accuracy | 92.8 | CPG
Continual Learning | CUBS (Fine-grained 6 Tasks) | Accuracy | 83.59 | CPG
Continual Learning | Wikiart (Fine-grained 6 Tasks) | Accuracy | 77.15 | CPG
Continual Learning | Cifar100 (20 tasks) | Average Accuracy | 80.9 | CPG
Continual Learning | ImageNet (Fine-grained 6 Tasks) | Accuracy | 75.81 | CPG
Continual Learning | Flowers (Fine-grained 6 Tasks) | Accuracy | 96.62 | CPG
3D Face Reconstruction | AffectNet | Accuracy (7 emotion) | 63.57 | CPG
3D Face Reconstruction | Adience Gender | Accuracy (5-fold) | 89.66 | CPG (single crop, pytorch)
3D Face Reconstruction | Adience Age | Accuracy (5-fold) | 57.66 | CPG (single crop, pytorch)
Age And Gender Classification | Adience Gender | Accuracy (5-fold) | 89.66 | CPG (single crop, pytorch)
Age And Gender Classification | Adience Age | Accuracy (5-fold) | 57.66 | CPG (single crop, pytorch)

Related Papers

ProxyFusion: Face Feature Aggregation Through Sparse Experts (2025-09-24)
LINR-PCGC: Lossless Implicit Neural Representations for Point Cloud Geometry Compression (2025-07-21)
DiffClean: Diffusion-based Makeup Removal for Accurate Age Estimation (2025-07-17)
RegCL: Continual Adaptation of Segment Anything Model via Model Merging (2025-07-16)
Information-Theoretic Generalization Bounds of Replay-based Continual Learning (2025-07-16)
PROL: Rehearsal Free Continual Learning in Streaming Data via Prompt Online Learning (2025-07-16)
Fast Last-Iterate Convergence of SGD in the Smooth Interpolation Regime (2025-07-15)
A Neural Network Model of Complementary Learning Systems: Pattern Separation and Completion for Continual Learning (2025-07-15)