Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Maintaining Discrimination and Fairness in Class Incremental Learning

Bowen Zhao, Xi Xiao, Guojun Gan, Bin Zhang, Shu-Tao Xia

Published: 2019-11-16 | CVPR 2020
Tags: Fairness, Class Incremental Learning, Incremental Learning, Knowledge Distillation

Abstract

Deep neural networks (DNNs) have been applied in class incremental learning, which aims to solve the common real-world problem of learning new classes continually. One drawback of standard DNNs is that they are prone to catastrophic forgetting. Knowledge distillation (KD) is a commonly used technique to alleviate this problem. In this paper, we demonstrate that KD can indeed help the model produce more discriminative outputs within the old classes. However, it cannot alleviate the model's tendency to classify objects into new classes, which causes the positive effect of KD to be hidden and limited. We observe that an important factor causing catastrophic forgetting is that the weights in the last fully connected (FC) layer are highly biased in class incremental learning. Motivated by these observations, we propose a simple and effective solution to address catastrophic forgetting. First, we utilize KD to maintain the discrimination within old classes. Then, to further maintain fairness between old classes and new classes, we propose Weight Aligning (WA), which corrects the biased weights in the FC layer after the normal training process. Unlike previous work, WA requires neither extra parameters nor a held-out validation set, as it uses only the information carried by the biased weights themselves. The proposed method is evaluated on ImageNet-1000, ImageNet-100, and CIFAR-100 under various settings. Experimental results show that it effectively alleviates catastrophic forgetting and significantly outperforms state-of-the-art methods.
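Based on the abstract's description, Weight Aligning rescales the new-class weight vectors of the FC layer so that their average norm matches that of the old-class vectors, using no extra parameters or validation data. A minimal sketch of that correction, assuming a PyTorch weight matrix with old-class rows first (the function name `weight_aligning` and the row layout are illustrative assumptions, not the authors' code):

```python
import torch

def weight_aligning(fc_weight: torch.Tensor, num_old: int) -> torch.Tensor:
    """Rescale new-class weight vectors so their mean norm matches the
    old-class mean norm (a sketch of WA as described in the abstract).

    fc_weight: (num_classes, feature_dim); rows [0, num_old) are old classes,
    the remaining rows are new classes.
    """
    norms = fc_weight.norm(dim=1)                       # per-class weight norm
    gamma = norms[:num_old].mean() / norms[num_old:].mean()
    aligned = fc_weight.clone()
    aligned[num_old:] *= gamma                          # shrink the biased new-class weights
    return aligned
```

Because only the relative scale between old- and new-class rows changes, the ranking within each group is preserved; the correction only removes the bias toward predicting new classes.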

Results

Task                 | Dataset                               | Metric                             | Value | Model
---------------------|---------------------------------------|------------------------------------|-------|------
Incremental Learning | CIFAR-100-B0 (5 steps of 20 classes)  | Average Incremental Accuracy       | 72.81 | WA
Incremental Learning | ImageNet - 10 steps                   | # M Params                         | 11.68 | WA
Incremental Learning | ImageNet - 10 steps                   | Average Incremental Accuracy       | 65.67 | WA
Incremental Learning | ImageNet - 10 steps                   | Average Incremental Accuracy Top-5 | 86.6  | WA
Incremental Learning | ImageNet - 10 steps                   | Final Accuracy                     | 55.6  | WA
Incremental Learning | ImageNet - 10 steps                   | Final Accuracy Top-5               | 81.1  | WA
Incremental Learning | ImageNet100 - 10 steps                | # M Params                         | 11.22 | WA
Incremental Learning | ImageNet100 - 10 steps                | Average Incremental Accuracy Top-5 | 91    | WA
Incremental Learning | ImageNet100 - 10 steps                | Final Accuracy Top-5               | 84.1  | WA

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
A Reproducibility Study of Product-side Fairness in Bundle Recommendation (2025-07-18)
FedGA: A Fair Federated Learning Framework Based on the Gini Coefficient (2025-07-17)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
Looking for Fairness in Recommender Systems (2025-07-16)
FADE: Adversarial Concept Erasure in Flow Models (2025-07-16)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Fairness-Aware Grouping for Continuous Sensitive Variables: Application for Debiasing Face Analysis with respect to Skin Tone (2025-07-15)