Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Robust Training under Label Noise by Over-parameterization

Sheng Liu, Zhihui Zhu, Qing Qu, Chong You

2022-02-28 · Learning with noisy labels
Paper · PDF · Code (official)

Abstract

Recently, over-parameterized deep networks, with increasingly more network parameters than training samples, have dominated the performance of modern machine learning. However, it is well known that over-parameterized networks tend to overfit and fail to generalize when the training data is corrupted. In this work, we propose a principled approach to robust training of over-parameterized deep networks on classification tasks where a proportion of training labels is corrupted. The main idea is simple: label noise is sparse and incoherent with the network learned from clean data, so we model the noise and learn to separate it from the data. Specifically, we model the label noise via a second, sparse over-parameterization term and exploit implicit algorithmic regularization to recover and separate the underlying corruptions. Remarkably, when trained with this simple method in practice, we demonstrate state-of-the-art test accuracy against label noise on a variety of real datasets. Furthermore, our experimental results are corroborated by theory on simplified linear models, showing that exact separation between sparse noise and low-rank data can be achieved under incoherence conditions. This work opens many interesting directions for improving over-parameterized models by using sparse over-parameterization and implicit regularization.
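The abstract's simplified linear setting can be illustrated with a small sketch. The NumPy demo below is my own minimal illustration of the sparse over-parameterization idea, not the authors' released code; all variable names and hyperparameters (learning rate, initialization scale, step count) are assumptions chosen for the toy problem. Corrupted targets y = Xw + s (with s sparse) are fit jointly with an extra noise term parameterized as s = u⊙u − v⊙v; plain gradient descent from a small initialization implicitly biases this term toward sparse solutions, so the corruption is absorbed by s rather than by w.

```python
import numpy as np

# Toy linear model: targets are y = X @ w_true + s_true, where s_true is a
# sparse corruption vector standing in for label noise.
rng = np.random.default_rng(0)
n, d, k = 200, 10, 10                      # samples, features, corrupted samples
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
s_true = np.zeros(n)
corrupt = rng.choice(n, size=k, replace=False)
s_true[corrupt] = rng.normal(scale=5.0, size=k)
y = X @ w_true + s_true                    # k of the n targets are corrupted

# Over-parameterize the noise as s = u*u - v*v; the small initialization
# alpha drives the implicit bias of gradient descent toward sparse s.
w = np.zeros(d)
alpha = 1e-3
u = np.full(n, alpha)
v = np.full(n, alpha)
lr = 0.5

for _ in range(10_000):
    r = X @ w + u * u - v * v - y          # residual of the joint model
    w -= lr * (X.T @ r) / n                # gradients of (1/2n) * ||r||^2
    u -= lr * (r * u) / n
    v -= lr * (-r * v) / n

s_hat = u * u - v * v                      # recovered sparse corruption
print(np.linalg.norm(w - w_true))          # small: noise absorbed by s_hat
```

Note that no explicit sparsity penalty on s appears in the loss; the separation comes entirely from the u⊙u − v⊙v parameterization and the small initialization, which is the implicit-regularization effect the abstract refers to.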

Results

Task                 | Dataset             | Metric          | Value | Model
Image Classification | CIFAR-10N-Random2   | Accuracy (mean) | 95.31 | SOP
Image Classification | CIFAR-10N-Random3   | Accuracy (mean) | 95.39 | SOP+
Image Classification | CIFAR-10N-Aggregate | Accuracy (mean) | 95.61 | SOP+
Image Classification | CIFAR-10N-Random1   | Accuracy (mean) | 95.28 | SOP+
Image Classification | CIFAR-100N          | Accuracy (mean) | 67.81 | SOP+
Image Classification | CIFAR-10N-Worst     | Accuracy (mean) | 93.24 | SOP+

Related Papers

CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)
Recalling The Forgotten Class Memberships: Unlearned Models Can Be Noisy Labelers to Leak Privacy (2025-06-24)
On the Role of Label Noise in the Feature Learning Process (2025-05-25)
Detect and Correct: A Selective Noise Correction Method for Learning with Noisy Labels (2025-05-19)
Exploring Video-Based Driver Activity Recognition under Noisy Labels (2025-04-16)
Noise-Aware Generalization: Robustness to In-Domain Noise and Out-of-Domain Generalization (2025-04-03)
Learning from Noisy Labels with Contrastive Co-Transformer (2025-03-04)
Enhancing Sample Selection Against Label Noise by Cutting Mislabeled Easy Examples (2025-02-12)