Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Does label smoothing mitigate label noise?

Michal Lukasik, Srinadh Bhojanapalli, Aditya Krishna Menon, Sanjiv Kumar

2020-03-05 · ICML 2020 · Learning with noisy labels
Paper · PDF

Abstract

Label smoothing is commonly used in training deep learning models, wherein one-hot training labels are mixed with uniform label vectors. Empirically, smoothing has been shown to improve both predictive performance and model calibration. In this paper, we study whether label smoothing is also effective as a means of coping with label noise. While label smoothing apparently amplifies this problem --- being equivalent to injecting symmetric noise to the labels --- we show how it relates to a general family of loss-correction techniques from the label noise literature. Building on this connection, we show that label smoothing is competitive with loss-correction under label noise. Further, we show that when distilling models from noisy data, label smoothing of the teacher is beneficial; this is in contrast to recent findings for noise-free problems, and sheds further light on settings where label smoothing is beneficial.
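The label-smoothing operation described in the abstract — mixing one-hot labels with the uniform distribution — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the default smoothing rate `alpha` are assumptions for the example.

```python
import numpy as np

def smooth_labels(one_hot, alpha=0.1):
    """Mix a one-hot label vector with the uniform distribution over classes.

    alpha is the smoothing rate: 0 recovers the original one-hot label,
    1 yields the uniform distribution. (Default chosen for illustration.)
    """
    num_classes = one_hot.shape[-1]
    uniform = np.full_like(one_hot, 1.0 / num_classes)
    return (1.0 - alpha) * one_hot + alpha * uniform

# One-hot label for class 1 out of 3 classes.
y = np.eye(3)[1]
print(smooth_labels(y, alpha=0.3))  # [0.1 0.8 0.1]
```

Note how the smoothed label is equivalent to the original label corrupted with symmetric noise at rate `alpha`, which is the connection to the loss-correction literature that the paper builds on.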

Results

Task                  | Dataset             | Metric          | Value | Model
----------------------|---------------------|-----------------|-------|------------
Image Classification  | CIFAR-10N-Random2   | Accuracy (mean) | 89.35 | Positive-LS
Image Classification  | CIFAR-10N-Random3   | Accuracy (mean) | 89.82 | Positive-LS
Image Classification  | CIFAR-10N-Aggregate | Accuracy (mean) | 91.57 | Positive-LS
Image Classification  | CIFAR-10N-Random1   | Accuracy (mean) | 89.8  | Positive-LS
Image Classification  | CIFAR-100N          | Accuracy (mean) | 55.84 | Positive-LS
Image Classification  | CIFAR-10N-Worst     | Accuracy (mean) | 82.76 | Positive-LS

Related Papers

CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)
Recalling The Forgotten Class Memberships: Unlearned Models Can Be Noisy Labelers to Leak Privacy (2025-06-24)
On the Role of Label Noise in the Feature Learning Process (2025-05-25)
Detect and Correct: A Selective Noise Correction Method for Learning with Noisy Labels (2025-05-19)
Exploring Video-Based Driver Activity Recognition under Noisy Labels (2025-04-16)
Noise-Aware Generalization: Robustness to In-Domain Noise and Out-of-Domain Generalization (2025-04-03)
Learning from Noisy Labels with Contrastive Co-Transformer (2025-03-04)
Enhancing Sample Selection Against Label Noise by Cutting Mislabeled Easy Examples (2025-02-12)