Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates

Yang Liu, Hongyi Guo

2019-10-08 · ICML 2020 · Learning with noisy labels

Abstract

Learning with noisy labels is a common challenge in supervised learning. Existing approaches often require practitioners to specify noise rates, i.e., a set of parameters controlling the severity of label noise, and these rates are either assumed to be given or estimated in an additional step. In this work, we introduce a new family of loss functions, which we call peer loss functions, that enables learning from noisy labels without requiring a priori specification of the noise rates. Peer loss functions work within the standard empirical risk minimization (ERM) framework. We show that, under mild conditions, performing ERM with peer loss functions on the noisy dataset leads to the optimal or a near-optimal classifier, as if ERM had been performed over the clean training data, which we do not have access to. We pair our results with an extensive set of experiments. Peer loss simplifies model development when training labels are potentially noisy, and can serve as a robust candidate loss function in such situations.
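The core idea in the abstract can be sketched as follows: peer loss subtracts from each example's base loss a second loss term evaluated on a randomly mismatched feature/label pair, so a classifier cannot reduce its risk by blindly agreeing with (possibly noisy) labels. This is a minimal NumPy sketch under stated assumptions, not the authors' reference implementation; the `alpha` weighting knob and the helper names are illustrative choices.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean cross-entropy of integer labels against raw logits."""
    z = logits - logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def peer_loss(logits, labels, rng, alpha=1.0):
    """Peer loss sketch: base loss minus the same loss on "peer" samples
    whose features and labels are drawn independently of each other.
    `alpha` weights the peer term (an assumption here, not from the paper's
    notation); alpha = 1 corresponds to the plain peer-loss form."""
    n = len(labels)
    i = rng.permutation(n)   # peer features, shuffled
    j = rng.permutation(n)   # peer labels, shuffled independently
    return cross_entropy(logits, labels) - alpha * cross_entropy(logits[i], labels[j])
```

Because the peer term is a nonnegative cross-entropy on randomly paired data, a confident classifier that fits only the label marginal pays a large penalty, while one that tracks the true feature-label relationship keeps the first term small relative to the second.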

Results

Task                 | Dataset             | Metric          | Value | Model
Image Classification | CIFAR-10N-Random2   | Accuracy (mean) | 88.76 | Peer Loss
Image Classification | CIFAR-10N-Random3   | Accuracy (mean) | 88.57 | Peer Loss
Image Classification | CIFAR-10N-Aggregate | Accuracy (mean) | 90.75 | Peer Loss
Image Classification | CIFAR-10N-Random1   | Accuracy (mean) | 89.06 | Peer Loss
Image Classification | CIFAR-100N          | Accuracy (mean) | 57.59 | Peer Loss
Image Classification | CIFAR-10N-Worst     | Accuracy (mean) | 82.53 | Peer Loss

Related Papers

CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)
Recalling The Forgotten Class Memberships: Unlearned Models Can Be Noisy Labelers to Leak Privacy (2025-06-24)
On the Role of Label Noise in the Feature Learning Process (2025-05-25)
Detect and Correct: A Selective Noise Correction Method for Learning with Noisy Labels (2025-05-19)
Exploring Video-Based Driver Activity Recognition under Noisy Labels (2025-04-16)
Noise-Aware Generalization: Robustness to In-Domain Noise and Out-of-Domain Generalization (2025-04-03)
Learning from Noisy Labels with Contrastive Co-Transformer (2025-03-04)
Enhancing Sample Selection Against Label Noise by Cutting Mislabeled Easy Examples (2025-02-12)