Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Provably End-to-end Label-Noise Learning without Anchor Points

Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama

2021-02-04 · Learning with noisy labels
Paper · PDF · Code (official)

Abstract

In label-noise learning, the transition matrix plays a key role in building statistically consistent classifiers. Existing consistent estimators for the transition matrix have been developed by exploiting anchor points. However, the anchor-point assumption is not always satisfied in real scenarios. In this paper, we propose an end-to-end framework for solving label-noise learning without anchor points, in which we simultaneously optimize two objectives: the cross entropy loss between the noisy label and the predicted probability by the neural network, and the volume of the simplex formed by the columns of the transition matrix. Our proposed framework can identify the transition matrix if the clean class-posterior probabilities are sufficiently scattered. This is by far the mildest assumption under which the transition matrix is provably identifiable and the learned classifier is statistically consistent. Experimental results on benchmark datasets demonstrate the effectiveness and robustness of the proposed method.
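The abstract describes a joint objective: cross-entropy between the noisy labels and the predicted noisy-label posterior (obtained by passing the network's clean-posterior estimate through the transition matrix), plus the volume of the simplex spanned by the transition matrix, which is proportional to |det(T)|. The following NumPy sketch illustrates an assumed form of that loss; the function name, the log-det surrogate for volume, and the weight `lam` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def volminnet_loss(probs, noisy_labels, T, lam=1e-4):
    """Sketch of the assumed two-term VolMinNet-style objective.

    probs: (n, c) estimated clean class-posterior probabilities f(x)
    noisy_labels: (n,) integer noisy labels
    T: (c, c) row-stochastic transition matrix, T[i, j] = P(noisy = j | clean = i)
    lam: illustrative weight on the volume term (assumption)
    """
    # Predicted noisy-label posterior: mix the clean posterior through T
    noisy_post = probs @ T
    n = probs.shape[0]
    # Cross-entropy between noisy labels and the predicted noisy posterior
    ce = -np.mean(np.log(noisy_post[np.arange(n), noisy_labels] + 1e-12))
    # The simplex volume is proportional to |det(T)|; log|det(T)| is
    # used here as a surrogate so that minimizing the sum shrinks the volume
    vol = np.log(np.abs(np.linalg.det(T)) + 1e-12)
    return ce + lam * vol
```

Minimizing this loss fits the noisy labels while shrinking the simplex of T, which (under the sufficiently-scattered condition stated above) drives T toward the identifiable transition matrix.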

Results

Task | Dataset | Metric | Value | Model
Image Classification | CIFAR-10N-Random2 | Accuracy (mean) | 88.27 | VolMinNet
Image Classification | CIFAR-10N-Random3 | Accuracy (mean) | 88.19 | VolMinNet
Image Classification | CIFAR-10N-Aggregate | Accuracy (mean) | 89.7 | VolMinNet
Image Classification | CIFAR-10N-Random1 | Accuracy (mean) | 88.3 | VolMinNet
Image Classification | CIFAR-100N | Accuracy (mean) | 57.8 | VolMinNet
Image Classification | CIFAR-10N-Worst | Accuracy (mean) | 80.53 | VolMinNet

Related Papers

CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)
Recalling The Forgotten Class Memberships: Unlearned Models Can Be Noisy Labelers to Leak Privacy (2025-06-24)
On the Role of Label Noise in the Feature Learning Process (2025-05-25)
Detect and Correct: A Selective Noise Correction Method for Learning with Noisy Labels (2025-05-19)
Exploring Video-Based Driver Activity Recognition under Noisy Labels (2025-04-16)
Noise-Aware Generalization: Robustness to In-Domain Noise and Out-of-Domain Generalization (2025-04-03)
Learning from Noisy Labels with Contrastive Co-Transformer (2025-03-04)
Enhancing Sample Selection Against Label Noise by Cutting Mislabeled Easy Examples (2025-02-12)