Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Are Anchor Points Really Indispensable in Label-Noise Learning?

Xiaobo Xia, Tongliang Liu, Nannan Wang, Bo Han, Chen Gong, Gang Niu, Masashi Sugiyama

2019-06-01 · NeurIPS 2019 · Learning with noisy labels
Paper · PDF · Code (official)

Abstract

In label-noise learning, the \textit{noise transition matrix}, denoting the probabilities that clean labels flip into noisy labels, plays a central role in building \textit{statistically consistent classifiers}. Existing theories have shown that the transition matrix can be learned by exploiting \textit{anchor points} (i.e., data points that belong to a specific class almost surely). However, when there are no anchor points, the transition matrix will be poorly learned, and current consistent classifiers will significantly degenerate. In this paper, we propose a \textit{transition-revision} ($T$-Revision) method that effectively learns transition matrices without employing anchor points, leading to better classifiers. Specifically, to learn a transition matrix, we first initialize it by exploiting data points that are similar to anchor points, i.e., those with high \textit{noisy class posterior probabilities}. Then, we modify the initialized matrix by adding a \textit{slack variable}, which can be learned and validated together with the classifier by using noisy data. Empirical results on benchmark-simulated and real-world label-noise datasets demonstrate that, without using exact anchor points, the proposed method is superior to state-of-the-art label-noise learning methods.
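The two steps described in the abstract — initializing the transition matrix from anchor-like points, then revising it with a learnable slack variable used in a forward correction — can be sketched as follows. This is a minimal, illustrative sketch, not the authors' implementation: the function names and toy numbers are invented, and the slack variable is shown as a plain matrix rather than a trained parameter.

```python
def estimate_transition_matrix(noisy_posteriors):
    """Initialize T from anchor-like points.

    noisy_posteriors: list of per-sample noisy class posterior vectors
    P(Y_noisy = j | x), e.g. from a network fit to the noisy labels.
    For each class i, the sample with the highest P(Y_noisy = i | x)
    is treated as an approximate anchor point of class i; for a true
    anchor point, P(Y_noisy = j | x) = T_ij, so that sample's posterior
    vector becomes row i of T.
    """
    n_classes = len(noisy_posteriors[0])
    T = []
    for i in range(n_classes):
        anchor_like = max(noisy_posteriors, key=lambda p: p[i])
        T.append(list(anchor_like))
    return T


def forward_corrected_probs(clean_probs, T, delta_T=None):
    """Forward correction with a revised matrix.

    Maps the classifier's clean-label probabilities through T + delta_T
    to predicted noisy-label probabilities, so a cross-entropy loss on
    the noisy labels stays statistically consistent. delta_T plays the
    role of the learnable slack variable in T-Revision (here it is just
    an optional matrix; in training it would be optimized jointly with
    the classifier on noisy data).
    """
    n = len(T)
    T_rev = [[T[i][j] + (delta_T[i][j] if delta_T else 0.0)
              for j in range(n)] for i in range(n)]
    return [sum(clean_probs[i] * T_rev[i][j] for i in range(n))
            for j in range(n)]


# Toy 2-class example (made-up numbers): rows are samples' noisy
# posteriors; the first two samples happen to be near-anchors.
posteriors = [[0.9, 0.1], [0.2, 0.8], [0.55, 0.45]]
T = estimate_transition_matrix(posteriors)          # -> [[0.9, 0.1], [0.2, 0.8]]
noisy_pred = forward_corrected_probs([0.7, 0.3], T)  # -> [0.69, 0.31]
```

The initialization is only as good as the most anchor-like samples available, which is exactly why the paper adds the slack variable: the revision step compensates for the residual error in this estimate.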

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Classification | CIFAR-10N-Random2 | Accuracy (mean) | 87.71 | T-Revision |
| Image Classification | CIFAR-10N-Random3 | Accuracy (mean) | 87.79 | T-Revision |
| Image Classification | CIFAR-10N-Aggregate | Accuracy (mean) | 88.52 | T-Revision |
| Image Classification | CIFAR-10N-Random1 | Accuracy (mean) | 88.33 | T-Revision |
| Image Classification | CIFAR-100N | Accuracy (mean) | 51.55 | T-Revision |
| Image Classification | CIFAR-10N-Worst | Accuracy (mean) | 80.48 | T-Revision |

Related Papers

- CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)
- Recalling The Forgotten Class Memberships: Unlearned Models Can Be Noisy Labelers to Leak Privacy (2025-06-24)
- On the Role of Label Noise in the Feature Learning Process (2025-05-25)
- Detect and Correct: A Selective Noise Correction Method for Learning with Noisy Labels (2025-05-19)
- Exploring Video-Based Driver Activity Recognition under Noisy Labels (2025-04-16)
- Noise-Aware Generalization: Robustness to In-Domain Noise and Out-of-Domain Generalization (2025-04-03)
- Learning from Noisy Labels with Contrastive Co-Transformer (2025-03-04)
- Enhancing Sample Selection Against Label Noise by Cutting Mislabeled Easy Examples (2025-02-12)