Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels

Zhaowei Zhu, Yiwen Song, Yang Liu

2021-02-10 · Image Classification · Image Classification with Label Noise · Image Classification with Human Noise · Learning with Noisy Labels

Paper · PDF · Code (official)

Abstract

The label noise transition matrix, characterizing the probabilities of a training instance being wrongly annotated, is crucial to designing popular solutions to learning with noisy labels. Existing works heavily rely on finding "anchor points" or their approximates, defined as instances belonging to a particular class almost surely. Nonetheless, finding anchor points remains a non-trivial task, and the estimation accuracy is also often throttled by the number of available anchor points. In this paper, we propose an alternative option to the above task. Our main contribution is the discovery of an efficient estimation procedure based on a clusterability condition. We prove that with clusterable representations of features, using up to third-order consensuses of noisy labels among neighbor representations is sufficient to estimate a unique transition matrix. Compared with methods using anchor points, our approach uses substantially more instances and benefits from a much better sample complexity. We demonstrate the estimation accuracy and advantages of our estimates using both synthetic noisy labels (on CIFAR-10/100) and real human-level noisy labels (on Clothing1M and our self-collected human-annotated CIFAR-10). Our code and human-level noisy CIFAR-10 labels are available at https://github.com/UCSC-REAL/HOC.
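The clusterability idea in the abstract can be illustrated with a small sketch: pair each training point with its two nearest neighbors in feature space and tally the first-, second-, and third-order agreement frequencies of their noisy labels. These consensus statistics are the quantities from which the paper's HOC estimator recovers the transition matrix; the sketch below only computes the counts, not the full estimator, and the function name `consensus_stats` is hypothetical rather than taken from the released code.

```python
# Sketch of the neighbor-consensus statistics underlying HOC-style
# transition-matrix estimation. Assumes clusterable features, i.e. a
# point's nearest neighbors tend to share its true class.
import numpy as np

def consensus_stats(features, noisy_labels, num_classes):
    """Empirical 1st/2nd/3rd-order consensuses of noisy labels among
    each point and its two nearest neighbors (brute-force distances)."""
    n = len(noisy_labels)
    # pairwise squared distances; mask the diagonal so a point is
    # never its own neighbor
    d = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :2]  # two nearest neighbors per point
    y0 = noisy_labels
    y1 = noisy_labels[nn[:, 0]]
    y2 = noisy_labels[nn[:, 1]]

    c1 = np.zeros(num_classes)                 # P(ỹ = i)
    c2 = np.zeros((num_classes, num_classes))  # P(ỹ = i, ỹ_nn1 = j)
    c3 = np.zeros((num_classes,) * 3)          # third-order consensus
    for a, b, c in zip(y0, y1, y2):
        c1[a] += 1
        c2[a, b] += 1
        c3[a, b, c] += 1
    return c1 / n, c2 / n, c3 / n
```

Because every instance contributes a neighbor tuple, all training points feed the counts, which is the sample-complexity advantage the abstract claims over anchor-point methods that use only a handful of near-certain instances.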

Results

Task                 | Dataset             | Metric          | Value | Model
Image Classification | CIFAR-10N-Aggregate | Accuracy (mean) | 91.97 | CAL
Image Classification | CIFAR-10N-Random1   | Accuracy (mean) | 90.93 | CAL
Image Classification | CIFAR-10N-Random2   | Accuracy (mean) | 90.75 | CAL
Image Classification | CIFAR-10N-Random3   | Accuracy (mean) | 90.74 | CAL
Image Classification | CIFAR-10N-Worst     | Accuracy (mean) | 85.36 | CAL
Image Classification | CIFAR-100N          | Accuracy (mean) | 61.73 | CAL

Related Papers

- Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
- Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
- Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
- Federated Learning for Commercial Image Sources (2025-07-17)
- MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
- CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)
- Hashed Watermark as a Filter: Defeating Forging and Overwriting Attacks in Weight-based Neural Network Watermarking (2025-07-15)
- Transferring Styles for Reduced Texture Bias and Improved Robustness in Semantic Segmentation Networks (2025-07-14)