Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach

Giorgio Patrini, Alessandro Rozza, Aditya Menon, Richard Nock, Lizhen Qu

Published: 2016-09-13 · CVPR 2017
Tasks: Image Classification · Learning with Noisy Labels · Noise Estimation
Links: Paper · PDF · Code (official)

Abstract

We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture. They simply amount to at most a matrix inversion and multiplication, provided that we know the probability of each class being corrupted into another. We further show how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and thus providing an end-to-end framework. Extensive experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large scale dataset of clothing images employing a diversity of architectures --- stacking dense, convolutional, pooling, dropout, batch normalization, word embedding, LSTM and residual layers --- demonstrate the noise robustness of our proposals. Incidentally, we also prove that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise.
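The two loss-correction procedures described in the abstract can be sketched in a few lines of NumPy. This is a minimal, single-example illustration under the paper's setup, assuming the noise transition matrix T (with T[i, j] = P(noisy label j | clean label i)) is known or has already been estimated; the function names are illustrative, not the authors' code.

```python
import numpy as np

def forward_corrected_loss(probs, noisy_label, T):
    # Forward correction: push the model's predicted clean-class
    # probabilities through the noise matrix, then take cross-entropy
    # against the observed (possibly corrupted) label.
    noisy_probs = T.T @ probs
    return -np.log(noisy_probs[noisy_label])

def backward_corrected_loss(probs, noisy_label, T):
    # Backward correction: compute the per-class cross-entropy losses
    # and left-multiply by T^{-1}; in expectation this recovers the
    # loss under the clean label distribution.
    per_class_loss = -np.log(probs)
    corrected = np.linalg.inv(T) @ per_class_loss
    return corrected[noisy_label]
```

With T set to the identity, both reduce to ordinary cross-entropy; in practice these corrections are applied per example to the softmax outputs during training, which matches the abstract's claim that they amount to at most a matrix inversion and multiplication.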

Results

Task | Dataset | Metric | Value | Model
---- | ------- | ------ | ----- | -----
Image Classification | Clothing1M (using clean data) | Accuracy | 80.27 | Forward
Image Classification | mini WebVision 1.0 | ImageNet Top-1 Accuracy | 57.36 | F-Correction (Inception-ResNet-v2)
Image Classification | mini WebVision 1.0 | ImageNet Top-5 Accuracy | 82.36 | F-Correction (Inception-ResNet-v2)
Image Classification | mini WebVision 1.0 | Top-1 Accuracy | 61.12 | F-Correction (Inception-ResNet-v2)
Image Classification | mini WebVision 1.0 | Top-5 Accuracy | 82.68 | F-Correction (Inception-ResNet-v2)
Image Classification | CIFAR-10N-Random2 | Accuracy (mean) | 86.28 | Backward-T
Image Classification | CIFAR-10N-Random2 | Accuracy (mean) | 86.14 | Forward-T
Image Classification | CIFAR-10N-Random3 | Accuracy (mean) | 87.04 | Forward-T
Image Classification | CIFAR-10N-Random3 | Accuracy (mean) | 86.86 | Backward-T
Image Classification | CIFAR-10N-Aggregate | Accuracy (mean) | 88.24 | Forward-T
Image Classification | CIFAR-10N-Aggregate | Accuracy (mean) | 88.13 | Backward-T
Image Classification | CIFAR-10N-Random1 | Accuracy (mean) | 87.14 | Backward-T
Image Classification | CIFAR-10N-Random1 | Accuracy (mean) | 86.88 | Forward-T
Image Classification | CIFAR-100N | Accuracy (mean) | 57.14 | Backward-T
Image Classification | CIFAR-100N | Accuracy (mean) | 57.01 | Forward-T
Image Classification | CIFAR-10N-Worst | Accuracy (mean) | 79.79 | Forward-T
Image Classification | CIFAR-10N-Worst | Accuracy (mean) | 77.61 | Backward-T

Related Papers

Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)
Hashed Watermark as a Filter: Defeating Forging and Overwriting Attacks in Weight-based Neural Network Watermarking (2025-07-15)
Transferring Styles for Reduced Texture Bias and Improved Robustness in Semantic Segmentation Networks (2025-07-14)