Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels

Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, Masashi Sugiyama

Published: 2018-04-18 · NeurIPS 2018
Tasks: Image Classification · Learning with Noisy Labels · Memorization
Links: Paper · PDF · Code (official and community implementations)

Abstract

Deep learning with noisy labels is practically challenging, because the capacity of deep models is so high that they can eventually memorize the noisy labels entirely during training. Nonetheless, recent studies on the memorization effects of deep neural networks show that they first memorize training data with clean labels and only later data with noisy labels. In this paper we therefore propose a new deep learning paradigm, Co-teaching, for combating noisy labels. Namely, we train two deep neural networks simultaneously and let them teach each other on every mini-batch: first, each network feeds forward all data and selects samples with probably clean labels; second, the two networks communicate to each other which samples in the mini-batch should be used for training; finally, each network back-propagates on the samples selected by its peer network and updates itself. Empirical results on noisy versions of MNIST, CIFAR-10, and CIFAR-100 demonstrate that Co-teaching yields substantially more robust deep models than state-of-the-art methods.
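The exchange step in the abstract can be sketched in a few lines. This is a minimal, illustrative sketch of the small-loss selection and peer exchange only (not the paper's official code, which also schedules the remember rate over epochs); the function names and toy loss values are assumptions for illustration.

```python
# Hedged sketch of one Co-teaching selection/exchange step: each network
# keeps its small-loss (probably clean) samples, and its *peer* trains on
# them. Function names and the toy losses below are illustrative.

def small_loss_indices(losses, remember_rate):
    """Return indices of the `remember_rate` fraction of samples with the
    smallest loss, treated as probably-clean labels."""
    num_keep = int(remember_rate * len(losses))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i])
    return ranked[:num_keep]

def coteaching_exchange(losses_net1, losses_net2, remember_rate):
    """Each network picks its small-loss samples; the peer updates on them.

    Returns (indices_for_net1, indices_for_net2): net1 back-propagates on
    the samples selected by net2, and vice versa.
    """
    picks_net1 = small_loss_indices(losses_net1, remember_rate)  # net1's picks -> net2
    picks_net2 = small_loss_indices(losses_net2, remember_rate)  # net2's picks -> net1
    return picks_net2, picks_net1

# Toy mini-batch of 8 samples: per-sample losses from the two networks.
losses1 = [0.1, 2.5, 0.3, 1.9, 0.2, 3.1, 0.4, 2.2]
losses2 = [0.2, 2.8, 0.1, 0.3, 2.0, 3.0, 0.5, 1.8]
idx1, idx2 = coteaching_exchange(losses1, losses2, remember_rate=0.5)
print(sorted(idx1))  # samples net1 trains on (chosen by net2) → [0, 2, 3, 6]
print(sorted(idx2))  # samples net2 trains on (chosen by net1) → [0, 2, 4, 6]
```

Because the two networks are initialized differently, they make different errors; exchanging selections prevents either network from reinforcing its own mistakes, which is the core intuition behind Co-teaching's robustness.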

Results

Task                 | Dataset            | Metric                  | Value | Model
---------------------|--------------------|-------------------------|-------|----------------------------------
Image Classification | mini WebVision 1.0 | ImageNet Top-1 Accuracy | 61.48 | Co-teaching (Inception-ResNet-v2)
Image Classification | mini WebVision 1.0 | ImageNet Top-5 Accuracy | 84.7  | Co-teaching (Inception-ResNet-v2)
Image Classification | mini WebVision 1.0 | Top-1 Accuracy          | 63.58 | Co-teaching (Inception-ResNet-v2)
Image Classification | mini WebVision 1.0 | Top-5 Accuracy          | 85.2  | Co-teaching (Inception-ResNet-v2)
Image Classification | CIFAR-10N-Aggregate| Accuracy (mean)         | 91.2  | Co-Teaching
Image Classification | CIFAR-10N-Random1  | Accuracy (mean)         | 90.33 | Co-Teaching
Image Classification | CIFAR-10N-Random2  | Accuracy (mean)         | 90.3  | Co-Teaching
Image Classification | CIFAR-10N-Random3  | Accuracy (mean)         | 90.15 | Co-Teaching
Image Classification | CIFAR-10N-Worst    | Accuracy (mean)         | 83.83 | Co-Teaching
Image Classification | CIFAR-100N         | Accuracy (mean)         | 60.37 | Co-Teaching

Related Papers

Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)
Hashed Watermark as a Filter: Defeating Forging and Overwriting Attacks in Weight-based Neural Network Watermarking (2025-07-15)
What Should LLMs Forget? Quantifying Personal Data in LLMs for Right-to-Be-Forgotten Requests (2025-07-15)