Papers With Code 2

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

CLIPCleaner: Cleaning Noisy Labels with CLIP

Chen Feng, Georgios Tzimiropoulos, Ioannis Patras

2024-08-19 · Learning with Noisy Labels

Paper · PDF · Code (official)

Abstract

Learning with Noisy Labels (LNL) poses a significant challenge for the machine learning community. Some of the most widely used approaches select as clean those samples for which the model itself (the in-training model) has high confidence, e.g. a "small loss". These approaches can suffer from so-called "self-confirmation" bias, which arises because the in-training model is at least partially trained on the noisy labels. Furthermore, in the classification case, an additional challenge arises because some of the label noise lies between classes that are visually very similar ("hard noise"). This paper addresses these challenges by proposing a method, CLIPCleaner, that leverages CLIP, a powerful Vision-Language (VL) model, to construct a zero-shot classifier for efficient, offline, clean-sample selection. This has the advantage that the sample selection is decoupled from the in-training model, and that the selection is aware of the semantic and visual similarities between classes, due to the way CLIP is trained. We provide theoretical justification and empirical evidence for the advantages of CLIP for LNL over conventional pre-trained models. Compared to current methods that combine iterative sample selection with various other techniques, CLIPCleaner offers a simple, single-step approach that achieves competitive or superior performance on benchmark datasets. To the best of our knowledge, this is the first time a VL model has been used for sample selection to address the problem of Learning with Noisy Labels, highlighting the potential of such models in this domain.
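The core idea described in the abstract — using a zero-shot classifier, built from class-name prompts, to select clean samples independently of the in-training model — can be sketched as follows. This is an illustration only, not the authors' code: random unit vectors stand in for CLIP image and text embeddings, and the agreement rule (keep a sample if the zero-shot prediction matches its dataset label) is a simplified stand-in for the paper's selection procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x):
    # CLIP-style embeddings are compared by cosine similarity,
    # so we work with unit-norm vectors.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

num_classes, num_samples, dim = 3, 6, 8

# Stand-ins for CLIP text embeddings of class-name prompts
# (one prompt per class).
text_emb = l2_normalize(rng.normal(size=(num_classes, dim)))

# Stand-ins for CLIP image embeddings: each image lies close to the
# text embedding of its true class.
true_labels = rng.integers(0, num_classes, size=num_samples)
image_emb = l2_normalize(
    text_emb[true_labels] + 0.05 * rng.normal(size=(num_samples, dim))
)

# Simulate label noise: corrupt the last two dataset labels.
noisy_labels = true_labels.copy()
noisy_labels[-2:] = (noisy_labels[-2:] + 1) % num_classes

# Zero-shot prediction: cosine similarity between each image and
# every class prompt, then argmax over classes.
logits = image_emb @ text_emb.T
zero_shot_pred = logits.argmax(axis=1)

# Offline clean-sample selection: keep samples whose zero-shot
# prediction agrees with the (possibly noisy) dataset label.
# Note this never consults the in-training model, which is what
# decouples selection from self-confirmation bias.
clean_mask = zero_shot_pred == noisy_labels
print(clean_mask)
```

In a real pipeline the stand-in embeddings would come from a pretrained CLIP model, and the selected subset would then be used to train the downstream classifier.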

Results

Task                 | Dataset                            | Metric        | Value | Model
Image Classification | Red MiniImageNet (20% label noise) | Test Accuracy | 61.44 | CLIPCleaner
Image Classification | Red MiniImageNet (40% label noise) | Test Accuracy | 58.42 | CLIPCleaner
Image Classification | Red MiniImageNet (60% label noise) | Test Accuracy | 53.18 | CLIPCleaner
Image Classification | Red MiniImageNet (80% label noise) | Test Accuracy | 43.82 | CLIPCleaner
Image Classification | ANIMAL                             | Accuracy      | 88.85 | CLIPCleaner
Image Classification | Clothing1M                         | Test Accuracy | 74.87 | CLIPCleaner

Related Papers

CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)
Recalling The Forgotten Class Memberships: Unlearned Models Can Be Noisy Labelers to Leak Privacy (2025-06-24)
On the Role of Label Noise in the Feature Learning Process (2025-05-25)
Detect and Correct: A Selective Noise Correction Method for Learning with Noisy Labels (2025-05-19)
Exploring Video-Based Driver Activity Recognition under Noisy Labels (2025-04-16)
Noise-Aware Generalization: Robustness to In-Domain Noise and Out-of-Domain Generalization (2025-04-03)
Learning from Noisy Labels with Contrastive Co-Transformer (2025-03-04)
Enhancing Sample Selection Against Label Noise by Cutting Mislabeled Easy Examples (2025-02-12)