Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Partial Label Supervision for Agnostic Generative Noisy Label Learning

Fengbei Liu, Chong Wang, Yuanhong Chen, Yuyuan Liu, Gustavo Carneiro

2023-08-02 · Learning with noisy labels · Image Generation · Partial Label Learning

Paper · PDF · Code (official)

Abstract

Noisy label learning has been tackled with both discriminative and generative approaches. Despite the simplicity and efficiency of discriminative methods, generative models offer a more principled way of disentangling clean and noisy labels and estimating the label transition matrix. However, existing generative methods often require inferring additional latent variables through costly generative modules or heuristic assumptions, which hinders adaptive optimisation for different causal directions. They also assume a uniform clean label prior, which does not reflect the sample-wise clean label distribution and uncertainty. In this paper, we propose a novel framework for generative noisy label learning that addresses these challenges. First, we propose a new single-stage optimisation that directly approximates image generation by a discriminative classifier output. This approximation significantly reduces the computation cost of image generation, preserves the benefits of generative modelling, and makes our framework agnostic with regard to different causality scenarios (i.e., images generate labels, or vice versa). Second, we introduce a new Partial Label Supervision (PLS) for noisy label learning that accounts for both clean label coverage and uncertainty. The supervision of PLS does not merely aim at minimising loss, but seeks to capture the underlying sample-wise clean label distribution and uncertainty. Extensive experiments on computer vision and natural language processing (NLP) benchmarks demonstrate that our generative modelling achieves state-of-the-art results while significantly reducing the computation cost. Our code is available at https://github.com/lfb-1/GNL.
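The Partial Label Supervision idea described in the abstract can be illustrated with a minimal sketch. Note this is an assumption-laden toy version, not the paper's exact formulation (the official implementation is at the GitHub link above): here a candidate label set is formed per sample from the given noisy label plus the classifier's current top-k predictions, so the unknown clean label is likely covered and the set size reflects sample-wise uncertainty; the loss then rewards probability mass placed anywhere inside the candidate set, a standard partial-label objective.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def build_candidate_sets(probs, noisy_labels, k=2):
    # Hypothetical candidate-set construction: union of the noisy label
    # and the classifier's top-k predictions. Larger sets encode higher
    # sample-wise uncertainty about the clean label.
    n, c = probs.shape
    topk = np.argsort(-probs, axis=1)[:, :k]
    candidates = np.zeros((n, c), dtype=bool)
    candidates[np.arange(n)[:, None], topk] = True
    candidates[np.arange(n), noisy_labels] = True
    return candidates

def partial_label_loss(logits, candidates):
    # Negative log of the total probability mass on the candidate set:
    # any label inside the set is allowed to explain the sample.
    p = softmax(logits)
    mass = (p * candidates).sum(axis=1)
    return -np.log(mass + 1e-12).mean()

# Toy batch: 2 samples, 3 classes, with (possibly wrong) noisy labels.
logits = np.array([[2.0, 0.5, 0.1],
                   [0.1, 1.5, 0.3]])
noisy = np.array([2, 0])
cands = build_candidate_sets(softmax(logits), noisy, k=1)
loss = partial_label_loss(logits, cands)
```

The key design choice mirrored here is that supervision targets a set rather than a single (possibly corrupted) label, trading pinpoint accuracy for guaranteed clean-label coverage.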

Results

Task | Dataset | Metric | Value | Model
--- | --- | --- | --- | ---
Image Classification | CIFAR-10N-Random2 | Accuracy (mean) | 91.42 | GNL
Image Classification | CIFAR-10N-Random3 | Accuracy (mean) | 91.83 | GNL
Image Classification | ANIMAL | Accuracy | 85.9 | GNL
Image Classification | CIFAR-10N-Aggregate | Accuracy (mean) | 92.57 | GNL
Image Classification | CIFAR-10N-Random1 | Accuracy (mean) | 91.97 | GNL
Image Classification | CIFAR-10N-Worst | Accuracy (mean) | 86.99 | GNL

Related Papers

fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting (2025-07-17)
Synthesizing Reality: Leveraging the Generative AI-Powered Platform Midjourney for Construction Worker Detection (2025-07-17)
FashionPose: Text to Pose to Relight Image Generation for Personalized Fashion Visualization (2025-07-17)
A Distributed Generative AI Approach for Heterogeneous Multi-Domain Environments under Data Sharing constraints (2025-07-17)
Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)
CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)
FADE: Adversarial Concept Erasure in Flow Models (2025-07-16)
CharaConsist: Fine-Grained Consistent Character Generation (2025-07-15)