Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MaskCon: Masked Contrastive Learning for Coarse-Labelled Dataset

Chen Feng, Ioannis Patras

2023-03-22 · CVPR 2023 · Tasks: Learning with coarse labels, Contrastive Learning

Paper · PDF · Code (official)

Abstract

Deep learning has achieved great success in recent years with the aid of advanced neural network structures and large-scale human-annotated datasets. However, it is often costly and difficult to annotate large-scale datasets accurately and efficiently, especially in specialized domains where fine-grained labels are required. In this setting, coarse labels are much easier to acquire, as they do not require expert knowledge. In this work, we propose a contrastive learning method, called $\textbf{Mask}$ed $\textbf{Con}$trastive learning~($\textbf{MaskCon}$), to address the under-explored problem setting where we learn with a coarsely labelled dataset in order to address a finer labelling problem. More specifically, within the contrastive learning framework, for each sample our method generates soft labels with the aid of coarse labels against other samples and another augmented view of the sample in question. In contrast to self-supervised contrastive learning, where only a sample's augmentations are considered hard positives, and supervised contrastive learning, where only samples with the same coarse label are considered hard positives, we propose soft labels based on sample distances that are masked by the coarse labels. This allows us to utilize both inter-sample relations and coarse labels. We demonstrate that our method recovers many existing state-of-the-art works as special cases and that it provides tighter bounds on the generalization error. Experimentally, our method achieves significant improvements over the current state-of-the-art on various datasets, including the CIFAR-10, CIFAR-100, ImageNet-1K, Stanford Online Products, and Stanford Cars196 datasets. Code and annotations are available at https://github.com/MrChenFeng/MaskCon_CVPR2023.
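The core masking idea in the abstract — soft labels from inter-sample similarity, restricted by coarse labels so that samples with a different coarse label can never be positives — can be illustrated with a minimal NumPy sketch. The function name and shapes are hypothetical; this is not the authors' exact formulation, which also treats the sample's own augmented view as a hard positive and operates on features from a learned encoder.

```python
import numpy as np

def maskcon_soft_labels(query, keys, query_coarse, key_coarse, temperature=0.1):
    """Illustrative soft-label generation in the spirit of MaskCon.

    query:        (B, D) query features
    keys:         (K, D) features of other samples (e.g. a memory bank)
    query_coarse: (B,)   coarse labels of queries
    key_coarse:   (K,)   coarse labels of keys
    Returns a (B, K) matrix of soft labels: similarity-based weights,
    masked so that only keys sharing the query's coarse label get weight.
    """
    sims = query @ keys.T / temperature                  # similarity logits
    mask = (query_coarse[:, None] == key_coarse[None, :]).astype(float)
    exp = np.exp(sims) * mask                            # zero out other-coarse keys
    # normalise over the same-coarse-label keys only
    return exp / np.clip(exp.sum(axis=1, keepdims=True), 1e-12, None)
```

Each row is a distribution over the keys that share the query's coarse label, weighted by feature similarity — intermediate between self-supervised contrastive learning (only the augmentation is positive) and supervised contrastive learning (all same-coarse-label samples are equally positive).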

Results

Task           | Dataset                  | Metric    | Value | Model
---------------|--------------------------|-----------|-------|--------
Classification | CIFAR-100                | Recall@1  | 65.52 | MaskCon
Classification | CIFAR-100                | Recall@2  | 74.46 | MaskCon
Classification | CIFAR-100                | Recall@5  | 83.64 | MaskCon
Classification | CIFAR-100                | Recall@10 | 89.25 | MaskCon
Classification | Stanford Cars            | Recall@1  | 45.53 | MaskCon
Classification | Stanford Cars            | Recall@2  | 58.56 | MaskCon
Classification | Stanford Cars            | Recall@5  | 74.36 | MaskCon
Classification | Stanford Cars            | Recall@10 | 84.36 | MaskCon
Classification | ImageNet32               | Recall@1  | 19.08 | MaskCon
Classification | ImageNet32               | Recall@2  | 26.21 | MaskCon
Classification | ImageNet32               | Recall@5  | 38.17 | MaskCon
Classification | ImageNet32               | Recall@10 | 47.96 | MaskCon
Classification | Stanford Online Products | Recall@1  | 74.05 | MaskCon
Classification | Stanford Online Products | Recall@2  | 78.97 | MaskCon
Classification | Stanford Online Products | Recall@5  | 84.48 | MaskCon
Classification | Stanford Online Products | Recall@10 | 87.96 | MaskCon

Related Papers

- SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
- HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
- Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
- SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
- Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
- LLM-Driven Dual-Level Multi-Interest Modeling for Recommendation (2025-07-15)
- Latent Space Consistency for Sparse-View CT Reconstruction (2025-07-15)
- Self-supervised pretraining of vision transformers for animal behavioral analysis and neural encoding (2025-07-13)