Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Contrastive Regularization for Semi-Supervised Learning

Doyup Lee, Sungwoong Kim, Ildoo Kim, Yeongjae Cheon, Minsu Cho, Wook-Shin Han

2022-01-17 · Semi-Supervised Image Classification

Abstract

Consistency regularization on label predictions has become a fundamental technique in semi-supervised learning, but it still requires a large number of training iterations to reach high performance. In this study, we show that consistency regularization restricts the propagation of label information because samples with unconfident pseudo-labels are excluded from model updates. We then propose contrastive regularization, which improves both the efficiency and the accuracy of consistency regularization through well-clustered features of unlabeled data. Specifically, after strongly augmented samples are assigned to clusters by their pseudo-labels, our contrastive regularization updates the model so that features with confident pseudo-labels aggregate features in the same cluster while pushing away features in different clusters. As a result, the information in confident pseudo-labels is effectively propagated to more unlabeled samples during training via the well-clustered features. On semi-supervised learning benchmarks, our contrastive regularization improves on previous consistency-based methods and achieves state-of-the-art results, especially with fewer training iterations. Our method also shows robust performance on open-set semi-supervised learning, where the unlabeled data includes out-of-distribution samples.
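The mechanism the abstract describes (confident pseudo-labels acting as anchors that pull same-cluster features together and push other clusters apart) can be sketched as a supervised-contrastive-style loss with a confidence mask. This is a minimal NumPy illustration, not the paper's implementation; the function name, the threshold of 0.95, and the temperature of 0.07 are assumptions chosen for the sketch.

```python
import numpy as np

def contrastive_regularization(features, pseudo_labels, confidences,
                               tau=0.07, threshold=0.95):
    """Sketch of a class-aware contrastive loss on pseudo-labeled features.

    features:      (N, D) feature vectors of strongly augmented samples
    pseudo_labels: (N,)   hard pseudo-labels defining the clusters
    confidences:   (N,)   max softmax probability of each pseudo-label

    Only samples whose confidence exceeds `threshold` act as anchors
    (hypothetical choice); each anchor pulls same-cluster features
    together and pushes features of other clusters away.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / tau                       # temperature-scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)            # exclude self-pairs
    # log-softmax over each row: log p(j | anchor i)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = pseudo_labels[:, None] == pseudo_labels[None, :]
    np.fill_diagonal(same, False)
    anchors = confidences >= threshold        # confident pseudo-labels only
    losses = []
    for i in np.where(anchors)[0]:
        pos = same[i]
        if pos.any():                         # mean negative log-prob over positives
            losses.append(-log_prob[i, pos].mean())
    return float(np.mean(losses)) if losses else 0.0
```

Under this sketch, the loss is small when features sharing a pseudo-label are already tightly clustered and large when clusters are mixed, which is the gradient signal that spreads confident label information to nearby unlabeled samples.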

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Image Classification | CIFAR-10, 4000 Labels | Percentage error | 4.16 | FixMatch+CR |
| Image Classification | CIFAR-100, 2500 Labels | Percentage error | 27.58 | FixMatch+CR |
| Image Classification | CIFAR-100, 10000 Labels | Percentage error | 21.03 | FixMatch+CR |
| Image Classification | CIFAR-100, 400 Labels | Percentage error | 49.23 | FixMatch+CR |
| Image Classification | CIFAR-10, 40 Labels | Percentage error | 5.69 | FixMatch+CR |
| Image Classification | CIFAR-10, 250 Labels | Percentage error | 5.04 | FixMatch+CR |
| Semi-Supervised Image Classification | CIFAR-10, 4000 Labels | Percentage error | 4.16 | FixMatch+CR |
| Semi-Supervised Image Classification | CIFAR-100, 2500 Labels | Percentage error | 27.58 | FixMatch+CR |
| Semi-Supervised Image Classification | CIFAR-100, 10000 Labels | Percentage error | 21.03 | FixMatch+CR |
| Semi-Supervised Image Classification | CIFAR-100, 400 Labels | Percentage error | 49.23 | FixMatch+CR |
| Semi-Supervised Image Classification | CIFAR-10, 40 Labels | Percentage error | 5.69 | FixMatch+CR |
| Semi-Supervised Image Classification | CIFAR-10, 250 Labels | Percentage error | 5.04 | FixMatch+CR |

Related Papers

- ViTSGMM: A Robust Semi-Supervised Image Recognition Network Using Sparse Labels (2025-06-04)
- Applications and Effect Evaluation of Generative Adversarial Networks in Semi-Supervised Learning (2025-05-26)
- Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization (2025-05-12)
- Weakly Semi-supervised Whole Slide Image Classification by Two-level Cross Consistency Supervision (2025-04-16)
- Diff-SySC: An Approach Using Diffusion Models for Semi-Supervised Image Classification (2025-02-25)
- SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations (2024-10-03)
- Self Adaptive Threshold Pseudo-labeling and Unreliable Sample Contrastive Loss for Semi-supervised Image Classification (2024-07-04)
- A Method of Moments Embedding Constraint and its Application to Semi-Supervised Learning (2024-04-27)