Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SequenceMatch: Revisiting the design of weak-strong augmentations for Semi-supervised learning

Khanh-Binh Nguyen

2023-10-24 · Semi-Supervised Image Classification
Paper · PDF · Code (official)

Abstract

Semi-supervised learning (SSL) has become popular in recent years because it allows a model to be trained on a large amount of unlabeled data. However, many SSL methods suffer from confirmation bias, which occurs when the model overfits the small labeled training dataset and produces overconfident, incorrect predictions. To address this issue, we propose SequenceMatch, an efficient SSL method that utilizes multiple data augmentations. The key element of SequenceMatch is the inclusion of a medium augmentation for unlabeled data. By taking advantage of different augmentations and the consistency constraints between each pair of augmented examples, SequenceMatch helps reduce the divergence between the model's prediction distributions for weakly and strongly augmented examples. In addition, SequenceMatch defines two different consistency constraints for high- and low-confidence predictions. As a result, SequenceMatch is more data-efficient than ReMixMatch, and more time-efficient than both ReMixMatch ($\times4$) and CoMatch ($\times2$) while achieving higher accuracy. Despite its simplicity, SequenceMatch consistently outperforms prior methods on standard benchmarks such as CIFAR-10/100, SVHN, and STL-10. It also surpasses prior state-of-the-art methods by a large margin on large-scale datasets such as ImageNet, with a 38.46\% error rate. Code is available at https://github.com/beandkay/SequenceMatch.
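The abstract describes pairwise consistency between weak, medium, and strong augmentations, with separate treatment of high- and low-confidence predictions. The following is a minimal NumPy sketch of how such an unlabeled-data loss could look; the function name, the 0.95 threshold, and the exact pairing of cross-entropy and KL terms are illustrative assumptions, not the paper's official implementation (see the linked repository for that).

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-8):
    # KL(p || q) per example, summed over classes
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def sequencematch_loss(logits_weak, logits_med, logits_strong, tau=0.95):
    """Sketch of a SequenceMatch-style unlabeled loss (assumed form).

    High-confidence examples: hard pseudo-label (from the weak view)
    cross-entropy on the medium and strong views.
    Low-confidence examples: soft KL consistency between adjacent
    augmentation pairs (weak->medium, medium->strong).
    """
    p_w = softmax(logits_weak)
    p_m = softmax(logits_med)
    p_s = softmax(logits_strong)

    conf = p_w.max(axis=-1)
    mask = conf >= tau                      # confident pseudo-labels only
    pseudo = p_w.argmax(axis=-1)
    idx = np.arange(len(pseudo))

    # hard-label cross-entropy on the two stronger views
    ce = -(np.log(p_m[idx, pseudo] + 1e-8) + np.log(p_s[idx, pseudo] + 1e-8))
    # soft consistency between each adjacent pair of augmentations
    kl = kl_div(p_w, p_m) + kl_div(p_m, p_s)

    return np.where(mask, ce, kl).mean()
```

In this sketch the medium augmentation acts as a bridge: instead of a single weak-to-strong constraint, divergence is penalized between each adjacent pair in the weak-medium-strong sequence, which matches the abstract's description of consistency constraints between each pair of augmented examples.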

Results

Task | Dataset | Metric | Value | Model
Image Classification | ImageNet - 10% labeled data | Top 5 Accuracy | 91.9 | SequenceMatch (ResNet-50)
Semi-Supervised Image Classification | ImageNet - 10% labeled data | Top 5 Accuracy | 91.9 | SequenceMatch (ResNet-50)

Related Papers

ViTSGMM: A Robust Semi-Supervised Image Recognition Network Using Sparse Labels (2025-06-04)
Applications and Effect Evaluation of Generative Adversarial Networks in Semi-Supervised Learning (2025-05-26)
Simple Semi-supervised Knowledge Distillation from Vision-Language Models via $\mathbf{\texttt{D}}$ual-$\mathbf{\texttt{H}}$ead $\mathbf{\texttt{O}}$ptimization (2025-05-12)
Weakly Semi-supervised Whole Slide Image Classification by Two-level Cross Consistency Supervision (2025-04-16)
Diff-SySC: An Approach Using Diffusion Models for Semi-Supervised Image Classification (2025-02-25)
SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations (2024-10-03)
Self Adaptive Threshold Pseudo-labeling and Unreliable Sample Contrastive Loss for Semi-supervised Image Classification (2024-07-04)
A Method of Moments Embedding Constraint and its Application to Semi-Supervised Learning (2024-04-27)