Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SimPLE: Similar Pseudo Label Exploitation for Semi-Supervised Classification

Zijian Hu, Zhengyu Yang, Xuefeng Hu, Ram Nevatia

2021-03-30 · CVPR 2021
Tasks: Transfer Learning · General Classification · Classification · Semi-Supervised Image Classification
Paper · PDF · Code (official)

Abstract

A common situation in classification tasks is that a large amount of data is available for training, but only a small portion is annotated with class labels. The goal of semi-supervised training, in this context, is to improve classification accuracy by leveraging information not only from labeled data but also from a large amount of unlabeled data. Recent works have achieved significant improvements by exploiting the consistency constraint between differently augmented labeled and unlabeled data. Following this path, we propose a novel unsupervised objective that focuses on the less studied relationship between high confidence unlabeled data points that are similar to each other. The newly proposed Pair Loss minimizes the statistical distance between high confidence pseudo labels whose similarity is above a certain threshold. Combining the Pair Loss with the techniques developed by the MixMatch family, our proposed SimPLE algorithm shows significant performance gains over previous algorithms on CIFAR-100 and Mini-ImageNet, and is on par with state-of-the-art methods on CIFAR-10 and SVHN. Furthermore, SimPLE also outperforms state-of-the-art methods in the transfer learning setting, where models are initialized with weights pre-trained on ImageNet or DomainNet-Real. The code is available at github.com/zijian-hu/SimPLE.
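The idea behind the Pair Loss can be sketched as follows: for every ordered pair of unlabeled samples, if one sample's pseudo label is high-confidence and the two pseudo labels are sufficiently similar, penalize the distance between the two distributions. The sketch below is a minimal, unvectorized illustration, not the paper's implementation; the threshold values, the Bhattacharyya coefficient as the similarity measure, and cross-entropy as the distance are assumptions chosen for clarity.

```python
import numpy as np

def pair_loss(probs, conf_threshold=0.95, sim_threshold=0.9):
    """Illustrative pairwise pseudo-label loss (not the official code).

    probs: (N, C) array of softmax pseudo-label distributions for a
    batch of unlabeled samples. For every ordered pair (i, j), if
    sample i is confident (max probability above conf_threshold) and
    its pseudo label is similar to sample j's (Bhattacharyya
    coefficient above sim_threshold), add the distance between the
    two distributions to the loss.
    """
    n = len(probs)
    total, count = 0.0, 0
    for i in range(n):
        if probs[i].max() <= conf_threshold:
            continue  # sample i must be a high-confidence anchor
        for j in range(n):
            if i == j:
                continue
            # Bhattacharyya coefficient as the similarity measure (assumed)
            sim = np.sum(np.sqrt(probs[i] * probs[j]))
            if sim > sim_threshold:
                # cross-entropy of j's prediction against i's pseudo label
                total += -np.sum(probs[i] * np.log(probs[j] + 1e-8))
                count += 1
    return total / max(count, 1)
```

With this sketch, a batch containing two confident, mutually similar predictions contributes a positive loss, while a batch of low-confidence (e.g. uniform) predictions contributes nothing, since no pair passes the thresholds.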

Results

Task | Dataset | Metric | Value | Model
Image Classification | Mini-ImageNet, 4000 Labels | Accuracy | 66.55 | SimPLE
Image Classification | CIFAR-100, 10000 Labels | Percentage error | 21.89 | SimPLE (WRN-28-8)
Semi-Supervised Image Classification | Mini-ImageNet, 4000 Labels | Accuracy | 66.55 | SimPLE
Semi-Supervised Image Classification | CIFAR-100, 10000 Labels | Percentage error | 21.89 | SimPLE (WRN-28-8)

Related Papers

RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Best Practices for Large-Scale, Pixel-Wise Crop Mapping and Transfer Learning Workflows (2025-07-16)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
Safeguarding Federated Learning-based Road Condition Classification (2025-07-16)
Robust-Multi-Task Gradient Boosting (2025-07-15)
AI-Enhanced Pediatric Pneumonia Detection: A CNN-Based Approach Using Data Augmentation and Generative Adversarial Networks (GANs) (2025-07-13)