Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Exploring Self-Supervised Regularization for Supervised and Semi-Supervised Learning

Phi Vu Tran

2019-06-25 · Image Classification · Multi-Task Learning · Semi-Supervised Image Classification

Paper · PDF · Code (official)

Abstract

Recent advances in semi-supervised learning have shown tremendous potential in overcoming a major barrier to the success of modern machine learning algorithms: access to vast amounts of human-labeled training data. Previous algorithms based on consistency regularization can harness the abundance of unlabeled data to produce impressive results on a number of semi-supervised benchmarks, approaching the performance of strong supervised baselines using only a fraction of the available labeled data. In this work, we challenge the long-standing success of consistency regularization by introducing self-supervised regularization as the basis for combining semantic feature representations from unlabeled data. We perform extensive comparative experiments to demonstrate the effectiveness of self-supervised regularization for supervised and semi-supervised image classification on SVHN, CIFAR-10, and CIFAR-100 benchmark datasets. We present two main results: (1) models augmented with self-supervised regularization significantly improve upon traditional supervised classifiers without the need for unlabeled data; (2) together with unlabeled data, our models yield semi-supervised performance competitive with, and in many cases exceeding, prior state-of-the-art consistency baselines. Lastly, our models have the practical utility of being efficiently trained end-to-end and require no additional hyper-parameters to tune for optimal performance beyond the standard set for training neural networks. Reference code and data are available at https://github.com/vuptran/sesemi
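The core idea above — replacing consistency regularization with a self-supervised auxiliary loss trained jointly with the supervised objective — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation (the released code at the linked repository defines the actual recipe): the rotation-prediction proxy task, the function names, and the fixed loss weighting are assumptions made for the sketch.

```python
import numpy as np

def make_rotation_task(images):
    """Build a self-supervised proxy task from (possibly unlabeled) images:
    each image is rotated by 0, 90, 180, and 270 degrees, and the proxy
    label is the rotation index (0-3). `images` has shape (N, H, W, C).
    (Illustrative choice of proxy task, not necessarily the paper's.)"""
    rotated = np.concatenate(
        [np.rot90(images, k=k, axes=(1, 2)) for k in range(4)]
    )
    proxy_labels = np.repeat(np.arange(4), len(images))
    return rotated, proxy_labels

def softmax_cross_entropy(logits, labels):
    """Numerically stable mean cross-entropy over integer class labels."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def joint_loss(sup_logits, sup_labels, self_logits, self_labels, weight=1.0):
    """Joint objective: supervised loss plus a weighted self-supervised
    proxy loss. In the full model the two classification heads would
    share a common convolutional backbone."""
    return (softmax_cross_entropy(sup_logits, sup_labels)
            + weight * softmax_cross_entropy(self_logits, self_labels))
```

Because the auxiliary head only needs the images themselves to generate its labels, the proxy loss can be computed on labeled data alone (the supervised-only setting) or on additional unlabeled data (the semi-supervised setting), which is the distinction behind the paper's two main results.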

Results

Task                                 | Dataset                 | Metric           | Value | Model
Image Classification                 | CIFAR-10, 4000 Labels   | Percentage error | 11.65 | SESEMI SSL (ConvNet)
Image Classification                 | CIFAR-10, 2000 Labels   | Accuracy         | 85.78 | SESEMI SSL (ConvNet)
Image Classification                 | SVHN, 500 Labels        | Accuracy         | 93.5  | SESEMI SSL (ConvNet)
Image Classification                 | CIFAR-10, 1000 Labels   | Accuracy         | 82.12 | SESEMI SSL (ConvNet)
Image Classification                 | CIFAR-100, 10000 Labels | Percentage error | 38.7  | SESEMI SSL (ConvNet)
Image Classification                 | SVHN, 1000 Labels       | Accuracy         | 94.41 | SESEMI SSL (ConvNet)
Image Classification                 | SVHN, 250 Labels        | Accuracy         | 91.68 | SESEMI SSL (ConvNet)
Semi-Supervised Image Classification | CIFAR-10, 4000 Labels   | Percentage error | 11.65 | SESEMI SSL (ConvNet)
Semi-Supervised Image Classification | CIFAR-10, 2000 Labels   | Accuracy         | 85.78 | SESEMI SSL (ConvNet)
Semi-Supervised Image Classification | SVHN, 500 Labels        | Accuracy         | 93.5  | SESEMI SSL (ConvNet)
Semi-Supervised Image Classification | CIFAR-10, 1000 Labels   | Accuracy         | 82.12 | SESEMI SSL (ConvNet)
Semi-Supervised Image Classification | CIFAR-100, 10000 Labels | Percentage error | 38.7  | SESEMI SSL (ConvNet)
Semi-Supervised Image Classification | SVHN, 1000 Labels       | Accuracy         | 94.41 | SESEMI SSL (ConvNet)
Semi-Supervised Image Classification | SVHN, 250 Labels        | Accuracy         | 91.68 | SESEMI SSL (ConvNet)

Related Papers

- Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
- Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
- Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
- Federated Learning for Commercial Image Sources (2025-07-17)
- MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
- SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
- Hashed Watermark as a Filter: Defeating Forging and Overwriting Attacks in Weight-based Neural Network Watermarking (2025-07-15)
- Robust-Multi-Task Gradient Boosting (2025-07-15)