Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning

Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Shin Ishii

Published: 2017-04-13 · Task: Semi-Supervised Image Classification
Links: Paper · PDF · Code (official and community implementations)

Abstract

We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only "virtually" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
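The approximation described in the abstract — find the most sensitive perturbation direction by power iteration, then penalize the KL divergence between the predictions on the clean and perturbed input — can be sketched on a toy linear-softmax model. This is an illustrative sketch, not the paper's implementation: the paper applies the same idea to neural networks, where the gradient step below is computed by backpropagation. For a model p(y|x) = softmax(Wx), the gradient of KL(p(·|x) ‖ p(·|x+r)) with respect to r is Wᵀ(q − p), so no autodiff is needed here. Note that no label appears anywhere, which is exactly why the method extends to semi-supervised learning.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                         # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    """KL divergence between two discrete distributions (both strictly positive)."""
    return float(np.sum(p * (np.log(p) - np.log(q))))

def vat_loss(W, x, eps=1.0, xi=1e-6, n_power=1, rng=None):
    """Virtual adversarial loss for a toy linear-softmax model p(y|x) = softmax(W x).

    The adversarial direction is estimated by power iteration: the gradient of
    KL(p || p_perturbed) at a tiny perturbation xi*d is approximately xi * H d,
    where H is the Hessian at r = 0, so repeatedly normalizing the gradient
    converges to the most sensitive direction. For this toy model that gradient
    is W^T (q - p) in closed form.
    """
    rng = np.random.default_rng(rng)
    p = softmax(W @ x)                      # current predictive distribution
    d = rng.standard_normal(x.shape)        # random initial direction
    d /= np.linalg.norm(d)
    for _ in range(n_power):
        q = softmax(W @ (x + xi * d))       # prediction at the tiny perturbation
        g = W.T @ (q - p)                   # grad of the KL w.r.t. the perturbation
        d = g / (np.linalg.norm(g) + 1e-12)
    r_adv = eps * d                         # virtual adversarial perturbation
    q_adv = softmax(W @ (x + r_adv))
    return kl(p, q_adv)                     # smoothness penalty; no labels used
```

In a neural network, the closed-form gradient is replaced by one extra forward and backward pass per power iteration, which is where the paper's "no more than two pairs of forward- and back-propagations" figure comes from (one pair for the direction, one for the loss).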

Results

| Task | Dataset | Metric | Value | Model |
|------|---------|--------|-------|-------|
| Image Classification | CIFAR-10, 4000 Labels | Percentage error | 10.55 | VAT+EntMin |
| Image Classification | CIFAR-10, 4000 Labels | Percentage error | 11.36 | VAT |
| Image Classification | SVHN, 1000 Labels | Accuracy | 94.58 | VAT |
| Image Classification | CIFAR-10, 250 Labels | Percentage correct | 63.97 | VAT |
| Image Classification | SVHN, 250 Labels | Accuracy | 91.59 | VAT |
| Image Classification | CIFAR-10, 250 Labels | Percentage error | 36.03 | VAT |
| Semi-Supervised Image Classification | CIFAR-10, 4000 Labels | Percentage error | 10.55 | VAT+EntMin |
| Semi-Supervised Image Classification | CIFAR-10, 4000 Labels | Percentage error | 11.36 | VAT |
| Semi-Supervised Image Classification | SVHN, 1000 Labels | Accuracy | 94.58 | VAT |
| Semi-Supervised Image Classification | CIFAR-10, 250 Labels | Percentage correct | 63.97 | VAT |
| Semi-Supervised Image Classification | SVHN, 250 Labels | Accuracy | 91.59 | VAT |
| Semi-Supervised Image Classification | CIFAR-10, 250 Labels | Percentage error | 36.03 | VAT |

Related Papers

- ViTSGMM: A Robust Semi-Supervised Image Recognition Network Using Sparse Labels (2025-06-04)
- Applications and Effect Evaluation of Generative Adversarial Networks in Semi-Supervised Learning (2025-05-26)
- Simple Semi-supervised Knowledge Distillation from Vision-Language Models via $\mathbf{\texttt{D}}$ual-$\mathbf{\texttt{H}}$ead $\mathbf{\texttt{O}}$ptimization (2025-05-12)
- Weakly Semi-supervised Whole Slide Image Classification by Two-level Cross Consistency Supervision (2025-04-16)
- Diff-SySC: An Approach Using Diffusion Models for Semi-Supervised Image Classification (2025-02-25)
- SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations (2024-10-03)
- Self Adaptive Threshold Pseudo-labeling and Unreliable Sample Contrastive Loss for Semi-supervised Image Classification (2024-07-04)
- A Method of Moments Embedding Constraint and its Application to Semi-Supervised Learning (2024-04-27)