Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


EnAET: A Self-Trained Framework for Semi-Supervised and Supervised Learning with Ensemble Transformations

Xiao Wang, Daisuke Kihara, Jiebo Luo, Guo-Jun Qi

2019-11-21 · Image Classification · Semi-Supervised Image Classification

Paper · PDF · Code (official) · Code

Abstract

Deep neural networks have been successfully applied to many real-world applications. However, such successes rely heavily on large amounts of labeled data, which are expensive to obtain. Recently, many methods for semi-supervised learning have been proposed and have achieved excellent performance. In this study, we propose a new EnAET framework to further improve existing semi-supervised methods with self-supervised information. To the best of our knowledge, all current semi-supervised methods improve performance with prediction consistency and confidence ideas. We are the first to explore the role of self-supervised representations in semi-supervised learning under a rich family of transformations. Consequently, our framework can integrate the self-supervised information as a regularization term to further improve all current semi-supervised methods. In the experiments, we use MixMatch, the current state-of-the-art method in semi-supervised learning, as a baseline to test the proposed EnAET framework. Across different datasets, we adopt the same hyper-parameters, which greatly improves the generalization ability of the EnAET framework. Experimental results on different datasets demonstrate that the proposed EnAET framework greatly improves the performance of current semi-supervised algorithms. Moreover, this framework can also improve supervised learning by a large margin, including extremely challenging scenarios with only 10 images per class. The code and experiment records are available at https://github.com/maple-research-lab/EnAET.
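The abstract's core idea — adding a self-supervised transformation-prediction term and a prediction-consistency term on top of a semi-supervised loss — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the toy linear `encode`, the rotation `transform`, the weights `W`/`V`, and the coefficients `lam`/`gamma` are all assumptions made for the example.

```python
import numpy as np

def encode(x, W):
    """Toy linear encoder standing in for the network backbone."""
    return np.tanh(x @ W)

def transform(x, theta):
    """One parameterized transformation from the ensemble (here: 2-D rotation)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return x @ R.T

def enaet_style_loss(x, theta, W, V, p_orig, p_trans, lam=1.0, gamma=1.0):
    """Regularization terms added to a semi-supervised loss (computed elsewhere):
    - AET term: regress the transformation parameter theta from the features
      of the original and transformed inputs (MSE);
    - consistency term: KL divergence between the model's predicted class
      distributions on x and on transform(x, theta)."""
    z = encode(x, W)
    z_t = encode(transform(x, theta), W)
    theta_hat = np.concatenate([z, z_t]) @ V        # AET decoder (regressor)
    aet_loss = float((theta_hat - theta) ** 2)      # MSE on the transform parameter
    kl = float(np.sum(p_orig * np.log(p_orig / p_trans)))  # prediction consistency
    return lam * aet_loss + gamma * kl
```

In the real framework, these terms would be summed with the baseline (e.g. MixMatch) objective over an ensemble of spatial and non-spatial transformations, and the encoder and AET decoder would be trained jointly.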

Results

Task | Dataset | Metric | Value | Model
Image Classification | CIFAR-10 | Percentage correct | 98.01 | EnAET
Image Classification | CIFAR-100 | Percentage correct | 83.13 | EnAET
Image Classification | STL-10 | Percentage correct | 95.48 | EnAET
Image Classification | SVHN | Percentage error | 2.22 | EnAET
Image Classification | CIFAR-10, 4000 Labels | Percentage error | 4.18 | EnAET
Image Classification | CIFAR-100, 1000 Labels | Percentage correct | 41.27 | EnAET
Image Classification | STL-10, 1000 Labels | Accuracy | 91.96 | EnAET
Image Classification | CIFAR-100, 10000 Labels | Percentage error | 22.92 | EnAET (WRN-28-2-Large)
Image Classification | STL-10 | Accuracy | 95.48 | EnAET
Image Classification | SVHN, 1000 Labels | Accuracy | 97.58 | EnAET
Image Classification | CIFAR-100, 5000 Labels | Percentage correct | 68.17 | EnAET
Image Classification | CIFAR-10, 250 Labels | Percentage correct | 92.4 | EnAET
Image Classification | SVHN, 250 Labels | Accuracy | 96.79 | EnAET
Semi-Supervised Image Classification | CIFAR-10, 4000 Labels | Percentage error | 4.18 | EnAET
Semi-Supervised Image Classification | CIFAR-100, 1000 Labels | Percentage correct | 41.27 | EnAET
Semi-Supervised Image Classification | STL-10, 1000 Labels | Accuracy | 91.96 | EnAET
Semi-Supervised Image Classification | CIFAR-100, 10000 Labels | Percentage error | 22.92 | EnAET (WRN-28-2-Large)
Semi-Supervised Image Classification | STL-10 | Accuracy | 95.48 | EnAET
Semi-Supervised Image Classification | SVHN, 1000 Labels | Accuracy | 97.58 | EnAET
Semi-Supervised Image Classification | CIFAR-100, 5000 Labels | Percentage correct | 68.17 | EnAET
Semi-Supervised Image Classification | CIFAR-10, 250 Labels | Percentage correct | 92.4 | EnAET
Semi-Supervised Image Classification | SVHN, 250 Labels | Accuracy | 96.79 | EnAET

Related Papers

Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
Hashed Watermark as a Filter: Defeating Forging and Overwriting Attacks in Weight-based Neural Network Watermarking (2025-07-15)
Transferring Styles for Reduced Texture Bias and Improved Robustness in Semantic Segmentation Networks (2025-07-14)
FedGSCA: Medical Federated Learning with Global Sample Selector and Client Adaptive Adjuster under Label Noise (2025-07-13)