Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Confidence Regularized Self-Training

Yang Zou, Zhiding Yu, Xiaofeng Liu, B. V. K. Vijaya Kumar, Jinsong Wang

Published: 2019-08-26 · ICCV 2019
Tasks: Image Classification, Semantic Segmentation, Synthetic-to-Real Translation, Unsupervised Domain Adaptation, Domain Adaptation
Links: Paper · PDF · Code (official)

Abstract

Recent advances in domain adaptation show that deep self-training presents a powerful means for unsupervised domain adaptation. These methods often involve an iterative process of predicting on the target domain and then taking the confident predictions as pseudo-labels for retraining. However, since pseudo-labels can be noisy, self-training can put overconfident label belief on wrong classes, leading to deviated solutions with propagated errors. To address this problem, we propose a confidence regularized self-training (CRST) framework, formulated as regularized self-training. Our method treats pseudo-labels as continuous latent variables jointly optimized via alternating optimization. We propose two types of confidence regularization: label regularization (LR) and model regularization (MR). CRST-LR generates soft pseudo-labels, while CRST-MR encourages smoothness of the network output. Extensive experiments on image classification and semantic segmentation show that CRSTs outperform their non-regularized counterparts with state-of-the-art performance. The code and models of this work are available at https://github.com/yzou2/CRST.
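As a rough illustration (not the paper's exact objective), the loop described in the abstract — predict on the target domain, keep confident predictions as pseudo-labels, retrain with a confidence regularizer — can be sketched as below. The function names, the 0.9 confidence threshold, and the KL-to-uniform regularizer (in the spirit of the paper's MRKLD model regularization) are illustrative assumptions; the exact formulation is in the paper and official repository.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Vanilla self-training step: keep only confident target-domain
    predictions as hard pseudo-labels. probs has shape (n, num_classes)."""
    conf = probs.max(axis=-1)
    mask = conf >= threshold          # which samples get a pseudo-label
    hard = probs.argmax(axis=-1)      # the pseudo-label itself
    return hard, mask

def crst_loss(probs, pseudo, mask, reg_weight=0.1, eps=1e-8):
    """Cross-entropy on confident pseudo-labels plus a confidence
    regularizer, -(1/K) * sum_k log p_k, which is (up to a constant)
    the KL divergence from the uniform distribution to the prediction.
    Large values mean peaked (overconfident) outputs, so adding it
    discourages putting all belief on one possibly wrong class."""
    n, k = probs.shape
    ce = -np.log(probs[np.arange(n), pseudo] + eps)
    reg = -np.log(probs + eps).mean(axis=-1)
    loss = np.where(mask, ce + reg_weight * reg, 0.0)
    return loss.sum() / max(mask.sum(), 1)
```

Note that the regularizer is largest for peaked predictions: for a near one-hot output it far exceeds its value at the uniform distribution, which is what keeps retraining from reinforcing overconfident mistakes.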

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image-to-Image Translation | SYNTHIA-to-Cityscapes | mIoU (13 classes) | 48.7 | LRENT (DeepLabv2) |
| Image-to-Image Translation | GTAV-to-Cityscapes Labels | mIoU | 49.8 | CRST (MRKLD-SP-MST) |
| Domain Adaptation | Office-31 | Average Accuracy | 86.8 | MRKLD + LRENT |
| Domain Adaptation | VisDA2017 | Accuracy | 78.1 | MRKLD + LRENT |
| Domain Adaptation | VisDA2017 | Accuracy | 78.1 | CRST |

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)