Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic Segmentation

Binhui Xie, Shuang Li, Mingjia Li, Chi Harold Liu, Gao Huang, Guoren Wang

Published: 2022-04-19
Tasks: Semantic Segmentation, Synthetic-to-Real Translation, Image-to-Image Translation, Domain Adaptation
Links: Paper | PDF | Code (official)

Abstract

Domain adaptive semantic segmentation attempts to make satisfactory dense predictions on an unlabeled target domain by utilizing the supervised model trained on a labeled source domain. In this work, we propose Semantic-Guided Pixel Contrast (SePiCo), a novel one-stage adaptation framework that highlights the semantic concepts of individual pixels to promote learning of class-discriminative and class-balanced pixel representations across domains, eventually boosting the performance of self-training methods. Specifically, to explore proper semantic concepts, we first investigate a centroid-aware pixel contrast that employs the category centroids of the entire source domain or a single source image to guide the learning of discriminative features. Considering the possible lack of category diversity in semantic concepts, we then blaze a trail of distributional perspective to involve a sufficient quantity of instances, namely distribution-aware pixel contrast, in which we approximate the true distribution of each semantic category from the statistics of labeled source data. Moreover, such an optimization objective can derive a closed-form upper bound by implicitly involving an infinite number of (dis)similar pairs, making it computationally efficient. Extensive experiments show that SePiCo not only helps stabilize training but also yields discriminative representations, making significant progress on both synthetic-to-real and daytime-to-nighttime adaptation scenarios.
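The centroid-aware pixel contrast described above can be sketched in a few lines: class centroids are computed from labeled source pixels, and each pixel embedding is pulled toward its own class centroid and pushed away from the others via an InfoNCE-style objective. This is a minimal illustration under assumed shapes and an assumed temperature value, not the authors' exact implementation.

```python
# Minimal sketch of centroid-aware pixel contrast (illustrative only;
# feature dimensions, temperature, and function names are assumptions,
# not the official SePiCo implementation).
import torch
import torch.nn.functional as F

def class_centroids(features, labels, num_classes):
    """Mean embedding per semantic class.
    features: (N, D) pixel embeddings; labels: (N,) class ids."""
    centroids = torch.zeros(num_classes, features.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centroids[c] = features[mask].mean(dim=0)
    return F.normalize(centroids, dim=1)

def centroid_contrast_loss(features, labels, centroids, tau=0.1):
    """InfoNCE-style loss: each pixel's similarity to its own class
    centroid is maximized relative to the other class centroids."""
    logits = F.normalize(features, dim=1) @ centroids.t() / tau  # (N, C)
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings standing in for network features.
feats = torch.randn(8, 4)            # 8 pixels, 4-dim embeddings
labs = torch.randint(0, 3, (8,))     # 3 semantic classes
cents = class_centroids(feats, labs, num_classes=3)
loss = centroid_contrast_loss(feats, labs, cents)
```

The distribution-aware variant in the paper replaces the single centroid per class with an estimated per-class feature distribution, whose expected contrastive loss admits the closed-form upper bound mentioned in the abstract.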

Results

Task                  | Dataset               | Metric            | Value | Model
----------------------|-----------------------|-------------------|-------|------------------------------
Domain Adaptation     | GTAV-to-Cityscapes    | mIoU              | 70.3  | SePiCo
Domain Adaptation     | GTAV-to-Cityscapes    | mIoU              | 61.0  | SePiCo (DeepLabv2)
Domain Adaptation     | SYNTHIA-to-Cityscapes | mIoU (13 classes) | 71.4  | SePiCo
Domain Adaptation     | SYNTHIA-to-Cityscapes | mIoU (13 classes) | 66.5  | SePiCo (DeepLabv2 ResNet-101)
Domain Adaptation     | SYNTHIA-to-Cityscapes | mIoU (16 classes) | 64.3  | SePiCo
Domain Adaptation     | SYNTHIA-to-Cityscapes | mIoU (16 classes) | 58.1  | SePiCo (DeepLabv2 ResNet-101)
Semantic Segmentation | Dark Zurich           | mIoU              | 54.2  | SePiCo
Semantic Segmentation | Dark Zurich           | mIoU              | 45.4  | SePiCo (DeepLabv2 ResNet-101)

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
U-RWKV: Lightweight medical image segmentation with direction-adaptive RWKV (2025-07-15)