Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Context-Aware Mixup for Domain Adaptive Semantic Segmentation

Qianyu Zhou, Zhengyang Feng, Qiqi Gu, Jiangmiao Pang, Guangliang Cheng, Xuequan Lu, Jianping Shi, Lizhuang Ma

Published: 2021-08-08
Tasks: Semantic Segmentation · Synthetic-to-Real Translation · Unsupervised Domain Adaptation · Image-to-Image Translation · Domain Adaptation
Links: Paper · PDF · Code (official)

Abstract

Unsupervised domain adaptation (UDA) aims to adapt a model trained on a labeled source domain to an unlabeled target domain. Existing UDA-based semantic segmentation approaches typically reduce domain shifts at the pixel, feature, and output levels. However, almost all of them largely neglect contextual dependency, which is generally shared across different domains, leading to suboptimal performance. In this paper, we propose a novel Context-Aware Mixup (CAMix) framework for domain adaptive semantic segmentation, which exploits this important clue of context dependency as explicit prior knowledge, in a fully end-to-end trainable manner, to enhance adaptability toward the target domain. First, we present a contextual mask generation strategy that leverages accumulated spatial distributions and prior contextual relationships. The generated contextual mask is critical in this work and guides the context-aware domain mixup at three different levels. In addition, given this contextual knowledge, we introduce a significance-reweighted consistency loss that penalizes inconsistency between the mixed student prediction and the mixed teacher prediction, which alleviates negative transfer during adaptation, e.g., early performance degradation. Extensive experiments and analysis demonstrate the effectiveness of our method against state-of-the-art approaches on widely used UDA benchmarks.
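The two core ideas in the abstract — mask-guided domain mixup and a significance-reweighted consistency loss — can be sketched roughly as follows. This is a minimal NumPy illustration only: the function names, the binary-mask mixing, and the squared-error form of the loss are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def contextual_mixup(source_img, target_img, context_mask):
    """Mix a source-domain and a target-domain image under a binary
    contextual mask (H, W): where the mask is 1, source content is
    pasted onto the target image. In CAMix this mask is generated from
    accumulated spatial distributions and contextual priors; here it is
    simply taken as given."""
    m = context_mask[..., None]  # broadcast mask over the channel axis
    return m * source_img + (1 - m) * target_img

def reweighted_consistency_loss(student_pred, teacher_pred, significance):
    """Consistency loss between the mixed student prediction and the
    mixed teacher prediction, reweighted per element by a significance
    map so that unreliable regions contribute less (the intuition
    behind alleviating negative transfer). Squared error is an
    illustrative choice."""
    per_pixel = (student_pred - teacher_pred) ** 2
    return float(np.mean(significance * per_pixel))
```

In a mean-teacher-style training loop, the student would be trained on `contextual_mixup`-ed inputs while the teacher (an EMA copy) produces the mixed pseudo-labels that the loss compares against.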

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image-to-Image Translation | SYNTHIA-to-Cityscapes | mIoU (13 classes) | 69.2 | CAMix (w/ DAFormer) |
| Image-to-Image Translation | SYNTHIA-to-Cityscapes | mIoU (13 classes) | 59.7 | CAMix (w/ DeepLabv2 ResNet-101) |
| Image-to-Image Translation | GTAV-to-Cityscapes Labels | mIoU | 70 | CAMix (w/ DAFormer) |
| Image-to-Image Translation | GTAV-to-Cityscapes Labels | mIoU | 55.2 | CAMix (w/ DeepLabv2 ResNet-101) |
| Domain Adaptation | GTAV-to-Cityscapes Labels | mIoU | 70 | CAMix (w/ DAFormer) |
| Domain Adaptation | GTAV-to-Cityscapes Labels | mIoU | 55.2 | CAMix (w/ DeepLabv2 ResNet-101) |
| Image Generation | SYNTHIA-to-Cityscapes | mIoU (13 classes) | 69.2 | CAMix (w/ DAFormer) |
| Image Generation | SYNTHIA-to-Cityscapes | mIoU (13 classes) | 59.7 | CAMix (w/ DeepLabv2 ResNet-101) |
| Image Generation | GTAV-to-Cityscapes Labels | mIoU | 70 | CAMix (w/ DAFormer) |
| Image Generation | GTAV-to-Cityscapes Labels | mIoU | 55.2 | CAMix (w/ DeepLabv2 ResNet-101) |
| Unsupervised Domain Adaptation | GTAV-to-Cityscapes Labels | mIoU | 70 | CAMix (w/ DAFormer) |
| Unsupervised Domain Adaptation | GTAV-to-Cityscapes Labels | mIoU | 55.2 | CAMix (w/ DeepLabv2 ResNet-101) |
| 1 Image, 2*2 Stitching | SYNTHIA-to-Cityscapes | mIoU (13 classes) | 69.2 | CAMix (w/ DAFormer) |
| 1 Image, 2*2 Stitching | SYNTHIA-to-Cityscapes | mIoU (13 classes) | 59.7 | CAMix (w/ DeepLabv2 ResNet-101) |
| 1 Image, 2*2 Stitching | GTAV-to-Cityscapes Labels | mIoU | 70 | CAMix (w/ DAFormer) |
| 1 Image, 2*2 Stitching | GTAV-to-Cityscapes Labels | mIoU | 55.2 | CAMix (w/ DeepLabv2 ResNet-101) |

Related Papers

- SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
- DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
- SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
- Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
- A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
- SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
- Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
- U-RWKV: Lightweight medical image segmentation with direction-adaptive RWKV (2025-07-15)