
FCNs in the Wild: Pixel-level Adversarial and Constraint-based Adaptation

Judy Hoffman, Dequan Wang, Fisher Yu, Trevor Darrell

2016-12-08 · Semantic Segmentation · Synthetic-to-Real Translation · Image-to-Image Translation

Abstract

Fully convolutional models for dense prediction have proven successful for a wide range of visual tasks. Such models perform well in a supervised setting, but performance can be surprisingly poor under domain shifts that appear mild to a human observer. For example, training on one city and testing on another in a different geographic region and/or weather condition may result in significantly degraded performance due to pixel-level distribution shift. In this paper, we introduce the first domain adaptive semantic segmentation method, proposing an unsupervised adversarial approach to pixel prediction problems. Our method consists of both global and category specific adaptation techniques. Global domain alignment is performed using a novel semantic segmentation network with fully convolutional domain adversarial learning. This initially adapted space then enables category specific adaptation through a generalization of constrained weak learning, with explicit transfer of the spatial layout from the source to the target domains. Our approach outperforms baselines across different settings on multiple large-scale datasets, including adapting across various real city environments, different synthetic sub-domains, from simulated to real environments, and on a novel large-scale dash-cam dataset.
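The global alignment described above trains a domain discriminator adversarially against the segmentation network's features, so that source and target feature distributions become indistinguishable. As a rough illustration only (not the authors' implementation), the sketch below shows the core gradient-reversal dynamic on pooled features with a toy logistic discriminator: the discriminator descends its domain-classification loss, while the features take a reversed (ascending) step scaled by an assumed trade-off weight `lam`.

```python
import numpy as np

def gradient_reversal_step(features, domain_labels, W, lr=0.1, lam=1.0):
    """One toy step of domain-adversarial feature alignment.

    features: (N, D) pooled features from source and target images
    domain_labels: (N,) array, 0.0 = source domain, 1.0 = target domain
    W: (D,) weights of a logistic domain discriminator
    lam: assumed adversarial trade-off weight (hypothetical hyperparameter)
    """
    logits = features @ W
    probs = 1.0 / (1.0 + np.exp(-logits))        # discriminator's P(target)
    err = probs - domain_labels                  # dLoss/dlogits for log-loss
    grad_W = features.T @ err / len(err)         # gradient w.r.t. discriminator
    grad_feat = np.outer(err, W) / len(err)      # gradient w.r.t. features
    W_new = W - lr * grad_W                      # discriminator descends
    feat_new = features + lam * lr * grad_feat   # features ascend (reversed sign)
    return feat_new, W_new
```

In a full pipeline this reversed gradient would flow back into the FCN's convolutional layers rather than update the features directly; the sketch only makes the sign flip at the heart of adversarial alignment explicit.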

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image-to-Image Translation | SYNTHIA Fall-to-Winter | mIoU | 59.6 | FCNs in the wild |
| Image-to-Image Translation | SYNTHIA-to-Cityscapes | mIoU (13 classes) | 20.2 | FCNs in the wild |
| Image-to-Image Translation | GTAV-to-Cityscapes Labels | mIoU | 27.1 | FCNs in the wild |

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
U-RWKV: Lightweight medical image segmentation with direction-adaptive RWKV (2025-07-15)