
Diverse Image-to-Image Translation via Disentangled Representations

Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Kumar Singh, Ming-Hsuan Yang

2018-08-02 · ECCV 2018
Tasks: Multimodal Unsupervised Image-To-Image Translation · Perceptual Distance · Attribute · Synthetic-to-Real Translation · Translation · Image-to-Image Translation · Domain Adaptation
Links: Paper · PDF · Code (official, plus community implementations)

Abstract

Image-to-image translation aims to learn the mapping between two visual domains. There are two main challenges for many applications: 1) the lack of aligned training pairs and 2) multiple possible outputs from a single input image. In this work, we present an approach based on disentangled representation for producing diverse outputs without paired training images. To achieve diversity, we propose to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space. Our model takes the encoded content features extracted from a given input and the attribute vectors sampled from the attribute space to produce diverse outputs at test time. To handle unpaired training data, we introduce a novel cross-cycle consistency loss based on disentangled representations. Qualitative results show that our model can generate diverse and realistic images on a wide range of tasks without paired training data. For quantitative comparisons, we measure realism with a user study and diversity with a perceptual distance metric. We apply the proposed model to domain adaptation and show competitive performance when compared to the state-of-the-art on the MNIST-M and the LineMod datasets.
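The two-space design and the cross-cycle consistency loss described above can be illustrated in a short sketch. The following is a minimal PyTorch illustration, not the authors' reference implementation (the official code is linked above): the module names (ContentEncoder, AttributeEncoder, Generator), the layer sizes, and the 8-dimensional attribute space are all illustrative assumptions, and the full method additionally uses adversarial and other losses that are omitted here.

```python
# Minimal sketch of the disentangled design, under assumed layer sizes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentEncoder(nn.Module):
    """Maps an image to the shared, domain-invariant content space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)  # spatial content feature map

class AttributeEncoder(nn.Module):
    """Maps an image to a low-dimensional, domain-specific attribute vector."""
    def __init__(self, attr_dim=8):  # attr_dim is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, attr_dim),
        )
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Decodes a (content, attribute) pair into an image."""
    def __init__(self, attr_dim=8):
        super().__init__()
        self.fc = nn.Linear(attr_dim, 128)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, content, attr):
        # Inject the attribute by broadcasting it over the content map.
        bias = self.fc(attr).view(-1, 128, 1, 1)
        return self.net(content + bias)

def cross_cycle_loss(x, y, Ec_x, Ec_y, Ea_x, Ea_y, Gx, Gy):
    """Translate with swapped attributes, then swap back; the second
    translation should reconstruct the original unpaired inputs."""
    cx, ax = Ec_x(x), Ea_x(x)
    cy, ay = Ec_y(y), Ea_y(y)
    u = Gx(cy, ax)  # domain-X image: content of y, attribute of x
    v = Gy(cx, ay)  # domain-Y image: content of x, attribute of y
    cu, au = Ec_x(u), Ea_x(u)
    cv, av = Ec_y(v), Ea_y(v)
    x_hat, y_hat = Gx(cv, au), Gy(cu, av)
    return F.l1_loss(x_hat, x) + F.l1_loss(y_hat, y)

# Unpaired batches from the two domains (random stand-ins here).
x, y = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
Ec_x, Ec_y = ContentEncoder(), ContentEncoder()
Ea_x, Ea_y = AttributeEncoder(), AttributeEncoder()
Gx, Gy = Generator(), Generator()
loss = cross_cycle_loss(x, y, Ec_x, Ec_y, Ea_x, Ea_y, Gx, Gy)
```

For the diversity measurement, the sketch below assumes the lpips package as the perceptual distance metric and a hypothetical generate(x, attr) handle standing in for a trained translator; the abstract only states that a perceptual distance metric is used, so treat both as assumptions.

```python
# Hedged sketch: mean pairwise perceptual distance between outputs
# produced from one input with different sampled attribute vectors.
import itertools
import torch
import lpips  # pip install lpips

metric = lpips.LPIPS(net='alex')  # learned perceptual distance

def diversity_score(generate, x, n_samples=10, attr_dim=8):
    """Higher mean pairwise LPIPS means more diverse outputs.
    `generate` is a hypothetical handle to a trained translator."""
    with torch.no_grad():
        outs = [generate(x, torch.randn(x.size(0), attr_dim))
                for _ in range(n_samples)]
        dists = [metric(a, b).mean()
                 for a, b in itertools.combinations(outs, 2)]
    return torch.stack(dists).mean()
```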

Results

Task                       | Dataset                   | Metric | Value | Model
Image-to-Image Translation | GTAV-to-Cityscapes Labels | mIoU   | 43.2  | Domain adaptation
Image-to-Image Translation | CelebA-HQ                 | FID    | 52.1  | DRIT
Image-to-Image Translation | AFHQ                      | FID    | 95.6  | DRIT
Image Generation           | GTAV-to-Cityscapes Labels | mIoU   | 43.2  | Domain adaptation
Image Generation           | CelebA-HQ                 | FID    | 52.1  | DRIT
Image Generation           | AFHQ                      | FID    | 95.6  | DRIT
1 Image, 2*2 Stitching     | GTAV-to-Cityscapes Labels | mIoU   | 43.2  | Domain adaptation
1 Image, 2*2 Stitching     | CelebA-HQ                 | FID    | 52.1  | DRIT
1 Image, 2*2 Stitching     | AFHQ                      | FID    | 95.6  | DRIT

Related Papers

A Translation of Probabilistic Event Calculus into Markov Decision Processes (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Non-Adaptive Adversarial Face Generation (2025-07-16)
Attributes Shape the Embedding Space of Face Recognition Models (2025-07-15)
COLIBRI Fuzzy Model: Color Linguistic-Based Representation and Interpretation (2025-07-15)
Function-to-Style Guidance of LLMs for Code Translation (2025-07-15)
Domain Borders Are There to Be Crossed With Federated Few-Shot Adaptation (2025-07-14)