Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

Jun-Yan Zhu, Taesung Park, Phillip Isola, Alexei A. Efros

2017-03-30 · ICCV 2017

Tasks: Multimodal Unsupervised Image-to-Image Translation, Style Transfer, Unsupervised Image-to-Image Translation, Image Colorization, Translation, Image-to-Image Translation

Links: Paper · PDF · Code (official implementation, plus many community implementations)

Abstract

Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.
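The cycle-consistency idea in the abstract can be illustrated with a minimal numeric sketch. This is not the paper's implementation (CycleGAN uses convolutional generators trained with an adversarial loss); here `G` and `F` are toy affine stand-ins for the mappings $G: X \rightarrow Y$ and $F: Y \rightarrow X$, chosen only so the L1 cycle loss $\|F(G(x)) - x\|_1 + \|G(F(y)) - y\|_1$ is easy to inspect:

```python
import numpy as np

def G(x):
    """Toy forward mapping X -> Y (stands in for a trained generator)."""
    return 2.0 * x + 1.0

def F(y):
    """Toy inverse mapping Y -> X (here, the exact inverse of G)."""
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x, y):
    """L1 cycle loss from the paper: ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1."""
    forward_cycle = np.mean(np.abs(F(G(x)) - x))   # X -> Y -> X reconstruction
    backward_cycle = np.mean(np.abs(G(F(y)) - y))  # Y -> X -> Y reconstruction
    return forward_cycle + backward_cycle

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 5.0])
# Because F exactly inverts G here, the cycle loss is zero; during training,
# minimizing this term pushes the learned F and G toward mutual consistency.
print(cycle_consistency_loss(x, y))  # 0.0
```

In the actual model this loss is added to the adversarial losses on both domains, constraining the otherwise under-constrained unpaired mapping.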

Results

Task | Dataset | Metric | Value | Model
--- | --- | --- | --- | ---
Image-to-Image Translation | vangogh2photo | Frechet Inception Distance | 163.4 | CycleGAN
Image-to-Image Translation | zebra2horse | Frechet Inception Distance | 110.5 | CycleGAN
Image-to-Image Translation | photo2vangogh | Frechet Inception Distance | 151.4 | CycleGAN
Image-to-Image Translation | horse2zebra | Frechet Inception Distance | 89.7 | CycleGAN
Image-to-Image Translation | Cityscapes Labels-to-Photo | Class IOU | 0.11 | CycleGAN
Image-to-Image Translation | Cityscapes Photo-to-Labels | Class IOU | 0.16 | CycleGAN
Image-to-Image Translation | Freiburg Forest Dataset | PSNR | 18.57 | cycGAN
Image-to-Image Translation | EPFL NIR-VIS | PSNR | 17.38 | cycGAN
Image-to-Image Translation | Edge-to-Shoes | Diversity | 0.01 | CycleGAN
Image-to-Image Translation | Cats-and-Dogs | CIS | 0.076 | CycleGAN
Image-to-Image Translation | Cats-and-Dogs | IS | 0.813 | CycleGAN
Image-to-Image Translation | Edge-to-Handbags | Diversity | 0.012 | CycleGAN
Image Generation | vangogh2photo | Frechet Inception Distance | 163.4 | CycleGAN
Image Generation | zebra2horse | Frechet Inception Distance | 110.5 | CycleGAN
Image Generation | photo2vangogh | Frechet Inception Distance | 151.4 | CycleGAN
Image Generation | horse2zebra | Frechet Inception Distance | 89.7 | CycleGAN
Image Generation | Cityscapes Labels-to-Photo | Class IOU | 0.11 | CycleGAN
Image Generation | Cityscapes Photo-to-Labels | Class IOU | 0.16 | CycleGAN
Image Generation | Freiburg Forest Dataset | PSNR | 18.57 | cycGAN
Image Generation | EPFL NIR-VIS | PSNR | 17.38 | cycGAN
Image Generation | Edge-to-Shoes | Diversity | 0.01 | CycleGAN
Image Generation | Cats-and-Dogs | CIS | 0.076 | CycleGAN
Image Generation | Cats-and-Dogs | IS | 0.813 | CycleGAN
Image Generation | Edge-to-Handbags | Diversity | 0.012 | CycleGAN
Unsupervised Image-To-Image Translation | Freiburg Forest Dataset | PSNR | 18.57 | cycGAN
Image Colorization | NIR2RGB VCIP Challenge Dataset | PSNR | 19.59 | CycleGAN
1 Image, 2*2 Stitching | vangogh2photo | Frechet Inception Distance | 163.4 | CycleGAN
1 Image, 2*2 Stitching | zebra2horse | Frechet Inception Distance | 110.5 | CycleGAN
1 Image, 2*2 Stitching | photo2vangogh | Frechet Inception Distance | 151.4 | CycleGAN
1 Image, 2*2 Stitching | horse2zebra | Frechet Inception Distance | 89.7 | CycleGAN
1 Image, 2*2 Stitching | Cityscapes Labels-to-Photo | Class IOU | 0.11 | CycleGAN
1 Image, 2*2 Stitching | Cityscapes Photo-to-Labels | Class IOU | 0.16 | CycleGAN
1 Image, 2*2 Stitching | Freiburg Forest Dataset | PSNR | 18.57 | cycGAN
1 Image, 2*2 Stitching | EPFL NIR-VIS | PSNR | 17.38 | cycGAN
1 Image, 2*2 Stitching | Edge-to-Shoes | Diversity | 0.01 | CycleGAN
1 Image, 2*2 Stitching | Cats-and-Dogs | CIS | 0.076 | CycleGAN
1 Image, 2*2 Stitching | Cats-and-Dogs | IS | 0.813 | CycleGAN
1 Image, 2*2 Stitching | Edge-to-Handbags | Diversity | 0.012 | CycleGAN

Related Papers

- A Translation of Probabilistic Event Calculus into Markov Decision Processes (2025-07-17)
- Function-to-Style Guidance of LLMs for Code Translation (2025-07-15)
- Transferring Styles for Reduced Texture Bias and Improved Robustness in Semantic Segmentation Networks (2025-07-14)
- Speak2Sign3D: A Multi-modal Pipeline for English Speech to American Sign Language Animation (2025-07-09)
- Pun Intended: Multi-Agent Translation of Wordplay with Contrastive Learning and Phonetic-Semantic Embeddings (2025-07-09)
- Unconditional Diffusion for Generative Sequential Recommendation (2025-07-08)
- GRAFT: A Graph-based Flow-aware Agentic Framework for Document-level Machine Translation (2025-07-04)
- AnyI2V: Animating Any Conditional Image with Motion Control (2025-07-03)