Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Controllable Text-to-Image Generation

Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, Philip H. S. Torr

Published: 2019-09-16 · NeurIPS 2019
Tasks: Text-to-Image Generation, Image Generation
Links: Paper · PDF · Code (official) · Code

Abstract

In this paper, we propose a novel controllable text-to-image generative adversarial network (ControlGAN), which can effectively synthesise high-quality images and also control parts of the image generation according to natural language descriptions. To achieve this, we introduce a word-level spatial and channel-wise attention-driven generator that can disentangle different visual attributes, and allow the model to focus on generating and manipulating subregions corresponding to the most relevant words. Also, a word-level discriminator is proposed to provide fine-grained supervisory feedback by correlating words with image regions, facilitating training an effective generator which is able to manipulate specific visual attributes without affecting the generation of other content. Furthermore, perceptual loss is adopted to reduce the randomness involved in the image generation, and to encourage the generator to manipulate specific attributes required in the modified text. Extensive experiments on benchmark datasets demonstrate that our method outperforms existing state of the art, and is able to effectively manipulate synthetic images using natural language descriptions. Code is available at https://github.com/mrlibw/ControlGAN.
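The word-level spatial attention that the abstract describes can be sketched as follows. This is a minimal NumPy illustration under our own assumptions, not the paper's implementation: the function name and single-image shapes are ours, and we assume the word embeddings have already been projected to the image channel dimension (the paper uses learned projections and adds a separate channel-wise attention branch).

```python
import numpy as np

def word_level_attention(img_feat, word_feat):
    """Sketch of word-level spatial attention (one branch of ControlGAN's
    attention-driven generator; shapes and name are illustrative).

    img_feat:  (C, H, W) image sub-region features
    word_feat: (C, L)    embeddings for L words, already projected to C dims
    Returns a (C, H, W) word-context map: for each spatial region, a
    weighted sum of word vectors, weighted by region-word similarity.
    """
    C, H, W = img_feat.shape
    regions = img_feat.reshape(C, H * W)                  # (C, N) regions
    scores = word_feat.T @ regions                        # (L, N) similarities
    scores -= scores.max(axis=0, keepdims=True)           # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)
    context = word_feat @ attn                            # (C, N)
    return context.reshape(C, H, W)
```

Each spatial location thus attends over words rather than the reverse, which is what lets the generator focus sub-region synthesis on the most relevant words.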

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Generation | CUB | Inception score | 4.58 | Attention-driven Generator (perceptual loss) |
| Image Generation | Multi-Modal-CelebA-HQ | Acc | 14.6 | ControlGAN |
| Image Generation | Multi-Modal-CelebA-HQ | FID | 116.32 | ControlGAN |
| Image Generation | Multi-Modal-CelebA-HQ | LPIPS | 0.522 | ControlGAN |
| Image Generation | Multi-Modal-CelebA-HQ | Real | 13.1 | ControlGAN |
| Text-to-Image Generation | CUB | Inception score | 4.58 | Attention-driven Generator (perceptual loss) |
| Text-to-Image Generation | Multi-Modal-CelebA-HQ | Acc | 14.6 | ControlGAN |
| Text-to-Image Generation | Multi-Modal-CelebA-HQ | FID | 116.32 | ControlGAN |
| Text-to-Image Generation | Multi-Modal-CelebA-HQ | LPIPS | 0.522 | ControlGAN |
| Text-to-Image Generation | Multi-Modal-CelebA-HQ | Real | 13.1 | ControlGAN |
| 10-shot image generation | Multi-Modal-CelebA-HQ | Acc | 14.6 | ControlGAN |
| 10-shot image generation | Multi-Modal-CelebA-HQ | FID | 116.32 | ControlGAN |
| 10-shot image generation | Multi-Modal-CelebA-HQ | LPIPS | 0.522 | ControlGAN |
| 10-shot image generation | Multi-Modal-CelebA-HQ | Real | 13.1 | ControlGAN |
| 10-shot image generation | CUB | Inception score | 4.58 | Attention-driven Generator (perceptual loss) |
| 1 Image, 2*2 Stitchi | Multi-Modal-CelebA-HQ | Acc | 14.6 | ControlGAN |
| 1 Image, 2*2 Stitchi | Multi-Modal-CelebA-HQ | FID | 116.32 | ControlGAN |
| 1 Image, 2*2 Stitchi | Multi-Modal-CelebA-HQ | LPIPS | 0.522 | ControlGAN |
| 1 Image, 2*2 Stitchi | Multi-Modal-CelebA-HQ | Real | 13.1 | ControlGAN |
| 1 Image, 2*2 Stitchi | CUB | Inception score | 4.58 | Attention-driven Generator (perceptual loss) |
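For reference, the Inception score reported on CUB is defined as IS = exp(E_x[KL(p(y|x) ‖ p(y))]), computed from classifier probabilities over generated images. A minimal sketch (our own illustrative function, not the evaluation code used in the paper):

```python
import numpy as np

def inception_score(probs):
    """probs: (N, K) softmax class probabilities from an image classifier
    over N generated images. Higher is better: it rewards confident
    per-image predictions (quality) and a spread-out marginal (diversity).
    """
    p_y = probs.mean(axis=0, keepdims=True)           # marginal p(y), (1, K)
    eps = 1e-12                                       # avoid log(0)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))                   # exp of mean KL
```

Uniform predictions give a score of 1 (the minimum); perfectly confident predictions spread evenly over K classes give a score of K (the maximum).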

Related Papers

- fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting (2025-07-17)
- Synthesizing Reality: Leveraging the Generative AI-Powered Platform Midjourney for Construction Worker Detection (2025-07-17)
- FashionPose: Text to Pose to Relight Image Generation for Personalized Fashion Visualization (2025-07-17)
- A Distributed Generative AI Approach for Heterogeneous Multi-Domain Environments under Data Sharing Constraints (2025-07-17)
- Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)
- FADE: Adversarial Concept Erasure in Flow Models (2025-07-16)
- CharaConsist: Fine-Grained Consistent Character Generation (2025-07-15)
- CATVis: Context-Aware Thought Visualization (2025-07-15)