Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Recurrent Affine Transformation for Text-to-image Synthesis

Senmao Ye, Fei Liu, Mingkui Tan

2022-04-22 · Text-to-Image Generation
Paper · PDF · Code · Code (official)

Abstract

Text-to-image synthesis aims to generate natural images conditioned on text descriptions. The main difficulty of this task lies in effectively fusing text information into the image synthesis process. Existing methods usually fuse suitable text information into the synthesis process adaptively through multiple isolated fusion blocks (e.g., Conditional Batch Normalization and Instance Normalization). However, isolated fusion blocks not only conflict with each other but also increase the difficulty of training (see the first page of the supplementary material). To address these issues, we propose a Recurrent Affine Transformation (RAT) for Generative Adversarial Networks that connects all the fusion blocks with a recurrent neural network to model their long-term dependency. In addition, to improve semantic consistency between texts and synthesized images, we incorporate a spatial attention model in the discriminator. Being aware of matching image regions, text descriptions supervise the generator to synthesize more relevant image contents. Extensive experiments on the CUB, Oxford-102 and COCO datasets demonstrate the superiority of the proposed model in comparison to state-of-the-art models. Code: https://github.com/senmaoy/Recurrent-Affine-Transformation-for-Text-to-image-Synthesis.git
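
The core idea of the abstract can be illustrated with a minimal PyTorch-style sketch of one RAT fusion block: a recurrent cell carries a hidden state across all fusion blocks in the generator, and that hidden state predicts the channel-wise scale and shift applied to the image features. The class and parameter names (RATBlock, hidden_dim, text_dim) are illustrative assumptions, not taken from the official repository, and a GRUCell is used here only as a stand-in for the paper's recurrent unit.

```python
import torch
import torch.nn as nn

class RATBlock(nn.Module):
    """One recurrent affine fusion block (illustrative sketch).

    A shared recurrent hidden state, updated from the text embedding,
    predicts the affine parameters (gamma, beta) for this block, so that
    consecutive fusion blocks are linked instead of acting in isolation.
    """
    def __init__(self, text_dim: int, hidden_dim: int, num_channels: int):
        super().__init__()
        self.rnn = nn.GRUCell(text_dim, hidden_dim)       # stand-in recurrent cell
        self.to_gamma = nn.Linear(hidden_dim, num_channels)  # channel-wise scale
        self.to_beta = nn.Linear(hidden_dim, num_channels)   # channel-wise shift

    def forward(self, feat, text_emb, h_prev):
        # feat: (B, C, H, W) image features; text_emb: (B, text_dim)
        # h_prev: hidden state passed on from the previous fusion block
        h = self.rnn(text_emb, h_prev)
        gamma = self.to_gamma(h).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_beta(h).unsqueeze(-1).unsqueeze(-1)
        return feat * (1 + gamma) + beta, h  # modulated features + updated state
```

In a generator built from several upsampling stages, each stage's fusion block would receive the hidden state returned by the previous one, so the text conditioning applied at different resolutions is coordinated rather than isolated, which is the dependency the recurrent connection is meant to model.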

Results

Task | Dataset | Metric | Value | Model
Image Generation | COCO (Common Objects in Context) | FID | 14.6 | RAT-GAN
Image Generation | Oxford 102 Flowers | FID | 16.04 | RAT-GAN
Image Generation | Oxford 102 Flowers | Inception score | 4.09 | RAT-GAN
Image Generation | CUB | FID | 10.21 | RAT-GAN
Image Generation | CUB | Inception score | 5.36 | RAT-GAN
Text-to-Image Generation | COCO (Common Objects in Context) | FID | 14.6 | RAT-GAN
Text-to-Image Generation | Oxford 102 Flowers | FID | 16.04 | RAT-GAN
Text-to-Image Generation | Oxford 102 Flowers | Inception score | 4.09 | RAT-GAN
Text-to-Image Generation | CUB | FID | 10.21 | RAT-GAN
Text-to-Image Generation | CUB | Inception score | 5.36 | RAT-GAN
10-shot image generation | COCO (Common Objects in Context) | FID | 14.6 | RAT-GAN
10-shot image generation | Oxford 102 Flowers | FID | 16.04 | RAT-GAN
10-shot image generation | Oxford 102 Flowers | Inception score | 4.09 | RAT-GAN
10-shot image generation | CUB | FID | 10.21 | RAT-GAN
10-shot image generation | CUB | Inception score | 5.36 | RAT-GAN
1 Image, 2*2 Stitching | COCO (Common Objects in Context) | FID | 14.6 | RAT-GAN
1 Image, 2*2 Stitching | Oxford 102 Flowers | FID | 16.04 | RAT-GAN
1 Image, 2*2 Stitching | Oxford 102 Flowers | Inception score | 4.09 | RAT-GAN
1 Image, 2*2 Stitching | CUB | FID | 10.21 | RAT-GAN
1 Image, 2*2 Stitching | CUB | Inception score | 5.36 | RAT-GAN

Related Papers

CharaConsist: Fine-Grained Consistent Character Generation (2025-07-15)
Evaluating Attribute Confusion in Fashion Text-to-Image Generation (2025-07-09)
NeoBabel: A Multilingual Open Tower for Visual Generation (2025-07-08)
DC-AR: Efficient Masked Autoregressive Image Generation with Deep Compression Hybrid Tokenizer (2025-07-07)
UniGlyph: Unified Segmentation-Conditioned Diffusion for Precise Visual Text Synthesis (2025-07-01)
Ovis-U1 Technical Report (2025-06-29)
Rethink Sparse Signals for Pose-guided Text-to-image Generation (2025-06-26)
XVerse: Consistent Multi-Subject Control of Identity and Semantic Attributes via DiT Modulation (2025-06-26)