Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ReCo: Region-Controlled Text-to-Image Generation

Zhengyuan Yang, JianFeng Wang, Zhe Gan, Linjie Li, Kevin Lin, Chenfei Wu, Nan Duan, Zicheng Liu, Ce Liu, Michael Zeng, Lijuan Wang

2022-11-23 · CVPR 2023 · Text-to-Image Generation · Conditional Text-to-Image Synthesis · Layout-to-Image Generation · Image Generation

Paper · PDF

Abstract

Recently, large-scale text-to-image (T2I) models have shown impressive performance in generating high-fidelity images, but with limited controllability, e.g., precisely specifying the content of a specific region with a free-form text description. In this paper, we propose an effective technique for such regional control in T2I generation. We augment the T2I model's inputs with an extra set of position tokens, which represent quantized spatial coordinates. Each region is specified by four position tokens, representing the top-left and bottom-right corners, followed by an open-ended natural-language regional description. We then fine-tune a pre-trained T2I model with this new input interface. Our model, dubbed ReCo (Region-Controlled T2I), enables region control for arbitrary objects described by open-ended regional texts rather than by object labels from a constrained category set. Empirically, ReCo achieves better image quality than a T2I model strengthened by positional words (FID: 8.82 -> 7.36, SceneFID: 15.54 -> 6.51 on COCO), together with more accurately placed objects, amounting to a 20.40% improvement in region classification accuracy on COCO. Furthermore, we demonstrate that ReCo can better control object count, spatial relationships, and region attributes such as color and size with free-form regional descriptions. Human evaluation on PaintSkill shows that ReCo is +19.28% and +17.21% more accurate than the T2I model in generating images with the correct object count and spatial relationships, respectively.
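The input interface described in the abstract — four quantized position tokens per region, followed by a free-form regional description — can be sketched in a few lines. This is a minimal illustration, not the authors' code: the token format (`<123>`), the bin count, and the function names are assumptions for clarity.

```python
def quantize(coord, num_bins=1000):
    """Map a normalized coordinate in [0, 1] to a discrete bin index.
    The bin count (1000 here) is an assumed value for illustration."""
    return min(int(coord * num_bins), num_bins - 1)

def build_reco_prompt(caption, regions, num_bins=1000):
    """Build a ReCo-style input sequence: the image caption followed, for
    each region, by four position tokens (top-left x/y, bottom-right x/y)
    and an open-ended natural-language description of that region.
    `regions` is a list of ((x1, y1, x2, y2), text) pairs with normalized
    box coordinates."""
    parts = [caption]
    for (x1, y1, x2, y2), text in regions:
        pos_tokens = [f"<{quantize(c, num_bins)}>" for c in (x1, y1, x2, y2)]
        parts.append(" ".join(pos_tokens) + " " + text)
    return " ".join(parts)

# Example: a caption plus one region with a free-form description.
prompt = build_reco_prompt(
    "a living room",
    [((0.1, 0.2, 0.5, 0.8), "a red leather sofa")],
)
# -> "a living room <100> <200> <500> <800> a red leather sofa"
```

Because the regional description is arbitrary text rather than a label from a fixed category set, attributes like color and size (per the abstract's "red leather sofa"-style control) ride along in the same sequence with no extra machinery.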

Results

Task | Dataset | Metric | Value | Model
Image Generation | COCO-MIG | instance success rate | 0.55 | ReCo
Image Generation | COCO-MIG | mIoU | 0.49 | ReCo
Image Generation | LayoutBench-COCO - Size | AP | 24.1 | ReCo
Image Generation | LayoutBench | AP | 7.6 | ReCo
Image Generation | LayoutBench-COCO - Combination | AP | 18.7 | ReCo
Image Generation | LayoutBench-COCO - Number | AP | 30.9 | ReCo
Image Generation | LayoutBench-COCO - Position | AP | 36.4 | ReCo
Text-to-Image Generation | COCO-MIG | instance success rate | 0.55 | ReCo
Text-to-Image Generation | COCO-MIG | mIoU | 0.49 | ReCo
10-shot image generation | COCO-MIG | instance success rate | 0.55 | ReCo
10-shot image generation | COCO-MIG | mIoU | 0.49 | ReCo
1 Image, 2*2 Stitchi | COCO-MIG | instance success rate | 0.55 | ReCo
1 Image, 2*2 Stitchi | COCO-MIG | mIoU | 0.49 | ReCo

Related Papers

fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting (2025-07-17)
Synthesizing Reality: Leveraging the Generative AI-Powered Platform Midjourney for Construction Worker Detection (2025-07-17)
FashionPose: Text to Pose to Relight Image Generation for Personalized Fashion Visualization (2025-07-17)
A Distributed Generative AI Approach for Heterogeneous Multi-Domain Environments under Data Sharing constraints (2025-07-17)
Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)
FADE: Adversarial Concept Erasure in Flow Models (2025-07-16)
CharaConsist: Fine-Grained Consistent Character Generation (2025-07-15)
CATVis: Context-Aware Thought Visualization (2025-07-15)