
BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion

Jinheng Xie, Yuexiang Li, Yawen Huang, Haozhe Liu, Wentian Zhang, Yefeng Zheng, Mike Zheng Shou

2023-07-20 · ICCV 2023
Tasks: Denoising · Text-to-Image Generation · Conditional Text-to-Image Synthesis · Image Generation
Links: Paper · PDF · Code (official)

Abstract

Recent text-to-image diffusion models have demonstrated an astonishing capacity to generate high-quality images. However, research has mainly focused on synthesizing images from text prompts alone. While some works have explored using other modalities as conditions, they require considerable paired data, e.g., box/mask-image pairs, and fine-tuning time to train the models. Because such paired data are time-consuming and labor-intensive to acquire, and are restricted to a closed set, this potentially becomes a bottleneck for applications in an open world. This paper focuses on the simplest form of user-provided conditions, e.g., a box or scribble. To mitigate the aforementioned problem, we propose a training-free method to control the objects and contexts in synthesized images so that they adhere to the given spatial conditions. Specifically, three spatial constraints, i.e., Inner-Box, Outer-Box, and Corner Constraints, are designed and seamlessly integrated into the denoising step of diffusion models, requiring neither additional training nor massive annotated layout data. Extensive experimental results demonstrate that the proposed constraints can control what to present in the images and where, while retaining the ability of diffusion models to synthesize with high fidelity and diverse concept coverage. The code is publicly available at https://github.com/showlab/BoxDiff.
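
The abstract leaves the mechanics of the constraints implicit. Below is a minimal sketch of the general idea, assuming (as in attention-guidance methods) that each constraint scores the model's cross-attention map for a grounded token against the user's box and nudges the noisy latent down the gradient of that score at each denoising step. The Corner constraint is omitted for brevity, and helper names such as `get_cross_attention_map`, the tensor shapes, and the top-k choice are illustrative assumptions, not the authors' API; consult the official repository for the actual formulation.

```python
# Illustrative sketch of training-free box guidance in a diffusion
# denoising loop. Shapes, hyperparameters, and the attention hook are
# assumptions for the example, not the official BoxDiff implementation.
import torch

def box_constraint_loss(attn: torch.Tensor, box_mask: torch.Tensor, k: int = 10):
    """attn: (H, W) cross-attention map for one target token, values in [0, 1].
    box_mask: (H, W) binary mask, 1 inside the user-provided box."""
    inner = attn[box_mask.bool()]    # attention responses inside the box
    outer = attn[~box_mask.bool()]   # attention responses outside the box
    # Inner-Box idea: the strongest responses inside the box should be high.
    inner_loss = 1.0 - inner.topk(min(k, inner.numel())).values.mean()
    # Outer-Box idea: the strongest responses outside the box should be low.
    outer_loss = outer.topk(min(k, outer.numel())).values.mean()
    return inner_loss + outer_loss

def guided_denoising_step(latent, t, unet, cond, box_mask, step_size=0.1):
    """One denoising step augmented with a training-free latent update."""
    latent = latent.detach().requires_grad_(True)
    # Hypothetical hook that runs the UNet and returns the cross-attention
    # map of the grounded token at timestep t.
    attn = get_cross_attention_map(unet, latent, t, cond)
    loss = box_constraint_loss(attn, box_mask)
    grad = torch.autograd.grad(loss, latent)[0]
    # Nudge the latent so the token's attention mass moves into the box;
    # the ordinary noise prediction then proceeds on the updated latent.
    return latent - step_size * grad
```

Because the update only touches the latent at sampling time, no weights change and no box-image pairs are needed, which is what makes the method training-free.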

Results

Task                        | Dataset  | Metric                | Value | Model
Image Generation            | COCO-MIG | instance success rate | 0.16  | Box-Diffusion (zero-shot)
Image Generation            | COCO-MIG | mIoU                  | 0.26  | Box-Diffusion (zero-shot)
Text-to-Image Generation    | COCO-MIG | instance success rate | 0.16  | Box-Diffusion (zero-shot)
Text-to-Image Generation    | COCO-MIG | mIoU                  | 0.26  | Box-Diffusion (zero-shot)
10-shot image generation    | COCO-MIG | instance success rate | 0.16  | Box-Diffusion (zero-shot)
10-shot image generation    | COCO-MIG | mIoU                  | 0.26  | Box-Diffusion (zero-shot)
1 Image, 2*2 Stitching      | COCO-MIG | instance success rate | 0.16  | Box-Diffusion (zero-shot)
1 Image, 2*2 Stitching      | COCO-MIG | mIoU                  | 0.26  | Box-Diffusion (zero-shot)

Related Papers

fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting (2025-07-17)
Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models (2025-07-17)
Synthesizing Reality: Leveraging the Generative AI-Powered Platform Midjourney for Construction Worker Detection (2025-07-17)
FashionPose: Text to Pose to Relight Image Generation for Personalized Fashion Visualization (2025-07-17)
A Distributed Generative AI Approach for Heterogeneous Multi-Domain Environments under Data Sharing constraints (2025-07-17)
Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
FADE: Adversarial Concept Erasure in Flow Models (2025-07-16)