Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, Yong Jae Lee
Large-scale text-to-image diffusion models have made remarkable advances. However, the status quo is to condition on text input alone, which limits controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text-to-image generation with caption and bounding-box inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN's zero-shot performance on COCO and LVIS outperforms that of existing supervised layout-to-image baselines by a large margin.
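The gated injection described above can be illustrated with a minimal sketch. This is not the paper's implementation: the toy `self_attention` (identity projections, single head) and the function names are assumptions made for brevity. The key idea it does reproduce is that grounding tokens are appended to the visual tokens, attended over jointly, and blended back through a `tanh(gamma)` gate, with `gamma` initialized to 0 so the frozen model's behavior is untouched at the start of training.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d):
    # Toy single-head self-attention with identity Q/K/V projections,
    # standing in for the new trainable attention layer.
    scores = tokens @ tokens.T / np.sqrt(d)
    return softmax(scores) @ tokens

def gated_grounding_injection(visual, grounding, gamma):
    """Append grounding tokens, attend over the joint sequence,
    keep only the visual positions, and blend via a tanh gate.
    With gamma = 0 (its initialization) the layer is an identity
    over the frozen model's features; the gate opens during training."""
    d = visual.shape[-1]
    joint = np.concatenate([visual, grounding], axis=0)
    attended = self_attention(joint, d)[: visual.shape[0]]
    return visual + np.tanh(gamma) * attended

# gamma = 0 leaves the pre-trained features unchanged
v = np.random.randn(4, 8)
g = np.random.randn(2, 8)  # e.g. encoded (phrase, bounding box) tokens
print(np.allclose(gated_grounding_injection(v, g, gamma=0.0), v))  # True
```

In the actual model this gated layer is inserted into each transformer block of the frozen diffusion backbone, and only the new layers are trained on grounding data.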
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Generation | COCO (Common Objects in Context) | FID | 5.61 | GLIGEN (fine-tuned, Detection + Caption data) |
| Image Generation | COCO (Common Objects in Context) | FID | 5.82 | GLIGEN (fine-tuned, Detection data only) |
| Image Generation | COCO (Common Objects in Context) | FID | 6.38 | GLIGEN (fine-tuned, Grounding data) |
| Image Generation | COCO-MIG | instance success rate | 0.3 | GLIGEN (zero-shot) |
| Image Generation | COCO-MIG | mIoU | 0.27 | GLIGEN (zero-shot) |
| Image Generation | LayoutBench-COCO - Size | AP | 33.3 | GLIGEN |
| Image Generation | LayoutBench-COCO - Combination | AP | 36.3 | GLIGEN |
| Image Generation | LayoutBench-COCO - Number | AP | 30.7 | GLIGEN |
| Image Generation | LayoutBench-COCO - Position | AP | 38.9 | GLIGEN |