We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: the increase in model parameters is mainly due to more attention blocks and a larger cross-attention context, as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model, which improves the visual fidelity of samples generated by SDXL via a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared to previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators. In the spirit of promoting open research and fostering transparency in large model training and evaluation, we provide access to code and model weights at https://github.com/Stability-AI/generative-models.
| Task | Dataset | Category | Score | Model |
|---|---|---|---|---|
| Image Generation | WISE | Biology | 0.44 | stable-diffusion-xl-base-0.9 |
| Image Generation | WISE | Chemistry | 0.27 | stable-diffusion-xl-base-0.9 |
| Image Generation | WISE | Cultural | 0.43 | stable-diffusion-xl-base-0.9 |
| Image Generation | WISE | Overall | 0.43 | stable-diffusion-xl-base-0.9 |
| Image Generation | WISE | Physics | 0.45 | stable-diffusion-xl-base-0.9 |
| Image Generation | WISE | Space | 0.47 | stable-diffusion-xl-base-0.9 |
| Image Generation | WISE | Time | 0.48 | stable-diffusion-xl-base-0.9 |