
StyleDiffusion: Prompt-Embedding Inversion for Text-Based Editing

Senmao Li, Joost Van de Weijer, Taihang Hu, Fahad Shahbaz Khan, Qibin Hou, Yaxing Wang, Jian Yang, Ming-Ming Cheng

2023-03-28 · Text-based Image Editing
Paper · PDF · Code (official)

Abstract

A significant research effort is focused on exploiting the impressive capabilities of pretrained diffusion models for image editing. Existing methods either finetune the model or invert the image into the latent space of the pretrained model. However, they suffer from two problems: (1) unsatisfactory results in selected regions and unexpected changes in non-selected regions; (2) they require careful text-prompt editing, where the prompt must include all visual objects in the input image. To address this, we propose two improvements: (1) optimizing only the input of the value linear network in the cross-attention layers is sufficiently powerful to reconstruct a real image; (2) we propose attention regularization to preserve the object-like attention maps after reconstruction and editing, enabling accurate style editing without significant structural changes. We further improve the editing technique used for the unconditional branch of classifier-free guidance, as used by P2P. Extensive prompt-editing experiments on a variety of images demonstrate, qualitatively and quantitatively, that our method has superior editing capabilities compared to existing and concurrent work. Code is available at https://github.com/sen-mao/StyleDiffusion.
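The inversion idea in the abstract lends itself to a compact illustration. Below is a minimal, hedged PyTorch sketch, not the official implementation: in the paper the value input comes from a learned mapping network and the loss reconstructs diffusion latents, whereas here a free embedding is fit to a toy target. The point it shows is that only the tensor fed to the value projection of cross-attention is optimized, so the frozen query/key pathway, and hence the attention maps, stays fixed by construction.

```python
# Toy sketch of value-input-only inversion (all names/shapes are assumptions).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d = 64        # feature dimension
n_tok = 8     # text tokens
n_pix = 16    # flattened image-feature "pixels"

# Frozen stand-ins for one cross-attention layer of the pretrained U-Net.
W_q, W_k, W_v = (torch.nn.Linear(d, d) for _ in range(3))
for m in (W_q, W_k, W_v):
    m.requires_grad_(False)

text_emb = torch.randn(n_tok, d)   # fixed prompt embedding (drives Q/K)
img_feat = torch.randn(n_pix, d)   # image features acting as queries
target = torch.randn(n_pix, d)     # stand-in for the reconstruction target

# The only trainable tensor: the input to the value linear network.
value_input = text_emb.clone().requires_grad_(True)
opt = torch.optim.Adam([value_input], lr=1e-2)

for step in range(300):
    # Attention maps use the FIXED embedding, so optimization never
    # touches them -- the structure-carrying maps stay "object-like".
    attn = torch.softmax(W_q(img_feat) @ W_k(text_emb).T / d ** 0.5, dim=-1)
    out = attn @ W_v(value_input)  # only the value pathway is learned
    loss = F.mse_loss(out, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final reconstruction loss: {loss.item():.4f}")
```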

Results

Task                     | Dataset   | Metric             | Value | Model
Image Generation         | PIE-Bench | Background LPIPS   | 66.1  | StyleDiffusion+Prompt-to-Prompt
Image Generation         | PIE-Bench | Background PSNR    | 26.05 | StyleDiffusion+Prompt-to-Prompt
Image Generation         | PIE-Bench | CLIPSIM            | 24.78 | StyleDiffusion+Prompt-to-Prompt
Image Generation         | PIE-Bench | Structure Distance | 11.65 | StyleDiffusion+Prompt-to-Prompt
Text-to-Image Generation | PIE-Bench | Background LPIPS   | 66.1  | StyleDiffusion+Prompt-to-Prompt
Text-to-Image Generation | PIE-Bench | Background PSNR    | 26.05 | StyleDiffusion+Prompt-to-Prompt
Text-to-Image Generation | PIE-Bench | CLIPSIM            | 24.78 | StyleDiffusion+Prompt-to-Prompt
Text-to-Image Generation | PIE-Bench | Structure Distance | 11.65 | StyleDiffusion+Prompt-to-Prompt
10-shot image generation | PIE-Bench | Background LPIPS   | 66.1  | StyleDiffusion+Prompt-to-Prompt
10-shot image generation | PIE-Bench | Background PSNR    | 26.05 | StyleDiffusion+Prompt-to-Prompt
10-shot image generation | PIE-Bench | CLIPSIM            | 24.78 | StyleDiffusion+Prompt-to-Prompt
10-shot image generation | PIE-Bench | Structure Distance | 11.65 | StyleDiffusion+Prompt-to-Prompt
1 Image, 2*2 Stitchi     | PIE-Bench | Background LPIPS   | 66.1  | StyleDiffusion+Prompt-to-Prompt
1 Image, 2*2 Stitchi     | PIE-Bench | Background PSNR    | 26.05 | StyleDiffusion+Prompt-to-Prompt
1 Image, 2*2 Stitchi     | PIE-Bench | CLIPSIM            | 24.78 | StyleDiffusion+Prompt-to-Prompt
1 Image, 2*2 Stitchi     | PIE-Bench | Structure Distance | 11.65 | StyleDiffusion+Prompt-to-Prompt
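For reference, the background metrics above are standard image-similarity measures computed between the source image and the edited result. A minimal sketch of PSNR follows; the exact PIE-Bench masking protocol and the LPIPS/CLIP similarity networks are not reproduced here, and the noise level is purely illustrative.

```python
# Hedged sketch: peak signal-to-noise ratio, the basis of "Background PSNR".
import numpy as np

def psnr(x: np.ndarray, y: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR in dB between two images in [0, max_val]."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Gaussian noise with std ~12.8 gray levels yields roughly 26 dB,
# the scale of the value reported in the table (illustrative only).
src = np.random.randint(0, 256, (256, 256, 3)).astype(np.float64)
noisy = np.clip(src + np.random.normal(0, 12.8, src.shape), 0, 255)
print(round(psnr(src, noisy), 2))
```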

Related Papers

NoHumansRequired: Autonomous High-Quality Image Editing Triplet Mining (2025-07-18)
Cora: Correspondence-aware image editing using few step diffusion (2025-05-29)
POEM: Precise Object-level Editing via MLLM control (2025-04-10)
ILLUME+: Illuminating Unified MLLM with Dual Visual Tokenization and Diffusion Refinement (2025-04-02)
KV-Edit: Training-Free Image Editing for Precise Background Preservation (2025-02-24)
PartEdit: Fine-Grained Image Editing using Pre-Trained Diffusion Models (2025-02-06)
FeedEdit: Text-Based Image Editing with Dynamic Feedback Regulation (2025-01-01)
FireFlow: Fast Inversion of Rectified Flow for Image Semantic Editing (2024-12-10)