Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Negative-prompt Inversion: Fast Image Inversion for Editing with Text-guided Diffusion Models

Daiki Miyake, Akihiro Iohara, Yu Saito, Toshiyuki Tanaka

2023-05-26 · Text-based Image Editing
Paper · PDF

Abstract

In image editing employing diffusion models, it is crucial to preserve the reconstruction fidelity to the original image while changing its style. Although existing methods ensure reconstruction fidelity through optimization, a drawback of these methods is the significant amount of time required for optimization. In this paper, we propose negative-prompt inversion, a method capable of achieving equivalent reconstruction solely through forward propagation without optimization, thereby enabling ultrafast editing processes. We experimentally demonstrate that the reconstruction fidelity of our method is comparable to that of existing methods, allowing for inversion at a resolution of 512 pixels and with 50 sampling steps within approximately 5 seconds, which is more than 30 times faster than null-text inversion. The reduction in computation time achieved by the proposed method further allows us to use a larger number of sampling steps in diffusion models to improve the reconstruction fidelity with only a moderate increase in computation time.
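The speedup rests on a simple algebraic observation about classifier-free guidance (CFG). Null-text inversion optimizes the null (negative) embedding at each timestep so that the guided prediction matches the DDIM inversion trajectory; if the negative embedding is instead set equal to the source prompt embedding, the guided noise prediction collapses to the conditional prediction for any guidance scale, so no optimization is needed. The sketch below is illustrative only (the `cfg_noise` helper and the random stand-in for the noise prediction are ours, not the paper's code) and just checks that identity numerically:

```python
import numpy as np

def cfg_noise(eps_cond, eps_neg, w):
    """Classifier-free guidance combination: eps_neg + w * (eps_cond - eps_neg)."""
    return eps_neg + w * (eps_cond - eps_neg)

# Stand-in for a U-Net noise prediction conditioned on the source prompt.
rng = np.random.default_rng(0)
eps_c = rng.standard_normal(4)

# Negative-prompt inversion: use the source prompt itself as the negative
# prompt. The guidance term w * (eps_c - eps_c) vanishes, so the guided
# prediction equals the conditional one at every guidance scale, matching
# the DDIM inversion trajectory with a single forward pass.
for w in (1.0, 7.5, 15.0):
    assert np.allclose(cfg_noise(eps_c, eps_c, w), eps_c)
```

During editing, the target prompt is then used as the positive condition while the source prompt serves as the negative prompt, preserving reconstruction of unedited content without any per-image optimization.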

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Generation | PIE-Bench | Background LPIPS | 69.01 | Negative-Prompt Inversion+Prompt-to-Prompt |
| Image Generation | PIE-Bench | Background PSNR | 26.21 | Negative-Prompt Inversion+Prompt-to-Prompt |
| Image Generation | PIE-Bench | CLIPSIM | 24.61 | Negative-Prompt Inversion+Prompt-to-Prompt |
| Image Generation | PIE-Bench | Structure Distance | 16.17 | Negative-Prompt Inversion+Prompt-to-Prompt |
| Text-to-Image Generation | PIE-Bench | Background LPIPS | 69.01 | Negative-Prompt Inversion+Prompt-to-Prompt |
| Text-to-Image Generation | PIE-Bench | Background PSNR | 26.21 | Negative-Prompt Inversion+Prompt-to-Prompt |
| Text-to-Image Generation | PIE-Bench | CLIPSIM | 24.61 | Negative-Prompt Inversion+Prompt-to-Prompt |
| Text-to-Image Generation | PIE-Bench | Structure Distance | 16.17 | Negative-Prompt Inversion+Prompt-to-Prompt |
| 10-shot image generation | PIE-Bench | Background LPIPS | 69.01 | Negative-Prompt Inversion+Prompt-to-Prompt |
| 10-shot image generation | PIE-Bench | Background PSNR | 26.21 | Negative-Prompt Inversion+Prompt-to-Prompt |
| 10-shot image generation | PIE-Bench | CLIPSIM | 24.61 | Negative-Prompt Inversion+Prompt-to-Prompt |
| 10-shot image generation | PIE-Bench | Structure Distance | 16.17 | Negative-Prompt Inversion+Prompt-to-Prompt |
| 1 Image, 2*2 Stitchi | PIE-Bench | Background LPIPS | 69.01 | Negative-Prompt Inversion+Prompt-to-Prompt |
| 1 Image, 2*2 Stitchi | PIE-Bench | Background PSNR | 26.21 | Negative-Prompt Inversion+Prompt-to-Prompt |
| 1 Image, 2*2 Stitchi | PIE-Bench | CLIPSIM | 24.61 | Negative-Prompt Inversion+Prompt-to-Prompt |
| 1 Image, 2*2 Stitchi | PIE-Bench | Structure Distance | 16.17 | Negative-Prompt Inversion+Prompt-to-Prompt |

Related Papers

- NoHumansRequired: Autonomous High-Quality Image Editing Triplet Mining (2025-07-18)
- Cora: Correspondence-aware image editing using few step diffusion (2025-05-29)
- POEM: Precise Object-level Editing via MLLM control (2025-04-10)
- ILLUME+: Illuminating Unified MLLM with Dual Visual Tokenization and Diffusion Refinement (2025-04-02)
- KV-Edit: Training-Free Image Editing for Precise Background Preservation (2025-02-24)
- PartEdit: Fine-Grained Image Editing using Pre-Trained Diffusion Models (2025-02-06)
- FeedEdit: Text-Based Image Editing with Dynamic Feedback Regulation (2025-01-01)
- FireFlow: Fast Inversion of Rectified Flow for Image Semantic Editing (2024-12-10)