Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MagDiff: Multi-Alignment Diffusion for High-Fidelity Video Generation and Editing

Haoyu Zhao, Tianyi Lu, Jiaxi Gu, Xing Zhang, Qingping Zheng, Zuxuan Wu, Hang Xu, Yu-Gang Jiang

2023-11-29 · Denoising · Video Editing · Image to Video Generation · Video Generation
Paper · PDF · Code (official)

Abstract

Diffusion models are widely used for video generation and for video editing. Because each field has its own task-specific problems, it is difficult to develop a single diffusion model that completes both tasks simultaneously. A video diffusion model relying solely on the text prompt can be adapted to unify the two tasks; however, it lacks the capability to align the heterogeneous text and image modalities, leading to various misalignment problems. In this work, we propose the first unified Multi-Alignment Diffusion, dubbed MagDiff, for both high-fidelity video generation and editing. The proposed MagDiff introduces three types of alignment: subject-driven alignment, adaptive prompts alignment, and high-fidelity alignment. In particular, subject-driven alignment trades off the image and text prompts, serving as a unified foundation generative model for both tasks. Adaptive prompts alignment emphasizes the different strengths of homogeneous and heterogeneous alignment by assigning different weights to the image and text prompts. High-fidelity alignment further enhances the fidelity of both video generation and editing by taking the subject image as an additional model input. Experimental results on four benchmarks show that our method outperforms prior methods on each task.
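The adaptive prompts alignment described in the abstract can be pictured as a learned weighting of the image-prompt and text-prompt embeddings before they condition the denoising network's cross-attention. The sketch below is only an illustration of that idea under assumed shapes and names (the class `AdaptivePromptAlignment`, the gating MLP, and the CLIP-style token dimensions are all hypothetical, not the authors' implementation):

```python
# Minimal sketch of the adaptive-prompts-alignment idea: image and text prompt
# embeddings are combined with learned, per-sample weights before conditioning
# the denoising UNet's cross-attention. All names and shapes are illustrative
# assumptions, not the MagDiff authors' code.
import torch
import torch.nn as nn


class AdaptivePromptAlignment(nn.Module):
    """Weights homogeneous (image) and heterogeneous (text) prompts adaptively."""

    def __init__(self, dim: int):
        super().__init__()
        # Predict two scalar weights from the pooled prompt features.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU(), nn.Linear(dim, 2))

    def forward(self, image_embed: torch.Tensor, text_embed: torch.Tensor) -> torch.Tensor:
        # image_embed: (B, N_img, D) subject-image tokens (e.g. from a CLIP image encoder)
        # text_embed:  (B, N_txt, D) text tokens (e.g. from a CLIP text encoder)
        pooled = torch.cat([image_embed.mean(dim=1), text_embed.mean(dim=1)], dim=-1)
        w = torch.softmax(self.gate(pooled), dim=-1)  # (B, 2), weights sum to 1
        w_img, w_txt = w[:, 0, None, None], w[:, 1, None, None]
        # Scale each prompt stream by its weight and concatenate along the token axis;
        # the result serves as cross-attention context for the video diffusion UNet.
        return torch.cat([w_img * image_embed, w_txt * text_embed], dim=1)


if __name__ == "__main__":
    align = AdaptivePromptAlignment(dim=768)
    img = torch.randn(2, 257, 768)   # subject-image tokens
    txt = torch.randn(2, 77, 768)    # text-prompt tokens
    print(align(img, txt).shape)     # torch.Size([2, 334, 768])
```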

Results

Task | Dataset | Metric | Value | Model
Video Generation | MSR-VTT | FVD16 | 252 | VideoAssembler (Zero-Shot, 256x256, class-conditional)
Video Generation | MSR-VTT | Inception Score | 15.79 | VideoAssembler (Zero-Shot, 256x256, class-conditional)
Video Generation | UCF-101 | FVD16 | 346.84 | VideoAssembler (Zero-shot, 256x256, class-conditional)
Video Generation | UCF-101 | Inception Score | 48.01 | VideoAssembler (Zero-shot, 256x256, class-conditional)
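The FVD16 values above are Fréchet Video Distance scores computed on 16-frame clips: features of real and generated videos are extracted with a pretrained I3D network, Gaussians are fitted to both feature sets, and the Fréchet distance between them is reported. A minimal sketch of that distance, assuming the I3D features have already been extracted (the random arrays below are stand-ins, not real features), is:

```python
# Hedged sketch of an FVD-style score: the Fréchet distance between Gaussians
# fitted to I3D features of real and generated 16-frame clips. Feature
# extraction with a pretrained I3D network is omitted here.
import numpy as np
from scipy import linalg


def frechet_distance(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """real_feats, fake_feats: (num_clips, feature_dim) I3D embeddings."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    # Matrix square root of the covariance product; keep the real part to drop numerical noise.
    cov_mean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    cov_mean = cov_mean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * cov_mean))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.normal(size=(2048, 400))            # stand-in for I3D features of real clips
    fake = rng.normal(loc=0.1, size=(2048, 400))   # stand-in for generated clips
    print(f"FVD ~ {frechet_distance(real, fake):.2f}")
```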

Related Papers

fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting (2025-07-17)
Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models (2025-07-17)
World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
Leveraging Pre-Trained Visual Models for AI-Generated Video Detection (2025-07-17)
Taming Diffusion Transformer for Real-Time Mobile Video Generation (2025-07-17)
LoViC: Efficient Long Video Generation with Context Compression (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
HUG-VAS: A Hierarchical NURBS-Based Generative Model for Aortic Geometry Synthesis and Controllable Editing (2025-07-15)