
Few-shot Video-to-Video Synthesis

Ting-Chun Wang, Ming-Yu Liu, Andrew Tao, Guilin Liu, Jan Kautz, Bryan Catanzaro

2019-10-28 · NeurIPS 2019 · Video-to-Video Synthesis

Abstract

Video-to-video synthesis (vid2vid) aims at converting an input semantic video, such as videos of human poses or segmentation masks, to an output photorealistic video. While the state of the art in vid2vid has advanced significantly, existing approaches share two major limitations. First, they are data-hungry: numerous images of a target human subject or scene are required for training. Second, a learned model has limited generalization capability: a pose-to-human vid2vid model can only synthesize poses of the person in the training set and does not generalize to other humans. To address these limitations, we propose a few-shot vid2vid framework, which learns to synthesize videos of previously unseen subjects or scenes by leveraging a few example images of the target at test time. Our model achieves this few-shot generalization capability via a novel network weight generation module utilizing an attention mechanism. We conduct extensive experimental validation with comparisons to strong baselines on several large-scale video datasets, including human-dancing videos, talking-head videos, and street-scene videos. The experimental results verify the effectiveness of the proposed framework in addressing the two limitations of existing vid2vid approaches.
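
The core idea, generating network weights from a few example images via an attention mechanism, can be illustrated with a short sketch. The PyTorch module below is a deliberately simplified, hypothetical illustration (all names, such as FewShotWeightGenerator, are invented here, and the layer sizes are arbitrary): it encodes the example images, attends over them using the current semantic frame as the query, and maps the attended appearance code to convolution weights applied as a dynamic convolution. The paper's actual module is considerably more elaborate and is wired into its full image synthesis network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FewShotWeightGenerator(nn.Module):
    """Toy sketch of attention-based network weight generation.

    Encodes K example images of an unseen target, attends over them
    with the current semantic frame as the query, and maps the
    aggregated appearance code to the weights of one conv layer.
    """

    def __init__(self, feat_dim=256, out_ch=64, in_ch=64, ksize=3):
        super().__init__()
        # Encoder for the few-shot example images (keys/values).
        self.example_enc = nn.Sequential(
            nn.Conv2d(3, feat_dim, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Encoder for the semantic input frame (query).
        self.query_enc = nn.Sequential(
            nn.Conv2d(3, feat_dim, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Maps the attended appearance code to flattened conv weights.
        self.weight_head = nn.Linear(feat_dim, out_ch * in_ch * ksize * ksize)
        self.shape = (out_ch, in_ch, ksize, ksize)

    def forward(self, examples, semantic_frame):
        # examples: (K, 3, H, W) few-shot images of the target
        # semantic_frame: (1, 3, H, W) current pose / segmentation map
        keys = self.example_enc(examples)        # (K, feat_dim)
        query = self.query_enc(semantic_frame)   # (1, feat_dim)
        # Scaled dot-product attention over the K example images.
        attn = F.softmax(query @ keys.t() / keys.size(1) ** 0.5, dim=-1)
        code = attn @ keys                       # (1, feat_dim)
        # Generated weights parameterize a downstream synthesis layer.
        return self.weight_head(code).view(self.shape)

# Usage: apply the generated kernel as a dynamic convolution.
gen = FewShotWeightGenerator()
w = gen(torch.randn(3, 3, 128, 128), torch.randn(1, 3, 128, 128))
feat = torch.randn(1, 64, 32, 32)
out = F.conv2d(feat, w, padding=1)
```

Because the weights are produced at test time from the example images, the synthesis network can adapt to a subject it never saw during training, which is what distinguishes this setup from a conventional vid2vid model with fixed, trained weights.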

Results

Task  | Dataset         | Metric | Value  | Model
------|-----------------|--------|--------|------------------------
Video | YouTube Dancing | FID    | 80.44  | Few-shot Video-to-Video
Video | Street Scene    | FID    | 144.24 | Few-shot Video-to-Video
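
The metric reported above is the Fréchet Inception Distance (FID), where lower is better: it fits a Gaussian to Inception features of real and synthesized frames and measures the Fréchet distance between the two. Below is a minimal sketch of the standard formula, assuming the feature means and covariances have already been estimated; the helper name fid_from_stats is illustrative and is not from the paper's evaluation code.

```python
import numpy as np
from scipy import linalg

def fid_from_stats(mu1, sigma1, mu2, sigma2):
    """FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2)),
    where (mu, S) are the mean and covariance of Inception features
    for real and synthesized frames, respectively."""
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2)   # matrix square root
    if np.iscomplexobj(covmean):
        covmean = covmean.real                # drop numerical imaginary noise
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```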

Related Papers

Video to Video Generative Adversarial Network for Few-shot Learning Based on Policy Gradient (2024-10-28)
Ada-VE: Training-Free Consistent Video Editing Using Adaptive Motion Prior (2024-06-07)
MeshBrush: Painting the Anatomical Mesh with Neural Stylization for Endoscopy (2024-04-03)
Translation-based Video-to-Video Synthesis (2024-04-03)
FlowVid: Taming Imperfect Optical Flows for Consistent Video-to-Video Synthesis (2023-12-29)
Fairy: Fast Parallelized Instruction-Guided Video-to-Video Synthesis (2023-12-20)
SketchBetween: Video-to-Video Synthesis for Sprite Animation via Sketches (2022-09-01)
Fast-Vid2Vid: Spatial-Temporal Compression for Video-to-Video Synthesis (2022-07-11)