Papers With Code


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks

Sihyun Yu, Jihoon Tack, Sangwoo Mo, Hyunsu Kim, Junho Kim, Jung-Woo Ha, Jinwoo Shin

2022-02-21 · ICLR 2022 · Video Generation

Abstract

In the deep learning era, generating long, high-quality videos remains challenging due to the spatio-temporal complexity and continuity of videos. Prior works have modeled video distributions by representing videos as 3D grids of RGB values, which limits the scale of generated videos and neglects continuous dynamics. In this paper, we find that the emerging paradigm of implicit neural representations (INRs), which encodes a continuous signal into a parameterized neural network, effectively mitigates this issue. Building on video INRs, we propose the dynamics-aware implicit generative adversarial network (DIGAN), a novel generative adversarial network for video generation. Specifically, we introduce (a) an INR-based video generator that improves motion dynamics by manipulating the space and time coordinates differently and (b) a motion discriminator that efficiently identifies unnatural motions without observing entire long frame sequences. We demonstrate the superiority of DIGAN on various datasets, along with multiple intriguing properties, e.g., long video synthesis, video extrapolation, and non-autoregressive video generation. For example, DIGAN improves the previous state-of-the-art FVD score on UCF-101 by 30.7% and can be trained on 128-frame videos of 128x128 resolution, 80 frames longer than the 48 frames of the previous state-of-the-art method.
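The core idea of an INR-based video generator is a coordinate network: a video is represented as a function mapping an (x, y, t) coordinate to an RGB value, so any frame at any time can be decoded non-autoregressively. Below is a minimal, hypothetical NumPy sketch of this idea (not the authors' architecture): a random-weight coordinate MLP with Fourier features, where the spatial axes use higher frequencies than the time axis to reflect DIGAN's principle of treating space and time coordinates differently for smooth motion.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(coords, freqs):
    # coords: (N, 3) rows of (x, y, t); freqs: (3, k) per-axis frequency bank
    proj = coords @ freqs                            # (N, k)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)  # (N, 2k)

class CoordMLP:
    """Tiny random-weight MLP mapping Fourier features to RGB (illustrative only)."""
    def __init__(self, in_dim, hidden=32):
        self.w1 = rng.normal(size=(in_dim, hidden)) / np.sqrt(in_dim)
        self.w2 = rng.normal(size=(hidden, 3)) / np.sqrt(hidden)

    def __call__(self, feats):
        h = np.tanh(feats @ self.w1)
        return 1.0 / (1.0 + np.exp(-(h @ self.w2)))  # RGB squashed into (0, 1)

# Frequency bank: high frequencies for space (x, y), low for time (t),
# so content varies sharply across pixels but changes smoothly across frames.
k = 16
freqs = np.vstack([
    rng.normal(scale=10.0, size=(1, k)),  # x
    rng.normal(scale=10.0, size=(1, k)),  # y
    rng.normal(scale=1.0,  size=(1, k)),  # t (lower frequency -> smooth dynamics)
])

mlp = CoordMLP(in_dim=2 * k)

def render_video(T=8, H=16, W=16):
    # Query the INR on a dense (t, y, x) grid; any resolution/length works.
    ts, ys, xs = np.meshgrid(
        np.linspace(0, 1, T), np.linspace(0, 1, H), np.linspace(0, 1, W),
        indexing="ij",
    )
    coords = np.stack([xs, ys, ts], axis=-1).reshape(-1, 3)
    rgb = mlp(fourier_features(coords, freqs))
    return rgb.reshape(T, H, W, 3)

video = render_video()
print(video.shape)  # (8, 16, 16, 3)
```

In the actual GAN setting, the MLP weights would be produced by a generator conditioned on a latent code rather than drawn at random, and decoding frames at arbitrary t is what enables the long-video synthesis and extrapolation properties mentioned above.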

Results

Task             | Dataset | Metric          | Value | Model
Video Generation | UCF-101 | FVD16           | 465   | DIGAN (128x128, class-conditional)
Video Generation | UCF-101 | Inception Score | 59.68 | DIGAN (128x128, class-conditional)
Video Generation | UCF-101 | KVD16           | 39.6  | DIGAN (128x128, class-conditional)
Video Generation | UCF-101 | FVD16           | 577   | DIGAN (128x128, unconditional)
Video Generation | UCF-101 | Inception Score | 32.7  | DIGAN (128x128, unconditional)

Related Papers

World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
Leveraging Pre-Trained Visual Models for AI-Generated Video Detection (2025-07-17)
Taming Diffusion Transformer for Real-Time Mobile Video Generation (2025-07-17)
LoViC: Efficient Long Video Generation with Context Compression (2025-07-17)
$I^{2}$-World: Intra-Inter Tokenization for Efficient Dynamic 4D Scene Forecasting (2025-07-12)
Lumos-1: On Autoregressive Video Generation from a Unified Model Perspective (2025-07-11)
Scaling RL to Long Videos (2025-07-10)
Martian World Models: Controllable Video Synthesis with Physically Accurate 3D Reconstructions (2025-07-10)