Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


FILM: Frame Interpolation for Large Motion

Fitsum Reda, Janne Kontkanen, Eric Tabellion, Deqing Sun, Caroline Pantofaru, Brian Curless

2022-02-10 · Optical Flow Estimation · Video Frame Interpolation
Paper · PDF · Code · Code (official)

Abstract

We present a frame interpolation algorithm that synthesizes multiple intermediate frames from two input images with large in-between motion. Recent methods use multiple networks to estimate optical flow or depth and a separate network dedicated to frame synthesis. This is often complex and requires scarce optical flow or depth ground-truth. In this work, we present a single unified network, distinguished by a multi-scale feature extractor that shares weights at all scales, and is trainable from frames alone. To synthesize crisp and pleasing frames, we propose to optimize our network with the Gram matrix loss that measures the correlation difference between feature maps. Our approach outperforms state-of-the-art methods on the Xiph large motion benchmark. We also achieve higher scores on Vimeo-90K, Middlebury and UCF101, when comparing to methods that use perceptual losses. We study the effect of weight sharing and of training with datasets of increasing motion range. Finally, we demonstrate our model's effectiveness in synthesizing high quality and temporally coherent videos on a challenging near-duplicate photos dataset. Codes and pre-trained models are available at https://film-net.github.io.
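The Gram matrix loss mentioned in the abstract compares second-order feature statistics (channel-to-channel correlations) between the synthesized and ground-truth frames rather than comparing pixels directly. A minimal sketch of the idea in NumPy, assuming features have already been extracted and flattened spatially (the exact feature layers, weighting, and normalization used by FILM may differ):

```python
import numpy as np

def gram_matrix(feats):
    """Gram matrix of a feature map.

    feats: array of shape (channels, height * width), i.e. one feature
    map flattened over its spatial dimensions.
    """
    c, n = feats.shape
    # Channel-by-channel correlation, normalized by size so the loss
    # scale does not depend on feature-map resolution.
    return feats @ feats.T / (c * n)

def gram_loss(pred_feats, gt_feats):
    """Mean absolute difference between Gram matrices (a sketch of a
    style/Gram loss; not FILM's exact formulation)."""
    return np.abs(gram_matrix(pred_feats) - gram_matrix(gt_feats)).mean()

# Example: identical features give zero loss; differing features do not.
rng = np.random.default_rng(0)
pred = rng.standard_normal((8, 64))
gt = rng.standard_normal((8, 64))
print(gram_loss(pred, pred))  # 0.0
print(gram_loss(pred, gt) > 0)
```

Because the Gram matrix discards spatial position and keeps only correlations between feature channels, matching it encourages the interpolated frame to reproduce the texture statistics of the ground truth, which is why the paper reports crisper, more pleasing frames than pixel-wise losses alone.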

Results

| Task                      | Dataset                       | Metric  | Value | Model |
|---------------------------|-------------------------------|---------|-------|-------|
| Video Frame Interpolation | Vimeo90K                      | PSNR    | 36.06 | FILM  |
| Video Frame Interpolation | Vimeo90K                      | SSIM    | 0.97  | FILM  |
| Video Frame Interpolation | Xiph-2K                       | PSNR    | 36.66 | FILM  |
| Video Frame Interpolation | Xiph-2K                       | SSIM    | 0.951 | FILM  |
| Video Frame Interpolation | Xiph-4K                       | PSNR    | 33.78 | FILM  |
| Video Frame Interpolation | Xiph-4K                       | SSIM    | 0.906 | FILM  |
| Video Frame Interpolation | Middlebury                    | PSNR    | 37.52 | FILM  |
| Video Frame Interpolation | Middlebury                    | SSIM    | 0.966 | FILM  |
| Video Frame Interpolation | UCF101                        | PSNR    | 35.32 | FILM  |
| Video Frame Interpolation | UCF101                        | SSIM    | 0.952 | FILM  |
| Video Frame Interpolation | MSU Video Frame Interpolation | LPIPS   | 0.033 | FILM  |
| Video Frame Interpolation | MSU Video Frame Interpolation | MS-SSIM | 0.948 | FILM  |
| Video Frame Interpolation | MSU Video Frame Interpolation | PSNR    | 28.11 | FILM  |
| Video Frame Interpolation | MSU Video Frame Interpolation | SSIM    | 0.928 | FILM  |
| Video Frame Interpolation | MSU Video Frame Interpolation | VMAF    | 68.68 | FILM  |

Related Papers

- Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
- An Efficient Approach for Muscle Segmentation and 3D Reconstruction Using Keypoint Tracking in MRI Scan (2025-07-11)
- Learning to Track Any Points from Human Motion (2025-07-08)
- TLB-VFI: Temporal-Aware Latent Brownian Bridge Diffusion for Video Frame Interpolation (2025-07-07)
- MEMFOF: High-Resolution Training for Memory-Efficient Multi-Frame Optical Flow Estimation (2025-06-29)
- EndoFlow-SLAM: Real-Time Endoscopic SLAM with Flow-Constrained Gaussian Splatting (2025-06-26)
- WAFT: Warping-Alone Field Transforms for Optical Flow (2025-06-26)
- Feature Hallucination for Self-supervised Action Recognition (2025-06-25)