Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation

Huaizu Jiang, Deqing Sun, Varun Jampani, Ming-Hsuan Yang, Erik Learned-Miller, Jan Kautz

2017-11-30 · CVPR 2018
Tasks: Optical Flow Estimation · Video Frame Interpolation
Paper · PDF · Code

Abstract

Given two consecutive frames, video interpolation aims at generating intermediate frame(s) to form both spatially and temporally coherent video sequences. While most existing methods focus on single-frame interpolation, we propose an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled. We start by computing bi-directional optical flow between the input images using a U-Net architecture. These flows are then linearly combined at each time step to approximate the intermediate bi-directional optical flows. These approximate flows, however, only work well in locally smooth regions and produce artifacts around motion boundaries. To address this shortcoming, we employ another U-Net to refine the approximated flow and also predict soft visibility maps. Finally, the two input images are warped and linearly fused to form each intermediate frame. By applying the visibility maps to the warped images before fusion, we exclude the contribution of occluded pixels to the interpolated intermediate frame to avoid artifacts. Since none of our learned network parameters are time-dependent, our approach is able to produce as many intermediate frames as needed. We use 1,132 240-fps video clips, containing 300K individual video frames, to train our network. Experimental results on several datasets, predicting different numbers of interpolated frames, demonstrate that our approach performs consistently better than existing methods.
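The two analytic steps in the abstract — linearly combining the bi-directional flows to approximate the intermediate flows, and visibility-weighted fusion of the warped inputs — can be sketched in a few lines of numpy. This is a minimal illustration of those equations only, not the official implementation: the function names are ours, and the U-Net flow estimation, flow refinement, and backward warping are assumed to have already produced the inputs.

```python
import numpy as np

def approximate_flows(t, f01, f10):
    """Approximate the intermediate flows F_{t->0} and F_{t->1}
    by linearly combining the bi-directional flows F_{0->1} (f01)
    and F_{1->0} (f10) at time step t in [0, 1]."""
    f_t0 = -(1.0 - t) * t * f01 + t * t * f10
    f_t1 = (1.0 - t) ** 2 * f01 - t * (1.0 - t) * f10
    return f_t0, f_t1

def fuse(t, warped0, warped1, v_t0, v_t1, eps=1e-8):
    """Visibility-weighted linear fusion of the two warped inputs:
    pixels marked occluded (low visibility v) contribute less,
    which suppresses artifacts near motion boundaries."""
    w0 = (1.0 - t) * v_t0  # weight for the frame warped from I_0
    w1 = t * v_t1          # weight for the frame warped from I_1
    return (w0 * warped0 + w1 * warped1) / (w0 + w1 + eps)
```

Note the boundary behaviour: at t = 0 the approximated F_{t->0} vanishes and F_{t->1} reduces to F_{0->1}, and symmetrically at t = 1 — consistent with the time-independent parameterization that lets the network emit arbitrarily many intermediate frames.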

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Video | MSU Video Frame Interpolation | FPS | 3.1 | Super-SloMo |
| Video | MSU Video Frame Interpolation | LPIPS | 0.068 | Super-SloMo |
| Video | MSU Video Frame Interpolation | MS-SSIM | 0.924 | Super-SloMo |
| Video | MSU Video Frame Interpolation | PSNR | 26.69 | Super-SloMo |
| Video | MSU Video Frame Interpolation | SSIM | 0.904 | Super-SloMo |
| Video | MSU Video Frame Interpolation | Subjective score | 1.11 | Super-SloMo |
| Video | MSU Video Frame Interpolation | VMAF | 61.35 | Super-SloMo |
| Video Frame Interpolation | MSU Video Frame Interpolation | FPS | 3.1 | Super-SloMo |
| Video Frame Interpolation | MSU Video Frame Interpolation | LPIPS | 0.068 | Super-SloMo |
| Video Frame Interpolation | MSU Video Frame Interpolation | MS-SSIM | 0.924 | Super-SloMo |
| Video Frame Interpolation | MSU Video Frame Interpolation | PSNR | 26.69 | Super-SloMo |
| Video Frame Interpolation | MSU Video Frame Interpolation | SSIM | 0.904 | Super-SloMo |
| Video Frame Interpolation | MSU Video Frame Interpolation | Subjective score | 1.11 | Super-SloMo |
| Video Frame Interpolation | MSU Video Frame Interpolation | VMAF | 61.35 | Super-SloMo |
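For readers unfamiliar with the fidelity metrics in the table, PSNR is the simplest: a log-scale measure of the mean squared error between the interpolated frame and the ground truth. A minimal numpy sketch, assuming 8-bit images (peak value 255):

```python
import numpy as np

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images.
    Higher is better; identical images give infinity."""
    mse = np.mean((reference.astype(np.float64)
                   - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

SSIM, MS-SSIM, LPIPS, and VMAF are perceptually oriented alternatives that weigh structure and learned features rather than raw pixel error, which is why the table reports them alongside PSNR.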

Related Papers

Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
An Efficient Approach for Muscle Segmentation and 3D Reconstruction Using Keypoint Tracking in MRI Scan (2025-07-11)
Learning to Track Any Points from Human Motion (2025-07-08)
TLB-VFI: Temporal-Aware Latent Brownian Bridge Diffusion for Video Frame Interpolation (2025-07-07)
MEMFOF: High-Resolution Training for Memory-Efficient Multi-Frame Optical Flow Estimation (2025-06-29)
EndoFlow-SLAM: Real-Time Endoscopic SLAM with Flow-Constrained Gaussian Splatting (2025-06-26)
WAFT: Warping-Alone Field Transforms for Optical Flow (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)