Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Many-to-many Splatting for Efficient Video Frame Interpolation

Ping Hu, Simon Niklaus, Stan Sclaroff, Kate Saenko

2022-04-07 · CVPR 2022
Tasks: Optical Flow Estimation · Motion Estimation · Video Frame Interpolation
Paper · PDF · Code (official)

Abstract

Motion-based video frame interpolation commonly relies on optical flow to warp pixels from the inputs to the desired interpolation instant. Yet due to the inherent challenges of motion estimation (e.g. occlusions and discontinuities), most state-of-the-art interpolation approaches require subsequent refinement of the warped result to generate satisfying outputs, which drastically decreases the efficiency for multi-frame interpolation. In this work, we propose a fully differentiable Many-to-Many (M2M) splatting framework to interpolate frames efficiently. Specifically, given a frame pair, we estimate multiple bidirectional flows to directly forward warp the pixels to the desired time step, and then fuse any overlapping pixels. In doing so, each source pixel renders multiple target pixels and each target pixel can be synthesized from a larger area of visual context. This establishes a many-to-many splatting scheme with robustness to artifacts like holes. Moreover, for each input frame pair, M2M only performs motion estimation once and has a minuscule computational overhead when interpolating an arbitrary number of in-between frames, hence achieving fast multi-frame interpolation. We conducted extensive experiments to analyze M2M, and found that it significantly improves efficiency while maintaining high effectiveness.
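The core idea in the abstract (forward-warp each source pixel to time step t along its estimated flow, then fuse any overlapping contributions) can be sketched in a few lines of NumPy. This is a minimal, hypothetical illustration of bilinear forward splatting with average fusion, not the paper's implementation: M2M actually estimates multiple bidirectional flows per pixel and fuses overlaps with learned reliability weights, and the function and parameter names below are assumptions.

```python
import numpy as np

def forward_splat(frame, flow, t):
    """Hypothetical sketch of bilinear forward (average) splatting.

    frame: (H, W, C) source image
    flow:  (H, W, 2) per-pixel flow from the source toward the other input
    t:     interpolation instant in [0, 1]; each pixel moves by t * flow
    """
    H, W, C = frame.shape
    out = np.zeros((H, W, C))
    weight = np.zeros((H, W, 1))
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Target (sub-pixel) positions at time t.
    tx = xs + t * flow[..., 0]
    ty = ys + t * flow[..., 1]
    for dy in (0, 1):
        for dx in (0, 1):
            # Bilinear splat: each source pixel contributes to up to four
            # target pixels; overlapping contributions are accumulated.
            ix = np.floor(tx).astype(int) + dx
            iy = np.floor(ty).astype(int) + dy
            w = (1 - np.abs(tx - ix)) * (1 - np.abs(ty - iy))
            valid = (ix >= 0) & (ix < W) & (iy >= 0) & (iy < H) & (w > 0)
            np.add.at(out, (iy[valid], ix[valid]),
                      frame[ys[valid], xs[valid]] * w[valid, None])
            np.add.at(weight, (iy[valid], ix[valid], 0), w[valid])
    # Average fusion: normalize by the accumulated splatting weight.
    return out / np.maximum(weight, 1e-8)
```

Because the flow is estimated once per input pair, rendering additional in-between frames only repeats this cheap splatting step for new values of t, which is what makes multi-frame interpolation fast in this scheme.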

Results

Task: Video Frame Interpolation · Model: M2M-PWC

Dataset          PSNR    SSIM
Vimeo90K         35.4    0.978
Xiph-2K          36.45   0.967
ATD-12K          29.03   0.959
UCF101           35.17   0.97
Xiph-4K (Crop)   33.93   0.945
X4K1000FPS       30.81   0.912
X4K1000FPS-2K    32.07   0.923

Related Papers

Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
An Efficient Approach for Muscle Segmentation and 3D Reconstruction Using Keypoint Tracking in MRI Scan (2025-07-11)
HiM2SAM: Enhancing SAM2 with Hierarchical Motion Estimation and Memory Optimization towards Long-term Tracking (2025-07-10)
Learning to Track Any Points from Human Motion (2025-07-08)
TLB-VFI: Temporal-Aware Latent Brownian Bridge Diffusion for Video Frame Interpolation (2025-07-07)
MEMFOF: High-Resolution Training for Memory-Efficient Multi-Frame Optical Flow Estimation (2025-06-29)
EndoFlow-SLAM: Real-Time Endoscopic SLAM with Flow-Constrained Gaussian Splatting (2025-06-26)