Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Multi-frame Joint Enhancement for Early Interlaced Videos

Yang Zhao, Yanbo Ma, Yuan Chen, Wei Jia, Ronggang Wang, Xiaoping Liu

2021-09-29 · Video Deinterlacing · Video Reconstruction
Paper | PDF

Abstract

Early interlaced videos usually contain both interlacing artifacts and complex compression artifacts, which significantly reduce visual quality. Although high-definition reconstruction technology for early videos has made great progress in recent years, research on deinterlacing is still lacking. Traditional methods mainly address simple interlacing mechanisms and cannot handle the complex artifacts found in real-world early videos. Recent deep deinterlacing models operate on a single frame and neglect important temporal information. This paper therefore proposes a multi-frame joint enhancement network for early interlaced videos that consists of three modules: a spatial vertical interpolation module, a temporal alignment and fusion module, and a final refinement module. The proposed method effectively removes complex artifacts in early videos by exploiting the temporal redundancy of multiple fields. Experimental results demonstrate that the proposed method recovers high-quality results on both a synthetic dataset and real-world early interlaced videos.
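To make the interlacing mechanism concrete: an interlaced frame stores the two temporal fields on alternating scan lines, and the spatial stage of a deinterlacer must fill in each field's missing rows. The sketch below is not the paper's learned network; it is a minimal NumPy illustration of field splitting plus plain linear vertical interpolation, the classical baseline that the paper's spatial vertical interpolation module replaces with a learned model.

```python
import numpy as np

def split_fields(frame):
    """Split an interlaced frame into its top field (even rows)
    and bottom field (odd rows)."""
    return frame[0::2], frame[1::2]

def vertical_interpolate(field, parity, height):
    """Rebuild a full-height frame from a single field by averaging
    the scan lines above and below each missing row.

    parity: 0 if the field occupies even rows (top field),
            1 if it occupies odd rows (bottom field).
    This linear filter is a simple stand-in for a learned
    spatial interpolation module.
    """
    out = np.zeros((height,) + field.shape[1:], dtype=np.float32)
    out[parity::2] = field  # copy the known scan lines
    for r in range(1 - parity, height, 2):  # fill the missing rows
        above = out[r - 1] if r - 1 >= 0 else out[r + 1]
        below = out[r + 1] if r + 1 < height else out[r - 1]
        out[r] = 0.5 * (above + below)
    return out
```

A multi-frame method like the one proposed here goes further: after this per-field spatial step, neighboring fields are aligned and fused so that detail genuinely present in adjacent fields replaces the interpolated guess.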

Results

Task                  Dataset                      Metric       Value   Model
Video Deinterlacing   MSU Deinterlacer Benchmark   FPS on CPU   1.6     MFDIN (L)
Video Deinterlacing   MSU Deinterlacer Benchmark   PSNR         43.884  MFDIN (L)
Video Deinterlacing   MSU Deinterlacer Benchmark   SSIM         0.979   MFDIN (L)
Video Deinterlacing   MSU Deinterlacer Benchmark   Subjective   1.054   MFDIN (L)
Video Deinterlacing   MSU Deinterlacer Benchmark   VMAF         97.3    MFDIN (L)
Video Deinterlacing   MSU Deinterlacer Benchmark   FPS on CPU   1.6     MFDIN
Video Deinterlacing   MSU Deinterlacer Benchmark   PSNR         39.803  MFDIN
Video Deinterlacing   MSU Deinterlacer Benchmark   SSIM         0.961   MFDIN
Video Deinterlacing   MSU Deinterlacer Benchmark   Subjective   0.963   MFDIN
Video Deinterlacing   MSU Deinterlacer Benchmark   VMAF         94.38   MFDIN

Related Papers

GSVR: 2D Gaussian-based Video Representation for 800+ FPS with Hybrid Deformation Field (2025-07-08)
Quanta Diffusion (2025-06-07)
Voyager: Long-Range and World-Consistent Video Diffusion for Explorable 3D Scene Generation (2025-06-04)
Compressing Human Body Video with Interactive Semantics: A Generative Approach (2025-05-22)
Motion Matters: Compact Gaussian Streaming for Free-Viewpoint Video Reconstruction (2025-05-22)
V2V: Scaling Event-Based Vision through Efficient Video-to-Voxel Simulation (2025-05-22)
Learning Adaptive and Temporally Causal Video Tokenization in a 1D Latent Space (2025-05-22)
Few-shot Semantic Encoding and Decoding for Video Surveillance (2025-05-12)