Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


UniINR: Event-guided Unified Rolling Shutter Correction, Deblurring, and Interpolation

Yunfan Lu, Guoqiang Liang, Yusheng Wang, Lin Wang, Hui Xiong

2023-05-24 · Deblurring · Image Restoration · Rolling Shutter Correction

Paper · PDF · Code (official)

Abstract

Video frames captured by rolling shutter (RS) cameras during fast camera movement frequently exhibit RS distortion and blur simultaneously. Naturally, recovering high-frame-rate global shutter (GS) sharp frames from an RS blur frame must simultaneously consider RS correction, deblurring, and frame interpolation. A naive way is to decompose the whole process into separate tasks and cascade existing methods; however, this results in cumulative errors and noticeable artifacts. Event cameras enjoy many advantages, e.g., high temporal resolution, making them well suited to our problem. To this end, we propose UniINR, the first approach to recover arbitrary frame-rate sharp GS frames from an RS blur frame and paired events. Our key idea is a unified spatial-temporal implicit neural representation (INR) that directly maps position and time coordinates to color values, addressing the interlocking degradations. Specifically, we introduce spatial-temporal implicit encoding (STE) to convert an RS blur image and events into a spatial-temporal representation (STR). To query a specific sharp frame (GS or RS), we embed the exposure time into STR and decode the embedded features pixel by pixel to recover a sharp frame. Our method features a lightweight model with only 0.38M parameters and enjoys high inference efficiency, achieving 2.83 ms per frame for 31× frame interpolation of an RS blur frame. Extensive experiments show that our method significantly outperforms prior methods. Code is available at https://github.com/yunfanLu/UniINR.
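The core idea in the abstract, an implicit neural representation that maps pixel position and time coordinates directly to color values so that a frame at any timestamp can be decoded, can be sketched as below. This is a minimal illustration, not the authors' code: a tiny random-weight MLP stands in for the learned spatial-temporal representation, and all sizes and function names are illustrative.

```python
import numpy as np

def make_coords(h, w, t):
    """Build normalized (x, y, t) coordinates for every pixel at time t."""
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
    ts = np.full_like(xs, t)
    return np.stack([xs, ys, ts], axis=-1).reshape(-1, 3)  # shape (h*w, 3)

def inr_query(coords, w1, b1, w2, b2):
    """Two-layer MLP: coordinates (N, 3) -> colors (N, 3) in (0, 1)."""
    hidden = np.maximum(coords @ w1 + b1, 0.0)         # ReLU layer
    return 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))   # sigmoid to color range

# Illustrative random weights; in the actual method these would come from
# training the INR on the RS blur frame and paired events.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(3, 64)), np.zeros(64)
w2, b2 = rng.normal(size=(64, 3)), np.zeros(3)

# Querying a frame at an arbitrary timestamp t = 0.5: one forward pass
# per pixel yields a full frame, so any output frame rate is possible.
h, w = 4, 6
frame = inr_query(make_coords(h, w, 0.5), w1, b1, w2, b2).reshape(h, w, 3)
```

Because the time coordinate `t` is a continuous input rather than a discrete frame index, decoding 31 interpolated frames just means evaluating the same network at 31 timestamps, which is why a single lightweight model can serve correction, deblurring, and interpolation at once.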

Related Papers

Unsupervised Part Discovery via Descriptor-Based Masked Image Restoration with Optimized Constraints (2025-07-16)
Generative Latent Kernel Modeling for Blind Motion Deblurring (2025-07-12)
LD-RPS: Zero-Shot Unified Image Restoration via Latent Diffusion Recurrent Posterior Sampling (2025-07-01)
Double-Diffusion: Diffusion Conditioned Diffusion Probabilistic Model For Air Quality Prediction (2025-06-29)
EAMamba: Efficient All-Around Vision State Space Model for Image Restoration (2025-06-27)
Wild refitting for black box prediction (2025-06-26)
Elucidating and Endowing the Diffusion Training Paradigm for General Image Restoration (2025-06-26)
Dynamic Bandwidth Allocation for Hybrid Event-RGB Transmission (2025-06-25)