
Deformable Convolutions and LSTM-based Flexible Event Frame Fusion Network for Motion Deblurring

Dan Yang, Mehmet Yamac

2023-06-01 · Deblurring · Image Deblurring
Paper · PDF

Abstract

Event cameras differ from conventional RGB cameras in that they produce asynchronous data sequences. While RGB cameras capture every frame at a fixed rate, event cameras capture only changes in the scene, resulting in sparse and asynchronous output. Although event data carries useful information for motion deblurring of RGB cameras, integrating event and image information remains a challenge. Recent state-of-the-art CNN-based deblurring solutions produce multiple 2-D event frames by accumulating event data over a time period. In most of these techniques, however, the number of event frames is fixed and predefined, which drastically reduces temporal resolution, particularly when fast-moving objects are present or longer exposure times are required. Moreover, modern cameras (e.g., those in mobile phones) set the exposure time dynamically, which poses an additional problem for networks designed around a fixed number of event frames. To address these challenges, a Long Short-Term Memory (LSTM)-based event feature extraction module has been developed, which enables the use of a dynamically varying number of event frames. Using these modules, we construct a state-of-the-art deblurring network, the Deformable Convolutions and LSTM-based Flexible Event Frame Fusion Network (DLEFNet). It is particularly useful for scenarios in which exposure times vary with factors such as lighting conditions or the presence of fast-moving objects in the scene. Evaluation results demonstrate that the proposed method outperforms existing state-of-the-art networks on the deblurring task on both synthetic and real-world datasets.
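The two ideas the abstract leans on, accumulating asynchronous events into 2-D event frames and fusing a variable number of such frames with an LSTM, can be sketched in a few lines of PyTorch. The sketch below is illustrative only: `events_to_frames`, `ConvLSTMFusion`, and all shapes and channel counts are hypothetical stand-ins, not the authors' DLEFNet code.

```python
# A minimal PyTorch sketch of the two ideas described above: (1) accumulating
# asynchronous events into a chosen number of 2-D event frames, and (2) an
# LSTM-style recurrent module that fuses however many frames an exposure
# produced into one fixed-size feature. All names, shapes, and channel counts
# here (events_to_frames, ConvLSTMFusion, 32 channels) are hypothetical
# stand-ins, not the authors' DLEFNet implementation.
import torch
import torch.nn as nn


def events_to_frames(events, height, width, num_frames):
    """Accumulate asynchronous events (t, x, y, polarity) into
    num_frames signed 2-D histograms over the exposure window."""
    frames = torch.zeros(num_frames, height, width)
    t = events[:, 0]
    # Normalize timestamps into [0, 1) and bucket them into temporal bins.
    t_norm = (t - t.min()) / (t.max() - t.min() + 1e-9)
    bin_idx = (t_norm * num_frames).long().clamp(max=num_frames - 1)
    x, y, p = events[:, 1].long(), events[:, 2].long(), events[:, 3]
    frames.index_put_((bin_idx, y, x), p, accumulate=True)
    return frames  # (num_frames, H, W)


class ConvLSTMFusion(nn.Module):
    """Fuses a variable-length sequence of event frames into one feature
    map by running a standard ConvLSTM cell over the temporal axis."""

    def __init__(self, channels=32):
        super().__init__()
        self.encode = nn.Conv2d(1, channels, 3, padding=1)
        # One conv produces all four LSTM gates from [input, hidden].
        self.gates = nn.Conv2d(2 * channels, 4 * channels, 3, padding=1)
        self.channels = channels

    def forward(self, frames):  # frames: (T, H, W), T may vary per sample
        T, H, W = frames.shape
        h = torch.zeros(1, self.channels, H, W)
        c = torch.zeros_like(h)
        for step in range(T):  # one recurrent step per event frame
            x = self.encode(frames[step].view(1, 1, H, W))
            i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
        return h  # same shape no matter how many frames came in


# The same module handles 8 or 20 event frames, i.e. different exposures.
events = torch.rand(5000, 4) * torch.tensor([1.0, 127.0, 127.0, 1.0])
events[:, 3] = torch.sign(events[:, 3] - 0.5)  # polarity in {-1, +1}
fusion = ConvLSTMFusion()
for n in (8, 20):
    feat = fusion(events_to_frames(events, 128, 128, n))
    print(n, tuple(feat.shape))  # -> (1, 32, 128, 128) in both cases
```

Because the recurrence consumes one frame per step, the output feature has the same shape whether the exposure yielded 8 frames or 20, which is the property that frees a fixed-frame-count CNN head from the exposure-time assumption.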

Results

Task        Dataset   Metric      Value   Model
Deblurring  GoPro     PSNR (dB)   35.61   DLEFNet
Deblurring  GoPro     SSIM        0.973   DLEFNet
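For context on the numbers above: PSNR is a log-scale measure of per-pixel reconstruction error. The sketch below is the generic textbook definition, not the authors' evaluation pipeline; the test images and noise level are made up purely to show the scale of error a mid-30s dB score implies.

```python
# Standard peak signal-to-noise ratio, the metric reported above (generic
# definition, not the paper's evaluation script), for images scaled to [0, 1].
import torch


def psnr(pred, target, max_val=1.0):
    """PSNR = 10 * log10(max_val**2 / MSE), reported in dB. 35.61 dB
    corresponds to an RMS pixel error of about 10 ** (-35.61 / 20) ~= 0.017."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)


sharp = torch.rand(3, 256, 256)
restored = (sharp + 0.017 * torch.randn_like(sharp)).clamp(0, 1)
print(float(psnr(restored, sharp)))  # roughly 35 dB at this error level
```

SSIM, the table's other metric, measures local structural similarity rather than raw pixel error; PSNR and SSIM together are the standard pair for the GoPro deblurring benchmark.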

Related Papers

Generative Latent Kernel Modeling for Blind Motion Deblurring (2025-07-12)
EAMamba: Efficient All-Around Vision State Space Model for Image Restoration (2025-06-27)
Dynamic Bandwidth Allocation for Hybrid Event-RGB Transmission (2025-06-25)
Visual-Instructed Degradation Diffusion for All-in-One Image Restoration (2025-06-20)
R3eVision: A Survey on Robust Rendering, Restoration, and Enhancement for 3D Low-Level Vision (2025-06-19)
Unsupervised Imaging Inverse Problems with Diffusion Distribution Matching (2025-06-17)
Restoring Gaussian Blurred Face Images for Deanonymization Attacks (2025-06-14)
Plug-and-Play Linear Attention for Pre-trained Image and Video Restoration Models (2025-06-10)