Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Revisiting Image Deblurring with an Efficient ConvNet

Lingyan Ruan, Mojtaba Bemana, Hans-Peter Seidel, Karol Myszkowski, Bin Chen

2023-02-04 · Deblurring · Image Defocus Deblurring · Image Deblurring
Paper · PDF · Code (official)

Abstract

Image deblurring aims to recover the latent sharp image from its blurry counterpart and has a wide range of applications in computer vision. Convolutional Neural Networks (CNNs) performed well in this domain for many years, until an alternative architecture, the Transformer, recently demonstrated even stronger performance. Its superiority can be attributed to the multi-head self-attention (MHSA) mechanism, which offers a larger receptive field and better adaptability to input content than CNNs. However, because the computational cost of MHSA grows quadratically with input resolution, it becomes impractical for high-resolution image deblurring tasks. In this work, we propose a unified lightweight CNN that features a large effective receptive field (ERF) and performs comparably to or better than Transformers at lower computational cost. Our key design is an efficient CNN block dubbed LaKD, equipped with a large-kernel depth-wise convolution and a spatial-channel mixing structure, attaining an ERF comparable to or larger than that of Transformers with a smaller parameter count. Specifically, we achieve +0.17 dB / +0.43 dB PSNR over the state-of-the-art Restormer on defocus / motion deblurring benchmark datasets with 32% fewer parameters and 39% fewer MACs. Extensive experiments demonstrate the superior performance of our network and the effectiveness of each module. Furthermore, we propose a compact and intuitive ERFMeter metric that quantitatively characterizes the ERF and correlates strongly with network performance. We hope this work inspires the research community to further explore the pros and cons of CNN and Transformer architectures beyond image deblurring.
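The quadratic-versus-linear cost argument in the abstract can be made concrete with a back-of-the-envelope MAC count. The sketch below, in plain Python, compares how global self-attention and a large-kernel depth-wise convolution scale with resolution; the channel width and 31×31 kernel size are illustrative assumptions, not the paper's exact configuration.

```python
def mhsa_macs(h, w, c):
    """Approximate MACs for one global self-attention layer:
    the QK^T product and the attention-weighted V each cost
    (H*W)^2 * C multiply-accumulates over H*W spatial tokens."""
    n = h * w
    return 2 * n * n * c

def depthwise_macs(h, w, c, k):
    """Approximate MACs for one k x k depth-wise convolution:
    each of the H*W*C outputs needs k*k multiply-accumulates."""
    return h * w * c * k * k

# Illustrative settings (assumptions, not the paper's config):
c, k = 64, 31

for side in (64, 128, 256):
    a = mhsa_macs(side, side, c)
    d = depthwise_macs(side, side, c, k)
    print(f"{side}x{side}: MHSA {a:.2e} MACs, {k}x{k} depth-wise {d:.2e} MACs")
```

Doubling the image side multiplies the attention cost by 16 but the depth-wise cost only by 4, which is the scaling gap the paper exploits for high-resolution inputs.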

Results

| Task | Dataset | Metric | Value | Model |
| Deblurring | RealBlur-R (trained on GoPro) | PSNR (sRGB) | 36.08 | LaKDNet |
| Deblurring | RealBlur-R (trained on GoPro) | SSIM (sRGB) | 0.955 | LaKDNet |
| Deblurring | HIDE (trained on GoPro) | PSNR (sRGB) | 31.58 | LaKDNet |
| Deblurring | HIDE (trained on GoPro) | Params (M) | 37.5 | LaKDNet |
| Deblurring | HIDE (trained on GoPro) | SSIM (sRGB) | 0.946 | LaKDNet |
| Image Deblurring | GoPro | PSNR | 33.72 | LaKDNet |
| Image Deblurring | GoPro | Params (M) | 37.5 | LaKDNet |
| Image Deblurring | GoPro | SSIM | 0.967 | LaKDNet |
| Blind Image Deblurring | RealBlur-R (trained on GoPro) | PSNR (sRGB) | 36.08 | LaKDNet |
| Blind Image Deblurring | RealBlur-R (trained on GoPro) | SSIM (sRGB) | 0.955 | LaKDNet |
| Blind Image Deblurring | HIDE (trained on GoPro) | PSNR (sRGB) | 31.58 | LaKDNet |
| Blind Image Deblurring | HIDE (trained on GoPro) | Params (M) | 37.5 | LaKDNet |
| Blind Image Deblurring | HIDE (trained on GoPro) | SSIM (sRGB) | 0.946 | LaKDNet |
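The PSNR figures reported for these benchmarks follow the standard definition 10·log10(peak² / MSE) over pixel values. A minimal sketch in plain Python (the flattened toy "images" and the 8-bit peak of 255 are illustrative assumptions; benchmark scores are computed over full sRGB images):

```python
import math

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    Higher is better; identical inputs give infinite PSNR."""
    diffs = [(float(a) - float(b)) ** 2 for a, b in zip(reference, restored)]
    mse = sum(diffs) / len(diffs)
    if mse == 0.0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / mse)

# Toy example: an 8-bit image (flattened to a list) off by 1 at every pixel.
ref = [0] * 16
out = [1] * 16
print(round(psnr(ref, out), 2))  # MSE = 1, so 10 * log10(255^2) ≈ 48.13
```

SSIM, the table's other quality metric, is a separate structural-similarity measure and is not shown here.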

Related Papers

MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Non-Adaptive Adversarial Face Generation (2025-07-16)
Attributes Shape the Embedding Space of Face Recognition Models (2025-07-15)
COLIBRI Fuzzy Model: Color Linguistic-Based Representation and Interpretation (2025-07-15)
Ref-Long: Benchmarking the Long-context Referencing Capability of Long-context Language Models (2025-07-13)
Generative Latent Kernel Modeling for Blind Motion Deblurring (2025-07-12)
Model Parallelism With Subnetwork Data Parallelism (2025-07-11)
Bradley-Terry and Multi-Objective Reward Modeling Are Complementary (2025-07-10)