Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Physics-based Noise Modeling for Extreme Low-light Photography

Kaixuan Wei, Ying Fu, Yinqiang Zheng, Jiaolong Yang

2021-08-04 · Denoising · Image Denoising

Abstract

Enhancing visibility in extreme low-light environments is a challenging task. Under nearly lightless conditions, existing image denoising methods can easily break down due to the significantly low signal-to-noise ratio (SNR). In this paper, we systematically study the noise statistics in the imaging pipeline of CMOS photosensors, and formulate a comprehensive noise model that can accurately characterize the real noise structures. Our novel model considers the noise sources caused by digital camera electronics, which are largely overlooked by existing methods yet have significant influence on raw measurements in the dark. It provides a way to decouple the intricate noise structure into different statistical distributions with physical interpretations. Moreover, our noise model can be used to synthesize realistic training data for learning-based low-light denoising algorithms. In this regard, although promising results have been shown recently with deep convolutional neural networks, their success heavily depends on abundant noisy-clean image pairs for training, which are tremendously difficult to obtain in practice. Generalizing their trained models to images from new devices is also problematic. Extensive experiments on multiple low-light denoising datasets -- including a newly collected one in this work covering various devices -- show that a deep neural network trained with our proposed noise formation model can reach surprisingly high accuracy. The results are on par with, and sometimes even outperform, training with paired real data, opening a new door to real-world extreme low-light photography.
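The abstract describes decoupling sensor noise into physically interpretable statistical components and sampling from them to synthesize training data. A minimal sketch of that idea is below, using a simplified physics-based pipeline (Poisson shot noise, Gaussian read noise, per-row banding noise, and ADC quantization). This is an illustrative assumption, not the paper's exact formulation — ELD models read noise with a long-tailed Tukey-lambda distribution and calibrates parameters per camera, which this sketch does not do. All parameter names (`K`, `sigma_read`, `sigma_row`) are hypothetical placeholders.

```python
import numpy as np

def synthesize_low_light_noise(clean_raw, K=0.5, sigma_read=2.0,
                               sigma_row=1.0, bits=14, rng=None):
    """Add simplified physics-based sensor noise to a clean raw image.

    clean_raw  : 2D array of noise-free raw intensities (in digital numbers, DN)
    K          : system gain (DN per electron) -- hypothetical value
    sigma_read : read-noise std in DN (Gaussian here; ELD uses Tukey-lambda)
    sigma_row  : row-noise std in DN, shared along each sensor row
    bits       : ADC bit depth, used only for the final clipping range
    """
    rng = np.random.default_rng() if rng is None else rng
    # Shot noise: photon/electron arrivals are Poisson-distributed,
    # so sample electron counts and convert back to DN via the gain.
    electrons = rng.poisson(clean_raw / K)
    noisy = electrons * K
    # Read noise from the readout electronics (simplified to Gaussian).
    noisy = noisy + rng.normal(0.0, sigma_read, clean_raw.shape)
    # Row noise: one offset per sensor row, producing horizontal banding.
    noisy = noisy + rng.normal(0.0, sigma_row, (clean_raw.shape[0], 1))
    # Quantization by the analog-to-digital converter (1 DN step).
    noisy = np.round(noisy)
    return np.clip(noisy, 0, 2 ** bits - 1)
```

A pair `(synthesize_low_light_noise(x), x)` can then serve as a synthetic noisy-clean training example, replacing hard-to-capture paired real data.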

Results

Task            | Dataset           | Metric     | Value | Model
----------------|-------------------|------------|-------|------
Denoising       | SID SonyA7S2 x250 | PSNR (Raw) | 39.44 | ELD
Denoising       | SID SonyA7S2 x250 | SSIM (Raw) | 0.931 | ELD
Denoising       | SID x100          | PSNR (Raw) | 41.95 | ELD
Denoising       | SID x100          | SSIM       | 0.963 | ELD
Denoising       | SID x300          | PSNR (Raw) | 36.36 | ELD
Denoising       | SID x300          | SSIM       | 0.911 | ELD
Denoising       | SID SonyA7S2 x100 | PSNR (Raw) | 41.95 | ELD
Denoising       | SID SonyA7S2 x100 | SSIM (Raw) | 0.953 | ELD
Image Denoising | SID SonyA7S2 x250 | PSNR (Raw) | 39.44 | ELD
Image Denoising | SID SonyA7S2 x250 | SSIM (Raw) | 0.931 | ELD
Image Denoising | SID x100          | PSNR (Raw) | 41.95 | ELD
Image Denoising | SID x100          | SSIM       | 0.963 | ELD
Image Denoising | SID x300          | PSNR (Raw) | 36.36 | ELD
Image Denoising | SID x300          | SSIM       | 0.911 | ELD
Image Denoising | SID SonyA7S2 x100 | PSNR (Raw) | 41.95 | ELD
Image Denoising | SID SonyA7S2 x100 | SSIM (Raw) | 0.953 | ELD
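The table reports PSNR (peak signal-to-noise ratio) on raw-domain reconstructions. For reference, PSNR is computed from the mean squared error between the denoised and ground-truth images; a minimal implementation (not taken from the ELD codebase) is:

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images.

    ref, test : arrays of the same shape, values in [0, max_val]
    max_val   : maximum possible pixel value (1.0 for normalized raw data)
    """
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher is better: the x250 and x300 settings (heavier low-light amplification, hence stronger noise) show lower PSNR than x100, as expected.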

Related Papers

- fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting (2025-07-17)
- Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models (2025-07-17)
- Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
- HUG-VAS: A Hierarchical NURBS-Based Generative Model for Aortic Geometry Synthesis and Controllable Editing (2025-07-15)
- AirLLM: Diffusion Policy-based Adaptive LoRA for Remote Fine-Tuning of LLM over the Air (2025-07-15)
- A statistical physics framework for optimal learning (2025-07-10)
- LangMamba: A Language-driven Mamba Framework for Low-dose CT Denoising with Vision-language Models (2025-07-08)
- Unconditional Diffusion for Generative Sequential Recommendation (2025-07-08)