
Make Explicit Calibration Implicit: Calibrate Denoiser Instead of the Noise Model

Xin Jin, Jia-Wen Xiao, Ling-Hao Han, Chunle Guo, Xialei Liu, Chongyi Li, Ming-Ming Cheng

2023-08-07 · ICCV 2023 · Tasks: Denoising, Image Denoising
Links: Paper · PDF · Code (official)

Abstract

Explicit calibration-based methods have dominated RAW image denoising under extremely low-light environments. However, these methods are impeded by several critical limitations: a) the explicit calibration process is both labor- and time-intensive, b) transferring denoisers across different camera models is challenging, and c) the disparity between synthetic and real noise is exacerbated by digital gain. To address these issues, we introduce a groundbreaking pipeline named Lighting Every Darkness (LED), which is effective regardless of the digital gain or the camera sensor. LED eliminates the need for explicit noise model calibration, instead utilizing an implicit fine-tuning process that allows quick deployment and requires minimal data. Structural modifications are also included to reduce the discrepancy between synthetic and real noise without extra computational demands. Our method surpasses existing methods on various camera models, including new ones not in public datasets, with just a few pairs per digital gain and only 0.5% of the typical iterations. Furthermore, LED also allows researchers to focus more on deep learning advancements while still utilizing sensor engineering benefits. Code and related materials can be found at https://srameo.github.io/projects/led-iccv23/ .
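The abstract describes a two-stage recipe: pre-train a denoiser on cheap synthetic noise, then, instead of calibrating an explicit noise model for each new camera, fine-tune the denoiser itself on a few real noisy/clean pairs. The toy sketch below illustrates that idea with a per-pixel affine "denoiser" in NumPy; the model, hyper-parameters, and noise statistics are illustrative assumptions, not the paper's actual architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise(x, w, b):
    """Toy denoiser: a per-pixel affine map (stand-in for a real network)."""
    return w * x + b

def train_step(w, b, noisy, clean, lr=0.1):
    """One SGD step on the MSE between the denoised output and the clean target."""
    err = denoise(noisy, w, b) - clean
    grad_w = 2.0 * np.mean(err * noisy)
    grad_b = 2.0 * np.mean(err)
    return w - lr * grad_w, b - lr * grad_b

# Stage 1: pre-train on synthetic Gaussian read noise (a stand-in for
# noise synthesized from virtual camera models).
w, b = 1.0, 0.0
clean = rng.uniform(0.0, 1.0, size=64)
for _ in range(200):
    noisy = clean + rng.normal(0.0, 0.1, size=clean.shape)
    w, b = train_step(w, b, noisy, clean)

# Stage 2: implicit "calibration" -- fine-tune on just a few real pairs from
# a new camera whose noise is biased in a way the synthetic model missed,
# using far fewer iterations than pre-training.
real_clean = rng.uniform(0.0, 1.0, size=64)
real_noisy = real_clean + rng.normal(0.05, 0.12, size=real_clean.shape)
for _ in range(50):
    w, b = train_step(w, b, real_noisy, real_clean)

mse = float(np.mean((denoise(real_noisy, w, b) - real_clean) ** 2))
print(mse)
```

The point of the sketch is the shape of the workflow, not the numbers: the expensive, camera-specific step (explicit noise calibration) is replaced by a short fine-tuning loop on a handful of paired captures.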

Results

Task             Dataset              Metric       Value   Model
Denoising        SID SonyA7S2 x100    PSNR (Raw)   41.98   LED
Denoising        SID SonyA7S2 x100    SSIM (Raw)   0.954   LED
Denoising        SID SonyA7S2 x250    PSNR (Raw)   39.34   LED
Denoising        SID SonyA7S2 x250    SSIM (Raw)   0.932   LED
Denoising        SID SonyA7S2 x300    PSNR (Raw)   36.67   LED
Denoising        SID SonyA7S2 x300    SSIM (Raw)   0.915   LED
Image Denoising  SID SonyA7S2 x100    PSNR (Raw)   41.98   LED
Image Denoising  SID SonyA7S2 x100    SSIM (Raw)   0.954   LED
Image Denoising  SID SonyA7S2 x250    PSNR (Raw)   39.34   LED
Image Denoising  SID SonyA7S2 x250    SSIM (Raw)   0.932   LED
Image Denoising  SID SonyA7S2 x300    PSNR (Raw)   36.67   LED
Image Denoising  SID SonyA7S2 x300    SSIM (Raw)   0.915   LED

Related Papers

- fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting (2025-07-17)
- Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models (2025-07-17)
- Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
- HUG-VAS: A Hierarchical NURBS-Based Generative Model for Aortic Geometry Synthesis and Controllable Editing (2025-07-15)
- AirLLM: Diffusion Policy-based Adaptive LoRA for Remote Fine-Tuning of LLM over the Air (2025-07-15)
- A statistical physics framework for optimal learning (2025-07-10)
- LangMamba: A Language-driven Mamba Framework for Low-dose CT Denoising with Vision-language Models (2025-07-08)
- Unconditional Diffusion for Generative Sequential Recommendation (2025-07-08)