Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ExposureDiffusion: Learning to Expose for Low-light Image Enhancement

YuFei Wang, Yi Yu, Wenhan Yang, Lanqing Guo, Lap-Pui Chau, Alex C. Kot, Bihan Wen

2023-07-15 · ICCV 2023
Tasks: Denoising · Image Denoising · Image Enhancement · Low-Light Image Enhancement
Paper · PDF · Code (official)

Abstract

Previous raw image-based low-light image enhancement methods predominantly relied on feed-forward neural networks to learn deterministic mappings from low-light to normally-exposed images. However, they failed to capture critical distribution information, leading to visually undesirable results. This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model. Unlike a vanilla diffusion model, which must perform Gaussian denoising, our restoration process, with the injected physics-based exposure model, can start directly from a noisy image instead of pure noise. As a result, our method achieves significantly improved performance and reduced inference time compared with vanilla diffusion models. To make full use of the advantages of different intermediate steps, we further propose an adaptive residual layer that effectively screens out side-effects of the iterative refinement once the intermediate results are already well exposed. The proposed framework is compatible with real-paired datasets, real/synthetic noise models, and different backbone networks. We evaluate the proposed method on various public benchmarks, achieving promising results with consistent improvements across different exposure models and backbones. In addition, the proposed method generalizes better to unseen amplification ratios and, with few parameters, outperforms a larger feed-forward neural model.
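The scheme described in the abstract (starting refinement from the noisy low-light input rather than pure noise, alternating a physics-based exposure step with learned denoising, and gating further updates through an adaptive residual weight) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: `denoiser`, the exposure schedule, and the mean-brightness gate are hypothetical stand-ins.

```python
import numpy as np

def exposure_diffusion_sketch(noisy_raw, denoiser, num_steps=4, ratio=2.0):
    """Rough sketch: progressively brighten and denoise a raw image.

    noisy_raw : low-light raw image, values in [0, 1]
    denoiser  : callable(image, step) -> denoised image (stand-in for the
                backbone network; any denoiser with this signature works)
    ratio     : per-step exposure amplification (hypothetical schedule)
    """
    x = np.asarray(noisy_raw, dtype=np.float64)
    for t in range(num_steps):
        # Physics-inspired exposure step: amplify toward normal exposure.
        exposed = np.clip(x * ratio, 0.0, 1.0)
        # Learned refinement of the amplified estimate.
        refined = denoiser(exposed, t)
        # Adaptive residual gate (hypothetical heuristic): suppress further
        # changes once the intermediate result is already well exposed.
        gate = 1.0 - float(exposed.mean())  # near 0 when already bright
        x = gate * refined + (1.0 - gate) * exposed
    return x
```

With an identity denoiser, the loop simply brightens the input over a few steps while the gate shrinks, which mirrors how the adaptive residual layer is meant to damp the iteration for well-exposed intermediates.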

Results

Task            | Dataset           | Metric     | Value | Model
Denoising       | ELD SonyA7S2 x200 | PSNR (Raw) | 40.39 | ExposureDiffusion (UNet+ELD)
Denoising       | SID x300          | PSNR (Raw) | 36.82 | ExposureDiffusion (UNet+paired data)
Image Denoising | ELD SonyA7S2 x200 | PSNR (Raw) | 40.39 | ExposureDiffusion (UNet+ELD)
Image Denoising | SID x300          | PSNR (Raw) | 36.82 | ExposureDiffusion (UNet+paired data)

Related Papers

fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting (2025-07-17)
Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
HUG-VAS: A Hierarchical NURBS-Based Generative Model for Aortic Geometry Synthesis and Controllable Editing (2025-07-15)
AirLLM: Diffusion Policy-based Adaptive LoRA for Remote Fine-Tuning of LLM over the Air (2025-07-15)
A statistical physics framework for optimal learning (2025-07-10)
HVI-CIDNet+: Beyond Extreme Darkness for Low-Light Image Enhancement (2025-07-09)
LangMamba: A Language-driven Mamba Framework for Low-dose CT Denoising with Vision-language Models (2025-07-08)