Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


HDR image reconstruction from a single exposure using deep CNNs

Gabriel Eilertsen, Joel Kronander, Gyorgy Denes, Rafał K. Mantiuk, Jonas Unger

2017-10-20 · Image Reconstruction · HDR Reconstruction · Inverse Tone Mapping

Paper · PDF · Code (official)

Abstract

Camera sensors can only capture a limited range of luminance simultaneously, and in order to create high dynamic range (HDR) images a set of different exposures is typically combined. In this paper we address the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure. We show that this problem is well-suited for deep learning algorithms, and propose a deep convolutional neural network (CNN) designed specifically to account for the challenges in predicting HDR values. To train the CNN we gather a large dataset of HDR images, which we augment by simulating sensor saturation for a range of cameras. To further boost robustness, we pre-train the CNN on a simulated HDR dataset created from a subset of the MIT Places database. We demonstrate that our approach can reconstruct high-resolution, visually convincing HDR results in a wide range of situations, and that it generalizes well to reconstruction of images captured with arbitrary and low-end cameras that use unknown camera response functions and post-processing. Furthermore, we compare to existing methods for HDR expansion, and also show high-quality results for image-based lighting. Finally, we evaluate the results in a subjective experiment performed on an HDR display. This shows that the reconstructed HDR images are visually convincing, with large improvements compared to existing methods.
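The abstract describes reconstructing HDR values only where the sensor has clipped, while keeping trusted information from well-exposed pixels. A minimal sketch of that idea is below; the function name, the simple gamma camera response, and the linear blending ramp are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def blend_hdr(ldr, hdr_pred, gamma=2.2, tau=0.95):
    """Blend a network prediction into saturated regions of a single exposure.

    Illustrative sketch (assumed details, not the paper's exact code):
    ldr      -- LDR input in [0, 1], shape (H, W, 3)
    hdr_pred -- predicted linear HDR values for the same pixels
    gamma    -- assumed camera response modeled as a simple gamma curve
    tau      -- threshold above which a pixel is treated as clipped
    """
    # Linearize the LDR input with the assumed inverse camera response.
    lin = ldr ** gamma
    # Per-pixel blend weight: 0 for well-exposed pixels, ramping to 1 as the
    # maximum channel value approaches full saturation.
    max_c = ldr.max(axis=2, keepdims=True)
    alpha = np.clip((max_c - tau) / (1.0 - tau), 0.0, 1.0)
    # Keep trusted linearized values; use the prediction only where clipped.
    return (1.0 - alpha) * lin + alpha * hdr_pred
```

Well-exposed pixels pass through unchanged (only linearized), so the reconstruction cannot degrade regions the camera captured correctly.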

Results

Task | Dataset | Metric | Value | Model
Inverse Tone Mapping | MSU HDR Video Reconstruction Benchmark | HDR-PSNR | 33.02 | HDRCNN
Inverse Tone Mapping | MSU HDR Video Reconstruction Benchmark | HDR-SSIM | 0.9663 | HDRCNN
Inverse Tone Mapping | MSU HDR Video Reconstruction Benchmark | HDR-VQM | 0.1919 | HDRCNN
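Metrics like HDR-PSNR are typically computed after re-encoding linear luminance into a more perceptually uniform domain, since plain PSNR on linear HDR values is dominated by the brightest pixels. The sketch below uses a simple log2 encoding as a stand-in; the MSU benchmark's exact perceptual encoding is not specified here, so treat this only as an illustration of the general recipe.

```python
import numpy as np

def log_psnr(hdr_ref, hdr_test, eps=1e-6):
    """PSNR between two linear-HDR images after log2 encoding.

    NOTE: illustrative assumption -- the MSU benchmark's HDR-PSNR uses its
    own perceptual encoding, which may differ from this simple log transform.
    """
    a = np.log2(np.maximum(hdr_ref, eps))
    b = np.log2(np.maximum(hdr_test, eps))
    peak = a.max() - a.min()          # dynamic range of the reference, in stops
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher is better: identical images give infinite PSNR, and errors in dark regions count the same number of "stops" as equally large relative errors in highlights.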

Related Papers

The model is the message: Lightweight convolutional autoencoders applied to noisy imaging data for planetary science and astrobiology (2025-07-15)
3D Magnetic Inverse Routine for Single-Segment Magnetic Field Images (2025-07-15)
MGVQ: Could VQ-VAE Beat VAE? A Generalizable Tokenizer with Multi-group Quantization (2025-07-14)
Vision Foundation Models as Effective Visual Tokenizers for Autoregressive Image Generation (2025-07-11)
LangMamba: A Language-driven Mamba Framework for Low-dose CT Denoising with Vision-language Models (2025-07-08)
Vision Transformer-Based Time-Series Image Reconstruction for Cloud-Filling Applications (2025-06-24)
Cloud-Aware SAR Fusion for Enhanced Optical Sensing in Space Missions (2025-06-22)
Client Selection Strategies for Federated Semantic Communications in Heterogeneous IoT Networks (2025-06-20)