Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Bayesian Image Reconstruction using Deep Generative Models

Razvan V Marinescu, Daniel Moyer, Polina Golland

2020-12-08 · Super-Resolution · Image Denoising · Image Reconstruction · Image Super-Resolution · Image Inpainting · Image Restoration · Variational Inference

Paper · PDF · Code (official)

Abstract

Machine learning models are commonly trained end-to-end and in a supervised setting, using paired (input, output) data. Examples include recent super-resolution methods that train on pairs of (low-resolution, high-resolution) images. However, these end-to-end approaches require re-training every time there is a distribution shift in the inputs (e.g., night images vs daylight) or relevant latent variables (e.g., camera blur or hand motion). In this work, we leverage state-of-the-art (SOTA) generative models (here StyleGAN2) for building powerful image priors, which enable application of Bayes' theorem for many downstream reconstruction tasks. Our method, Bayesian Reconstruction through Generative Models (BRGM), uses a single pre-trained generator model to solve different image restoration tasks, i.e., super-resolution and in-painting, by combining it with different forward corruption models. We keep the weights of the generator model fixed, and reconstruct the image by estimating the Bayesian maximum a-posteriori (MAP) estimate over the input latent vector that generated the reconstructed image. We further use variational inference to approximate the posterior distribution over the latent vectors, from which we sample multiple solutions. We demonstrate BRGM on three large and diverse datasets: (i) 60,000 images from the Flickr-Faces-HQ (FFHQ) dataset, (ii) 240,000 chest X-rays from MIMIC III, and (iii) a combined collection of 5 brain MRI datasets with 7,329 scans. Across all three datasets and without any dataset-specific hyperparameter tuning, our simple approach yields performance competitive with current task-specific state-of-the-art methods on super-resolution and in-painting, while being more generalisable and without requiring any training. Our source code and pre-trained models are available online: https://razvanmarinescu.github.io/brgm/.
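
The core idea in the abstract (a frozen generator G, a known forward corruption model f, and a MAP estimate over the latent z that best explains the corrupted observation) can be sketched with a toy example. This is a hedged, minimal illustration only: the real method uses a frozen StyleGAN2 generator and richer pixelwise/perceptual losses, whereas here a fixed random linear map stands in for the generator and 2x average pooling stands in for the super-resolution corruption model, so the gradients are analytic.

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim, hi_res = 4, 16                  # toy latent size and "image" length
W = rng.normal(size=(hi_res, latent_dim))   # frozen toy "generator" weights

def generate(z):
    # G(z): latent vector -> high-resolution signal (stand-in for StyleGAN2)
    return W @ z

def corrupt(x):
    # f(x): forward corruption model, here 2x average downsampling
    return x.reshape(-1, 2).mean(axis=1)

# Ground-truth latent and the corrupted observation we try to invert.
z_true = rng.normal(size=latent_dim)
y = corrupt(generate(z_true))

# MAP estimate: argmin_z ||f(G(z)) - y||^2 + lam * ||z||^2
# (Gaussian likelihood plus a standard-normal prior on z, via Bayes' rule).
# Because both G and f are linear in this toy, f(G(z)) = A z with A below.
A = np.stack([corrupt(W[:, j]) for j in range(latent_dim)], axis=1)
lam = 1e-3
lr = 1.0 / (2 * np.linalg.norm(A, 2) ** 2 + 2 * lam)  # safe step size

z = np.zeros(latent_dim)
for _ in range(1000):
    resid = A @ z - y
    grad = 2 * A.T @ resid + 2 * lam * z    # gradient of the MAP objective
    z -= lr * grad

x_map = generate(z)                          # reconstructed high-res signal
```

The generator stays fixed throughout; only the latent vector is optimized, which mirrors why a single pre-trained model can serve super-resolution, in-painting, or denoising simply by swapping the corruption model f.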

Results

| Task             | Dataset                     | Metric | Value | Model       |
|------------------|-----------------------------|--------|-------|-------------|
| Super-Resolution | FFHQ 256x256, 4x upscaling  | PSNR   | 24.16 | BRGM        |
| Super-Resolution | FFHQ 256x256, 4x upscaling  | SSIM   | 0.7   | BRGM        |
| Image Inpainting | FFHQ 1024x1024              | LPIPS  | 0.19  | BRGM        |
| Image Inpainting | FFHQ 1024x1024              | PSNR   | 21.33 | BRGM        |
| Image Inpainting | FFHQ 1024x1024              | RMSE   | 24.28 | BRGM        |
| Image Inpainting | FFHQ 1024x1024              | SSIM   | 0.84  | BRGM        |
| Image Inpainting | FFHQ 1024x1024              | LPIPS  | 0.24  | SN-PatchGAN |
| Image Inpainting | FFHQ 1024x1024              | PSNR   | 19.67 | SN-PatchGAN |
| Image Inpainting | FFHQ 1024x1024              | RMSE   | 30.75 | SN-PatchGAN |
| Image Inpainting | FFHQ 1024x1024              | SSIM   | 0.82  | SN-PatchGAN |
| Image Denoising  | FFHQ 64x64, 4x upscaling    | LPIPS  | 0.24  | BRGM        |
| Image Denoising  | FFHQ                        | LPIPS  | 0.24  | BRGM        |

Related Papers

- SpectraLift: Physics-Guided Spectral-Inversion Network for Self-Supervised Hyperspectral Image Super-Resolution (2025-07-17)
- Unsupervised Part Discovery via Descriptor-Based Masked Image Restoration with Optimized Constraints (2025-07-16)
- The model is the message: Lightweight convolutional autoencoders applied to noisy imaging data for planetary science and astrobiology (2025-07-15)
- 3D Magnetic Inverse Routine for Single-Segment Magnetic Field Images (2025-07-15)
- Interpretable Bayesian Tensor Network Kernel Machines with Automatic Rank and Feature Selection (2025-07-15)
- IM-LUT: Interpolation Mixing Look-Up Tables for Image Super-Resolution (2025-07-14)
- MGVQ: Could VQ-VAE Beat VAE? A Generalizable Tokenizer with Multi-group Quantization (2025-07-14)
- PanoDiff-SR: Synthesizing Dental Panoramic Radiographs using Diffusion and Super-resolution (2025-07-12)