Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination

Xiuming Zhang, Pratul P. Srinivasan, Boyang Deng, Paul Debevec, William T. Freeman, Jonathan T. Barron

2021-06-03 · Surface Normals Estimation · Surface Reconstruction · Depth Prediction · Inverse Rendering · Image Relighting

Paper · PDF · Code (official)

Abstract

We address the problem of recovering the shape and spatially-varying reflectance of an object from multi-view images (and their camera poses) of an object illuminated by one unknown lighting condition. This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties. The key to our approach, which we call Neural Radiance Factorization (NeRFactor), is to distill the volumetric geometry of a Neural Radiance Field (NeRF) [Mildenhall et al. 2020] representation of the object into a surface representation and then jointly refine the geometry while solving for the spatially-varying reflectance and environment lighting. Specifically, NeRFactor recovers 3D neural fields of surface normals, light visibility, albedo, and Bidirectional Reflectance Distribution Functions (BRDFs) without any supervision, using only a re-rendering loss, simple smoothness priors, and a data-driven BRDF prior learned from real-world BRDF measurements. By explicitly modeling light visibility, NeRFactor is able to separate shadows from albedo and synthesize realistic soft or hard shadows under arbitrary lighting conditions. NeRFactor is able to recover convincing 3D models for free-viewpoint relighting in this challenging and underconstrained capture setup for both synthetic and real scenes. Qualitative and quantitative experiments show that NeRFactor outperforms classic and deep learning-based state of the art across various tasks. Our videos, code, and data are available at people.csail.mit.edu/xiuming/projects/nerfactor/.
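The image formation the abstract describes factorizes outgoing radiance at each surface point into light visibility, a BRDF, the environment lighting, and cosine foreshortening, summed over a discretized environment map. A minimal sketch of that factorized shading step follows; the function name, argument layout, and single-point discretization are illustrative assumptions, not the official NeRFactor implementation:

```python
import math

def render_point(normal, light_dirs, visibility, brdf, light_rgb, solid_angles):
    """Shade one surface point under a discretized environment map.

    normal:       unit surface normal n, as [x, y, z]
    light_dirs:   list of L unit incoming-light directions w_i
    visibility:   list of L light-visibility values in [0, 1]
    brdf:         list of L RGB BRDF values (outgoing direction fixed
                  to the viewing ray)
    light_rgb:    list of L RGB radiance values, one per light direction
    solid_angles: list of L solid angles subtended by each light pixel
    """
    rgb = [0.0, 0.0, 0.0]
    for i, w in enumerate(light_dirs):
        # Cosine foreshortening term max(0, n . w_i).
        cos_term = max(0.0, sum(n * wi for n, wi in zip(normal, w)))
        # Shadowing: visibility gates each light direction independently,
        # which is what lets the model separate shadows from albedo.
        weight = visibility[i] * cos_term * solid_angles[i]
        for c in range(3):
            rgb[c] += brdf[i][c] * light_rgb[i][c] * weight
    return rgb
```

For a Lambertian point (BRDF = albedo / pi) lit head-on by a single fully visible unit-radiance light, this reduces to albedo / pi, which is the expected check on the discretization. Summing the squared difference between such rendered colors and the observed pixels gives the re-rendering loss the abstract refers to.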

Results

Task                       | Dataset      | Metric          | Value | Model
Image Enhancement          | Stanford-ORB | HDR-PSNR        | 23.54 | NeRFactor
Image Enhancement          | Stanford-ORB | LPIPS           | 0.048 | NeRFactor
Image Enhancement          | Stanford-ORB | SSIM            | 0.969 | NeRFactor
Surface Normals Estimation | Stanford-ORB | Cosine Distance | 0.29  | NeRFactor
Inverse Rendering          | Stanford-ORB | HDR-PSNR        | 23.54 | NeRFactor

Related Papers

A Mixed-Primitive-based Gaussian Splatting Method for Surface Reconstruction (2025-07-15)
MonoMVSNet: Monocular Priors Guided Multi-View Stereo Network (2025-07-15)
High-Fidelity and Generalizable Neural Surface Reconstruction with Sparse Feature Volumes (2025-07-08)
Beyond Appearance: Geometric Cues for Robust Video Instance Segmentation (2025-07-08)
LoomNet: Enhancing Multi-View Image Generation via Latent Space Weaving (2025-07-07)
HiNeuS: High-fidelity Neural Surface Mitigating Low-texture and Reflective Ambiguity (2025-06-30)
RoboScape: Physics-informed Embodied World Model (2025-06-29)
SOF: Sorted Opacity Fields for Fast Unbounded Surface Reconstruction (2025-06-23)