

HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields

Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Ricardo Martin-Brualla, Steven M. Seitz

2021-06-24 · Novel View Synthesis · Dynamic Reconstruction

Abstract

Neural Radiance Fields (NeRF) are able to reconstruct scenes with unprecedented fidelity, and various recent works have extended NeRF to handle dynamic scenes. A common approach to reconstruct such non-rigid scenes is through the use of a learned deformation field mapping from coordinates in each input image into a canonical template coordinate space. However, these deformation-based approaches struggle to model changes in topology, as topological changes require a discontinuity in the deformation field, but these deformation fields are necessarily continuous. We address this limitation by lifting NeRFs into a higher dimensional space, and by representing the 5D radiance field corresponding to each individual input image as a slice through this "hyper-space". Our method is inspired by level set methods, which model the evolution of surfaces as slices through a higher dimensional surface. We evaluate our method on two tasks: (i) interpolating smoothly between "moments", i.e., configurations of the scene, seen in the input images while maintaining visual plausibility, and (ii) novel-view synthesis at fixed moments. We show that our method, which we dub HyperNeRF, outperforms existing methods on both tasks. Compared to Nerfies, HyperNeRF reduces average error rates by 4.1% for interpolation and 8.6% for novel-view synthesis, as measured by LPIPS. Additional videos, results, and visualizations are available at https://hypernerf.github.io.
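To make the abstract's core idea concrete, here is a minimal sketch (not the authors' code) of the HyperNeRF architecture it describes: a continuous deformation MLP warps each observed point into a canonical space, a second "ambient slice" MLP predicts extra hyper-space coordinates for that point, and the canonical template NeRF is queried at the concatenated coordinates, so topological changes can be expressed as motion through the ambient dimensions rather than as discontinuities in the warp. All network sizes, the latent and ambient dimensions, and the class/function names are illustrative assumptions, and view-direction conditioning is omitted for brevity.

import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=128, depth=4):
    # Plain fully connected stack; layer sizes are placeholders,
    # not the paper's configuration.
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class HyperNeRFSketch(nn.Module):
    def __init__(self, latent_dim=8, ambient_dim=2):
        super().__init__()
        self.deform = mlp(3 + latent_dim, 3)           # per-frame warp into canonical space
        self.slice = mlp(3 + latent_dim, ambient_dim)  # per-frame hyper-space coordinates w
        self.template = mlp(3 + ambient_dim, 4)        # canonical NeRF: (RGB, density)

    def forward(self, x, frame_latent):
        # x: (N, 3) sample points; frame_latent: (N, latent_dim) per-frame code.
        h = torch.cat([x, frame_latent], dim=-1)
        x_canonical = x + self.deform(h)  # necessarily continuous deformation field
        w = self.slice(h)                 # selects a slice through the hyper-space
        out = self.template(torch.cat([x_canonical, w], dim=-1))
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3])
        return rgb, sigma

# Usage: query 1024 random points under one frame's latent code.
model = HyperNeRFSketch()
pts = torch.rand(1024, 3)
z = torch.zeros(1024, 8)
rgb, sigma = model(pts, z)

The key design point the abstract emphasizes is visible in the forward pass: self.deform stays continuous, while the extra coordinates w let two frames with different scene topology land in different slices of the higher-dimensional template, echoing how level set methods model evolving surfaces as slices of a higher-dimensional surface.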

Results

Task                   | Dataset                                    | Metric | Value | Model
Dynamic Reconstruction | iPhone (Monocular Dynamic View Synthesis)  | LPIPS  | 0.51  | HyperNeRF

Related Papers

Physically Based Neural LiDAR Resimulation (2025-07-15)
MoVieS: Motion-Aware 4D Dynamic View Synthesis in One Second (2025-07-14)
Cameras as Relative Positional Encoding (2025-07-14)
LighthouseGS: Indoor Structure-aware 3D Gaussian Splatting for Panorama-Style Mobile Captures (2025-07-08)
Reflections Unlock: Geometry-Aware Reflection Disentanglement in 3D Gaussian Splatting for Photorealistic Scenes Rendering (2025-07-08)
Outdoor Monocular SLAM with Global Scale-Consistent 3D Gaussian Pointmaps (2025-07-04)
Refine Any Object in Any Scene (2025-06-30)
VoteSplat: Hough Voting Gaussian Splatting for 3D Scene Understanding (2025-06-28)