Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Editing Conditional Radiance Fields

Steven Liu, Xiuming Zhang, Zhoutong Zhang, Richard Zhang, Jun-Yan Zhu, Bryan Russell

13 May 2021 · ICCV 2021 · Novel View Synthesis

Paper · PDF · Code (official)

Abstract

A neural radiance field (NeRF) is a scene model supporting high-quality view synthesis, optimized per scene. In this paper, we explore enabling user editing of a category-level NeRF - also known as a conditional radiance field - trained on a shape category. Specifically, we introduce a method for propagating coarse 2D user scribbles to the 3D space, to modify the color or shape of a local region. First, we propose a conditional radiance field that incorporates new modular network components, including a shape branch that is shared across object instances. Observing multiple instances of the same category, our model learns underlying part semantics without any supervision, thereby allowing the propagation of coarse 2D user scribbles to the entire 3D region (e.g., chair seat). Next, we propose a hybrid network update strategy that targets specific network components, which balances efficiency and accuracy. During user interaction, we formulate an optimization problem that both satisfies the user's constraints and preserves the original object structure. We demonstrate our approach on various editing tasks over three shape datasets and show that it outperforms prior neural editing approaches. Finally, we edit the appearance and shape of a real photograph and show that the edit propagates to extrapolated novel views.
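The conditional radiance field described above can be sketched minimally: a field that maps a 3D point to density and color, conditioned on a shared shape code and a separate color code, composited along rays with standard NeRF quadrature. The latent-code sizes, layer widths, and random toy weights below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent codes: a shape code and a color code per instance
# (the 16-dim sizes are assumptions for illustration).
z_shape = rng.normal(size=16)
z_color = rng.normal(size=16)

# Toy weights: a shape branch shared across instances, plus a color head.
W_shape = rng.normal(size=(3 + 16, 32)) * 0.1   # input: 3D point + shape code
W_sigma = rng.normal(size=(32, 1)) * 0.1        # density head
W_color = rng.normal(size=(32 + 16, 3)) * 0.1   # features + color code -> RGB

def field(x):
    """Map a 3D point to (density, rgb), conditioned on the latent codes."""
    h = np.tanh(np.concatenate([x, z_shape]) @ W_shape)     # shared shape branch
    sigma = np.log1p(np.exp(h @ W_sigma))[0]                # softplus density
    rgb = 1.0 / (1.0 + np.exp(-(np.concatenate([h, z_color]) @ W_color)))
    return sigma, rgb

def render_ray(origin, direction, n_samples=32, t_near=0.0, t_far=4.0):
    """Alpha-composite field samples along one ray (NeRF volume rendering)."""
    ts = np.linspace(t_near, t_far, n_samples)
    dt = ts[1] - ts[0]
    color, transmittance = np.zeros(3), 1.0
    for t in ts:
        sigma, rgb = field(origin + t * direction)
        alpha = 1.0 - np.exp(-sigma * dt)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
```

Editing a color or shape then amounts to optimizing the relevant code (or a targeted subset of weights, per the paper's hybrid update strategy) against the user's scribble constraints while keeping the rest fixed.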

Results

Task                  Dataset             Metric  Value   Model
Novel View Synthesis  Dosovitskiy Chairs  LPIPS   0.141   Single NeRF + Share./Inst. Net
Novel View Synthesis  Dosovitskiy Chairs  PSNR    21.78   Single NeRF + Share./Inst. Net
Novel View Synthesis  PhotoShape          LPIPS   0.022   Single NeRF + Share./Inst. Net
Novel View Synthesis  PhotoShape          PSNR    37.67   Single NeRF + Share./Inst. Net
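Of the two metrics above, LPIPS is a learned perceptual distance (lower is better) and requires a pretrained network, while PSNR (higher is better) follows directly from mean squared error: PSNR = 10 · log10(MAX² / MSE). A minimal sketch, using synthetic images rather than any dataset from the table:

```python
import numpy as np

def psnr(img_a, img_b, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, max_val]."""
    mse = np.mean((img_a - img_b) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: a rendering that matches ground truth up to small noise
# (sigma = 0.01) should score around 40 dB, since MSE is about 1e-4.
rng = np.random.default_rng(0)
gt = rng.uniform(size=(64, 64, 3))
pred = np.clip(gt + rng.normal(scale=0.01, size=gt.shape), 0.0, 1.0)
value = psnr(gt, pred)
```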

Related Papers

Physically Based Neural LiDAR Resimulation (2025-07-15)
MoVieS: Motion-Aware 4D Dynamic View Synthesis in One Second (2025-07-14)
Cameras as Relative Positional Encoding (2025-07-14)
LighthouseGS: Indoor Structure-aware 3D Gaussian Splatting for Panorama-Style Mobile Captures (2025-07-08)
Reflections Unlock: Geometry-Aware Reflection Disentanglement in 3D Gaussian Splatting for Photorealistic Scenes Rendering (2025-07-08)
Outdoor Monocular SLAM with Global Scale-Consistent 3D Gaussian Pointmaps (2025-07-04)
Refine Any Object in Any Scene (2025-06-30)
VoteSplat: Hough Voting Gaussian Splatting for 3D Scene Understanding (2025-06-28)