Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


3DGEER: Exact and Efficient Volumetric Rendering with 3D Gaussians

Zixun Huang, Cho-Ying Wu, Yuliang Guo, Xinyu Huang, Liu Ren

2025-05-29 · Neural Rendering · Novel View Synthesis · 3DGS
Paper · PDF · Code (official)

Abstract

3D Gaussian Splatting (3DGS) marks a significant milestone in balancing the quality and efficiency of differentiable rendering. However, its high efficiency stems from an approximation of projecting 3D Gaussians onto the image plane as 2D Gaussians, which inherently limits rendering quality, particularly under large Field-of-View (FoV) camera inputs. While several recent works have extended 3DGS to mitigate these approximation errors, none have achieved both exactness and high efficiency simultaneously. In this work, we introduce 3DGEER, an Exact and Efficient Volumetric Gaussian Rendering method. Starting from first principles, we derive a closed-form expression for the density integral along a ray traversing a 3D Gaussian distribution. This formulation enables precise forward rendering with arbitrary camera models and supports gradient-based optimization of 3D Gaussian parameters. To ensure both exactness and real-time performance, we propose an efficient method for computing a tight Particle Bounding Frustum (PBF) for each 3D Gaussian, enabling accurate and efficient ray-Gaussian association. We also introduce a novel Bipolar Equiangular Projection (BEAP) representation to accelerate ray association under generic camera models. BEAP further provides a more uniform ray sampling strategy to apply supervision, which empirically improves reconstruction quality. Experiments on multiple pinhole and fisheye datasets show that our method consistently outperforms prior methods, establishing a new state-of-the-art in real-time neural rendering.
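The density integral the abstract refers to has a well-known closed form: along a ray x(t) = o + t·d, the Mahalanobis term of a 3D Gaussian is quadratic in t, so the integral of exp(-½(x-μ)ᵀΣ⁻¹(x-μ)) reduces to completing the square and evaluating the error function. The sketch below illustrates that standard identity; the function name and the finite-segment form are illustrative assumptions, not the official 3DGEER implementation:

```python
import math
import numpy as np

def ray_gaussian_integral(mu, Sigma, o, d, t0, t1):
    """Closed-form integral of exp(-0.5 (x-mu)^T Sigma^-1 (x-mu))
    along x(t) = o + t*d for t in [t0, t1].

    Illustrative sketch of the standard Gaussian line-integral identity,
    not the 3DGEER paper's code.
    """
    A = np.linalg.inv(Sigma)          # precision matrix
    v = o - mu
    # The exponent is a*t^2 + b*t + c:
    a = 0.5 * float(d @ A @ d)
    b = float(d @ A @ v)
    c = 0.5 * float(v @ A @ v)
    # Complete the square: a*t^2 + b*t + c = a*(t + b/(2a))^2 + (c - b^2/(4a)),
    # then integrate the remaining Gaussian with erf.
    s = math.sqrt(a)
    pref = math.sqrt(math.pi) / (2.0 * s) * math.exp(b * b / (4.0 * a) - c)
    return pref * (math.erf(s * t1 + b / (2.0 * s))
                   - math.erf(s * t0 + b / (2.0 * s)))
```

For example, for a unit isotropic Gaussian at the origin and a ray passing at distance 2 from it, the full line integral equals sqrt(2π)·exp(-2), which the formula reproduces without any numerical quadrature.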

Results

Task                 | Dataset      | Metric | Value | Model
Novel View Synthesis | Mip-NeRF 360 | LPIPS  | 0.21  | 3DGEER
Novel View Synthesis | Mip-NeRF 360 | PSNR   | 27.76 | 3DGEER
Novel View Synthesis | Mip-NeRF 360 | SSIM   | 0.821 | 3DGEER
Novel View Synthesis | ScanNet++    | LPIPS  | 0.126 | 3DGEER
Novel View Synthesis | ScanNet++    | PSNR   | 31.5  | 3DGEER
Novel View Synthesis | ScanNet++    | SSIM   | 0.953 | 3DGEER

Related Papers

SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
Physically Based Neural LiDAR Resimulation (2025-07-15)
MoVieS: Motion-Aware 4D Dynamic View Synthesis in One Second (2025-07-14)
Cameras as Relative Positional Encoding (2025-07-14)
3DGAA: Realistic and Robust 3D Gaussian-based Adversarial Attack for Autonomous Driving (2025-07-14)
LighthouseGS: Indoor Structure-aware 3D Gaussian Splatting for Panorama-Style Mobile Captures (2025-07-08)
Reflections Unlock: Geometry-Aware Reflection Disentanglement in 3D Gaussian Splatting for Photorealistic Scenes Rendering (2025-07-08)
3DGS_LSR: Large Scale Relocation for Autonomous Driving Based on 3D Gaussian Splatting (2025-07-08)