Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Stereo Magnification with Multi-Layer Images

Taras Khakhulin, Denis Korzhenkov, Pavel Solovev, Gleb Sterkin, Timotei Ardelean, Victor Lempitsky

2022-01-13 · CVPR 2022
Tasks: Novel View Synthesis · Generalizable Novel View Synthesis

Abstract

Representing scenes with multiple semi-transparent colored layers has been a popular and successful choice for real-time novel view synthesis. Existing approaches infer colors and transparency values over regularly-spaced layers of planar or spherical shape. In this work, we introduce a new view synthesis approach based on multiple semi-transparent layers with scene-adapted geometry. Our approach infers such representations from stereo pairs in two stages. The first stage infers the geometry of a small number of data-adaptive layers from a given pair of views. The second stage infers the color and the transparency values for these layers, producing the final representation for novel view synthesis. Importantly, both stages are connected through a differentiable renderer and are trained in an end-to-end manner. In the experiments, we demonstrate the advantage of the proposed approach over the use of regularly-spaced layers with no adaptation to scene geometry. Despite being orders of magnitude faster during rendering, our approach also outperforms the recently proposed IBRNet system based on an implicit geometry representation. See results at https://samsunglabs.github.io/StereoLayers.
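The layered representation described in the abstract is rendered by compositing the semi-transparent layers for a target view. As a minimal sketch of that final step, the snippet below applies the standard back-to-front "over" operator to per-layer colors and transparencies; the function name and array shapes are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

def composite_layers(colors, alphas):
    """Back-to-front alpha compositing of semi-transparent layers.

    colors: (L, H, W, 3) RGB per layer, farthest layer first.
    alphas: (L, H, W) per-layer transparency in [0, 1].
    Returns the (H, W, 3) composited image.
    """
    h, w = alphas.shape[1:]
    out = np.zeros((h, w, 3))
    for rgb, a in zip(colors, alphas):
        a = a[..., None]                     # broadcast over RGB channels
        out = rgb * a + out * (1.0 - a)      # Porter-Duff "over" operator
    return out
```

In the paper's pipeline this compositing would sit inside the differentiable renderer connecting the two stages, so gradients flow from rendered pixels back to both layer geometry and layer colors/transparencies.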

Results

Task                  Dataset  Metric  Value   Model
Novel View Synthesis  SWORD    LPIPS   0.113   StereoLayers (8 layers)
Novel View Synthesis  SWORD    PSNR    25.54   StereoLayers (8 layers)
Novel View Synthesis  SWORD    SSIM    0.79    StereoLayers (8 layers)
Novel View Synthesis  SWORD    LPIPS   0.102   StereoLayers (2 layers)
Novel View Synthesis  SWORD    PSNR    25.28   StereoLayers (2 layers)
Novel View Synthesis  SWORD    SSIM    0.78    StereoLayers (2 layers)
Novel View Synthesis  SWORD    LPIPS   0.096   StereoLayers
Novel View Synthesis  SWORD    PSNR    25.95   StereoLayers
Novel View Synthesis  SWORD    SSIM    0.81    StereoLayers
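The PSNR values in the table above follow the standard definition over mean squared error. As a reference sketch (assuming images scaled to [0, 1]; this is the generic metric, not code from the paper):

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, max_val]."""
    mse = np.mean((np.asarray(pred) - np.asarray(target)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR and SSIM are better, while LPIPS is a learned perceptual distance where lower is better, so the plain StereoLayers row (LPIPS 0.096, PSNR 25.95, SSIM 0.81) is the strongest configuration on SWORD.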

Related Papers

Physically Based Neural LiDAR Resimulation (2025-07-15)
MoVieS: Motion-Aware 4D Dynamic View Synthesis in One Second (2025-07-14)
Cameras as Relative Positional Encoding (2025-07-14)
LighthouseGS: Indoor Structure-aware 3D Gaussian Splatting for Panorama-Style Mobile Captures (2025-07-08)
Reflections Unlock: Geometry-Aware Reflection Disentanglement in 3D Gaussian Splatting for Photorealistic Scenes Rendering (2025-07-08)
Outdoor Monocular SLAM with Global Scale-Consistent 3D Gaussian Pointmaps (2025-07-04)
Refine Any Object in Any Scene (2025-06-30)
VoteSplat: Hough Voting Gaussian Splatting for 3D Scene Understanding (2025-06-28)