Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Soft Rasterizer: A Differentiable Renderer for Image-based 3D Reasoning

Shichen Liu, Tianye Li, Weikai Chen, Hao Li

Published: 2019-04-03 · ICCV 2019
Tasks: 3D Object Reconstruction, Single-View 3D Reconstruction
Links: Paper · PDF · Code (official) · Code

Abstract

Rendering bridges the gap between 2D vision and 3D scenes by simulating the physical process of image formation. By inverting such a renderer, one can devise a learning approach that infers 3D information from 2D images. However, standard graphics renderers involve a fundamental discretization step called rasterization, which prevents the rendering process from being differentiable, and hence from being learned. Unlike state-of-the-art differentiable renderers, which only approximate the rendering gradient during backpropagation, we propose a truly differentiable rendering framework that is able to (1) directly render a colorized mesh using differentiable functions and (2) back-propagate efficient supervision signals to mesh vertices and their attributes from various forms of image representations, including silhouette, shading, and color images. The key to our framework is a novel formulation that views rendering as an aggregation function fusing the probabilistic contributions of all mesh triangles with respect to the rendered pixels. This formulation enables our framework to flow gradients to occluded and far-range vertices, which previous state-of-the-art methods cannot achieve. We show that the proposed renderer yields significant improvements in unsupervised single-view 3D reconstruction, both qualitatively and quantitatively. Experiments also demonstrate that our approach handles challenging image-based shape-fitting tasks that remain nontrivial for existing differentiable renderers.
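The core idea from the abstract — each triangle contributes a probability of covering a pixel, and an aggregation function fuses all contributions — can be sketched as follows. This is a minimal 2D NumPy illustration, not the paper's implementation: the per-triangle influence uses a sigmoid of the signed squared distance to the triangle boundary (sharpness `sigma`), and the silhouette aggregation takes the complement of the product of complements; the geometry helpers, the `sigma` value, and the normalized screen coordinates are our own simplifying assumptions.

```python
# Sketch of soft rasterization: probabilistic per-triangle coverage
# fused by an aggregation function into a soft silhouette.
import numpy as np

def seg_dist2(p, a, b):
    """Squared distance from point p to segment ab."""
    ab = (b[0] - a[0], b[1] - a[1])
    t = ((p[0] - a[0]) * ab[0] + (p[1] - a[1]) * ab[1]) / (ab[0]**2 + ab[1]**2 + 1e-12)
    t = max(0.0, min(1.0, t))
    cx, cy = a[0] + t * ab[0], a[1] + t * ab[1]
    return (p[0] - cx)**2 + (p[1] - cy)**2

def signed_dist2(p, tri):
    """(+1 inside / -1 outside, squared distance to triangle boundary)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    s = [cross(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
    inside = all(v >= 0 for v in s) or all(v <= 0 for v in s)
    d2 = min(seg_dist2(p, tri[i], tri[(i + 1) % 3]) for i in range(3))
    return (1.0 if inside else -1.0), d2

def soft_silhouette(triangles, H=32, W=32, sigma=0.01):
    """Render a soft (differentiable-in-principle) silhouette in [0,1]^2."""
    img = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            p = ((x + 0.5) / W, (y + 0.5) / H)  # pixel center
            probs = []
            for tri in triangles:
                sign, d2 = signed_dist2(p, tri)
                # coverage probability: smooth step across the triangle edge
                probs.append(1.0 / (1.0 + np.exp(-sign * d2 / sigma)))
            # aggregation: pixel is covered unless no triangle covers it
            img[y, x] = 1.0 - np.prod([1.0 - pj for pj in probs])
    return img

sil = soft_silhouette([[(0.2, 0.2), (0.8, 0.25), (0.5, 0.8)]])
print(sil.min(), sil.max())  # coverage varies smoothly between ~0 and ~1
```

Because every pixel value depends smoothly on the vertex positions (through the signed distance inside the sigmoid), gradients can flow from image-space losses back to the mesh — including from pixels outside a triangle, which hard rasterization would leave with zero gradient.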

Results

Task                            Dataset    Metric   Value    Model
3D Object Reconstruction        ShapeNet   3D IoU   0.6464   SoftRas (full)
3D Object Reconstruction        ShapeNet   3D IoU   0.6015   NMR [19]
3D Object Reconstruction        ShapeNet   3D IoU   0.5736   voxel [47]
3D Object Reconstruction        ShapeNet   3D IoU   0.4766   retrieval [47]
Single-View 3D Reconstruction   ShapeNet   3D IoU   0.6464   SoftRas (full)
Single-View 3D Reconstruction   ShapeNet   3D IoU   0.6015   NMR [19]
Single-View 3D Reconstruction   ShapeNet   3D IoU   0.5736   voxel [47]
Single-View 3D Reconstruction   ShapeNet   3D IoU   0.4766   retrieval [47]

Related Papers

- ViT-NeBLa: A Hybrid Vision Transformer and Neural Beer-Lambert Framework for Single-View 3D Reconstruction of Oral Anatomy from Panoramic Radiographs (2025-06-16)
- HuSc3D: Human Sculpture dataset for 3D object reconstruction (2025-06-09)
- Object-X: Learning to Reconstruct Multi-Modal 3D Object Representations (2025-06-05)
- SR3D: Unleashing Single-view 3D Reconstruction for Transparent and Specular Object Grasping (2025-05-30)
- Direct3D-S2: Gigascale 3D Generation Made Easy with Spatial Sparse Attention (2025-05-23)
- ACT-R: Adaptive Camera Trajectories for Single View 3D Reconstruction (2025-05-13)
- TransparentGS: Fast Inverse Rendering of Transparent Objects with Gaussians (2025-04-26)
- 3D Object Reconstruction with mmWave Radars (2025-04-15)