Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction

Qiangeng Xu, Weiyue Wang, Duygu Ceylan, Radomir Mech, Ulrich Neumann

2019-05-26 · NeurIPS 2019 · Tasks: 3D Reconstruction, Single-View 3D Reconstruction
Paper · PDF · Code (official)

Abstract

Reconstructing 3D shapes from single-view images has been a long-standing research problem. In this paper, we present DISN, a Deep Implicit Surface Network which can generate a high-quality detail-rich 3D mesh from a 2D image by predicting the underlying signed distance fields. In addition to utilizing global image features, DISN predicts the projected location for each 3D point on the 2D image, and extracts local features from the image feature maps. Combining global and local features significantly improves the accuracy of the signed distance field prediction, especially for detail-rich areas. To the best of our knowledge, DISN is the first method that consistently captures details such as holes and thin structures present in 3D shapes from single-view images. DISN achieves state-of-the-art single-view reconstruction performance on a variety of shape categories reconstructed from both synthetic and real images. Code is available at https://github.com/xharlie/DISN. The supplementary material can be found at https://xharlie.github.io/images/neurips_2019_supp.pdf.
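The local-feature idea in the abstract — projecting each 3D query point onto the image plane and sampling image features at the projected pixel — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the pinhole intrinsics `K`, camera pose `RT`, and the feature map are placeholder assumptions, and the actual DISN pipeline feeds the sampled local features, the global image feature, and the point coordinates into MLPs that regress the signed distance.

```python
import numpy as np

def project_points(points, K, RT):
    """Project (N, 3) world-space points to pixel coordinates
    with a pinhole camera (K: 3x3 intrinsics, RT: 3x4 extrinsics)."""
    # homogeneous world coordinates, shape (N, 4)
    P = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    cam = (RT @ P.T).T            # (N, 3) camera-space points
    uv = (K @ cam.T).T            # (N, 3) before perspective divide
    return uv[:, :2] / uv[:, 2:3]  # (N, 2) pixel locations

def bilinear_sample(feat_map, uv):
    """Bilinearly sample an (H, W, C) feature map at continuous
    pixel locations uv of shape (N, 2) -> local features (N, C)."""
    H, W, _ = feat_map.shape
    u = np.clip(uv[:, 0], 0, W - 1.001)
    v = np.clip(uv[:, 1], 0, H - 1.001)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]
    f00 = feat_map[v0, u0]
    f01 = feat_map[v0, u0 + 1]
    f10 = feat_map[v0 + 1, u0]
    f11 = feat_map[v0 + 1, u0 + 1]
    return (f00 * (1 - du) * (1 - dv) + f01 * du * (1 - dv)
            + f10 * (1 - du) * dv + f11 * du * dv)
```

A query point in front of the camera projects near the principal point, and the sampled feature vector is what DISN-style models concatenate with the global feature before SDF regression.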

Results

Results on the ShapeNetCore dataset (metric: 3D IoU, higher is better), reported under both the 3D Reconstruction and Single-View 3D Reconstruction tasks:

Model         3D IoU
DISN          0.57
OccNet        0.564
IMNET         0.546
3DN           0.487
Pixel2Mesh    0.473
AtlasNet      3
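For reference, 3D IoU is the intersection-over-union of two occupancy grids. A minimal sketch, assuming both the predicted and ground-truth shapes have already been voxelized into boolean grids (the grid resolution and voxelization step are outside this snippet and vary between benchmarks):

```python
import numpy as np

def iou_3d(vox_pred, vox_gt):
    """Intersection-over-union of two boolean occupancy grids
    of identical shape. Returns 1.0 when both grids are empty."""
    inter = np.logical_and(vox_pred, vox_gt).sum()
    union = np.logical_or(vox_pred, vox_gt).sum()
    return inter / union if union > 0 else 1.0
```

Two half-overlapping slabs, for example, give an IoU of 1/3: the intersection is one slab-width while the union spans three.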

Related Papers

- AutoPartGen: Autoregressive 3D Part Generation and Discovery (2025-07-17)
- SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
- BRUM: Robust 3D Vehicle Reconstruction from 360 Sparse Images (2025-07-16)
- Towards Depth Foundation Model: Recent Trends in Vision-Based Depth Estimation (2025-07-15)
- Binomial Self-Compensation: Mechanism and Suppression of Motion Error in Phase-Shifting Profilometry (2025-07-14)
- An Efficient Approach for Muscle Segmentation and 3D Reconstruction Using Keypoint Tracking in MRI Scan (2025-07-11)
- Review of Feed-forward 3D Reconstruction: From DUSt3R to VGGT (2025-07-11)
- DreamGrasp: Zero-Shot 3D Multi-Object Reconstruction from Partial-View Images for Robotic Manipulation (2025-07-08)