Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Wonder3D: Single Image to 3D using Cross-Domain Diffusion

Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, Wenping Wang

2023-10-23 · CVPR 2024 · 3D geometry · Image to 3D · Single-View 3D Reconstruction

Paper · PDF · Code

Abstract

In this work, we introduce Wonder3D, a novel method for efficiently generating high-fidelity textured meshes from single-view images. Recent methods based on Score Distillation Sampling (SDS) have shown the potential to recover 3D geometry from 2D diffusion priors, but they typically suffer from time-consuming per-shape optimization and inconsistent geometry. In contrast, certain works directly produce 3D information via fast network inferences, but their results are often of low quality and lack geometric details. To holistically improve the quality, consistency, and efficiency of image-to-3D tasks, we propose a cross-domain diffusion model that generates multi-view normal maps and the corresponding color images. To ensure consistency, we employ a multi-view cross-domain attention mechanism that facilitates information exchange across views and modalities. Lastly, we introduce a geometry-aware normal fusion algorithm that extracts high-quality surfaces from the multi-view 2D representations. Our extensive evaluations demonstrate that our method achieves high-quality reconstruction results, robust generalization, and reasonably good efficiency compared to prior works.
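The multi-view cross-domain attention described in the abstract jointly attends over tokens from all views and both domains (normal maps and color images), so that each generated view stays consistent with the others. A minimal numpy sketch of the idea follows; the function name, the tensor layout, and the identity projections are illustrative assumptions, not Wonder3D's actual implementation (which uses learned projections inside a diffusion U-Net):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multiview_cross_domain_attention(tokens):
    """tokens: array of shape (V, D, T, C) -- V views, D domains
    (e.g. normal / color), T tokens per image, C channels.
    Flattening views and domains into one sequence lets attention
    exchange information across both axes at once."""
    V, D, T, C = tokens.shape
    x = tokens.reshape(V * D * T, C)
    # toy shared projections: identity here; a real model learns W_q, W_k, W_v
    q, k, v = x, x, x
    attn = softmax(q @ k.T / np.sqrt(C), axis=-1)  # rows sum to 1
    out = attn @ v
    return out.reshape(V, D, T, C)
```

The key design point is that attention is computed over the concatenated sequence rather than per image, which is what lets gradients (and generated content) flow between views and between the geometry and appearance domains.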

Results

Task                          | Dataset | Metric           | Value  | Model
Reconstruction                | GSO     | Chamfer Distance | 0.0199 | Wonder3D
Reconstruction                | GSO     | IoU              | 62.44  | Wonder3D
3D                            | GSO     | Chamfer Distance | 0.0199 | Wonder3D
3D                            | GSO     | IoU              | 62.44  | Wonder3D
Single-View 3D Reconstruction | GSO     | Chamfer Distance | 0.0199 | Wonder3D
Single-View 3D Reconstruction | GSO     | IoU              | 62.44  | Wonder3D
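The Chamfer Distance reported above measures how close the reconstructed surface is to the ground-truth shape on GSO (lower is better). A common symmetric formulation can be sketched as below; note that conventions vary (squared vs. unsquared distances, sum vs. mean), and the exact variant used for these numbers is not stated on this page:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3):
    mean nearest-neighbor distance from a to b plus from b to a."""
    # pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

In practice the metric is computed on dense point samples from both meshes, so a brute-force pairwise matrix like this is replaced by a KD-tree nearest-neighbor query for efficiency.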

Related Papers

PhysX: Physical-Grounded 3D Asset Generation (2025-07-16)
Robust 3D-Masked Part-level Editing in 3D Gaussian Splatting with Regularized Score Distillation Sampling (2025-07-15)
TRAN-D: 2D Gaussian Splatting-based Sparse-view Transparent Object Depth Reconstruction via Physics Simulation for Scene Update (2025-07-15)
Feed-Forward SceneDINO for Unsupervised Semantic Scene Completion (2025-07-08)
DreamGrasp: Zero-Shot 3D Multi-Object Reconstruction from Partial-View Images for Robotic Manipulation (2025-07-08)
DreamArt: Generating Interactable Articulated Objects from a Single Image (2025-07-08)
RoboScape: Physics-informed Embodied World Model (2025-06-29)
DBMovi-GS: Dynamic View Synthesis from Blurry Monocular Video via Sparse-Controlled Gaussian Splatting (2025-06-26)