Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image

Kailu Wu, Fangfu Liu, Zhihan Cai, Runjie Yan, HanYang Wang, Yating Hu, Yueqi Duan, Kaisheng Ma

2024-05-30 · Single-View 3D Reconstruction on ShapeNet · Image to 3D · Single-View 3D Reconstruction

Paper · PDF · Code (official)

Abstract

In this work, we introduce Unique3D, a novel image-to-3D framework for efficiently generating high-quality 3D meshes from single-view images, featuring state-of-the-art generation fidelity and strong generalizability. Previous methods based on Score Distillation Sampling (SDS) can produce diversified 3D results by distilling 3D knowledge from large 2D diffusion models, but they usually suffer from long per-case optimization times and multi-view inconsistency. Recent works address these problems and generate better 3D results either by fine-tuning a multi-view diffusion model or by training a fast feed-forward model. However, they still lack intricate textures and complex geometries due to inconsistency and limited generated resolution. To simultaneously achieve high fidelity, consistency, and efficiency in single image-to-3D, we propose a novel framework, Unique3D, that includes a multi-view diffusion model with a corresponding normal diffusion model to generate multi-view images with their normal maps, a multi-level upscale process to progressively improve the resolution of the generated orthographic multi-views, and an instant and consistent mesh reconstruction algorithm called ISOMER, which fully integrates the color and geometric priors into the mesh result. Extensive experiments demonstrate that Unique3D significantly outperforms other image-to-3D baselines in terms of geometric and textural detail.
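The abstract describes a three-stage pipeline: multi-view RGB and normal-map generation, multi-level upscaling, and ISOMER mesh reconstruction. The sketch below illustrates only the data flow between those stages; every function name, shape, and body is a placeholder assumption, not the official Unique3D implementation.

```python
import numpy as np

# Illustrative stand-ins for the pipeline stages named in the abstract.
# All names and tensor shapes here are hypothetical.

def multiview_diffusion(image, n_views=4):
    # Stand-in for the multi-view diffusion model, which generates
    # several orthographic views from a single input image.
    return np.stack([image for _ in range(n_views)])

def normal_diffusion(views):
    # Stand-in for the paired normal diffusion model, which predicts a
    # normal map for each generated view (here: all normals face +z).
    normals = np.zeros_like(views)
    normals[..., 2] = 1.0
    return normals

def multilevel_upscale(views, factor=2):
    # Stand-in for the multi-level upscale process; nearest-neighbour
    # repetition replaces the paper's progressive super-resolution.
    return views.repeat(factor, axis=1).repeat(factor, axis=2)

def isomer_reconstruct(views, normals):
    # Stand-in for ISOMER, which fuses color and geometric priors into
    # a mesh; here we return a dummy cube-sized vertex/face buffer.
    vertices = np.zeros((8, 3))
    faces = np.zeros((12, 3), dtype=int)
    return vertices, faces

image = np.ones((64, 64, 3), dtype=np.float32)   # single input view
views = multiview_diffusion(image)                # (4, 64, 64, 3)
views = multilevel_upscale(views)                 # (4, 128, 128, 3)
normals = normal_diffusion(views)                 # (4, 128, 128, 3)
vertices, faces = isomer_reconstruct(views, normals)
```

The point of the sketch is the ordering: normals are predicted for the upscaled views so that the reconstruction stage receives color and geometry at matching resolution.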

Results

Task                           | Dataset | Metric           | Value  | Model
-------------------------------|---------|------------------|--------|---------
Reconstruction                 | GSO     | Chamfer Distance | 0.0145 | Unique3D
Reconstruction                 | GSO     | IoU              | 55.38  | Unique3D
3D                             | GSO     | Chamfer Distance | 0.0145 | Unique3D
3D                             | GSO     | IoU              | 55.38  | Unique3D
Single-View 3D Reconstruction  | GSO     | Chamfer Distance | 0.0145 | Unique3D
Single-View 3D Reconstruction  | GSO     | IoU              | 55.38  | Unique3D

Related Papers

PhysX: Physical-Grounded 3D Asset Generation (2025-07-16)
DreamArt: Generating Interactable Articulated Objects from a Single Image (2025-07-08)
DreamJourney: Perpetual View Generation with Video Diffusion Models (2025-06-21)
ViT-NeBLa: A Hybrid Vision Transformer and Neural Beer-Lambert Framework for Single-View 3D Reconstruction of Oral Anatomy from Panoramic Radiographs (2025-06-16)
EmbodiedGen: Towards a Generative 3D World Engine for Embodied Intelligence (2025-06-12)
UA-Pose: Uncertainty-Aware 6D Object Pose Estimation and Online Object Completion with Partial References (2025-06-09)
AdaHuman: Animatable Detailed 3D Human Generation with Compositional Multiview Diffusion (2025-05-30)
SR3D: Unleashing Single-view 3D Reconstruction for Transparent and Specular Object Grasping (2025-05-30)