
Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


3D ShapeNets: A Deep Representation for Volumetric Shapes

Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, Jianxiong Xiao

Published: 2014-06-22 · CVPR 2015
Tasks: 3D Shape Representation, Object Recognition, 3D Point Cloud Classification
Paper · PDF · Code

Abstract

3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvements over the state of the art in a variety of tasks.
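The abstract's core idea is to represent a shape as binary occupancy variables on a 3D voxel grid before feeding it to the Convolutional Deep Belief Network. A minimal sketch of just the discretization step (not the network itself), assuming an (N, 3) point set sampled from a CAD model and a 30×30×30 grid as in the paper's setup:

```python
import numpy as np

def voxelize(points, grid_size=30):
    """Map an (N, 3) point set to a binary occupancy grid.

    Illustrative helper only: it normalizes the cloud into the unit
    cube and marks each occupied cell with 1, mirroring the binary
    voxel representation described in the abstract.
    """
    points = np.asarray(points, dtype=float)
    # Normalize the cloud into the unit cube [0, 1]^3.
    mins = points.min(axis=0)
    spans = points.max(axis=0) - mins
    spans[spans == 0] = 1.0  # avoid division by zero on degenerate axes
    unit = (points - mins) / spans
    # Discretize to voxel indices; clip so points at 1.0 stay in range.
    idx = np.clip((unit * grid_size).astype(int), 0, grid_size - 1)
    grid = np.zeros((grid_size,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid
```

The probabilistic part of the model then treats each of these binary voxels as a random variable whose joint distribution the network learns.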

Results

Task | Dataset | Metric | Value | Model
Shape Representation of 3D Point Clouds | ModelNet40 | Mean Accuracy | 77.3 | 3DShapeNets
3D Point Cloud Classification | ModelNet40 | Mean Accuracy | 77.3 | 3DShapeNets
3D Point Cloud Reconstruction | ModelNet40 | Mean Accuracy | 77.3 | 3DShapeNets
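The "Mean Accuracy" metric on ModelNet40 is conventionally class-averaged accuracy (per-class recall averaged over the 40 categories) rather than overall instance accuracy. A generic sketch of that convention — not code from the paper or the benchmark:

```python
import numpy as np

def mean_class_accuracy(y_true, y_pred):
    """Per-class accuracy (recall), averaged uniformly over classes.

    Unlike overall accuracy, this weights every class equally, so
    rare categories count as much as common ones.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(per_class))
```

On a class-imbalanced test set this can differ noticeably from instance accuracy, which is why ModelNet40 leaderboards often report both.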

Related Papers

GeoMag: A Vision-Language Model for Pixel-level Fine-Grained Remote Sensing Image Parsing (2025-07-08)
Out-of-distribution detection in 3D applications: a review (2025-07-01)
Asymmetric Dual Self-Distillation for 3D Self-Supervised Representation Learning (2025-06-26)
SASep: Saliency-Aware Structured Separation of Geometry and Feature for Open Set Learning on Point Clouds (2025-06-16)
Continual Hyperbolic Learning of Instances and Classes (2025-06-12)
DCIRNet: Depth Completion with Iterative Refinement for Dexterous Grasping of Transparent and Reflective Objects (2025-06-11)
Aligning Text, Images, and 3D Structure Token-by-Token (2025-06-09)
STSBench: A Spatio-temporal Scenario Benchmark for Multi-modal Large Language Models in Autonomous Driving (2025-06-06)