Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion

Yiming Li, Zhiding Yu, Christopher Choy, Chaowei Xiao, Jose M. Alvarez, Sanja Fidler, Chen Feng, Anima Anandkumar

2023-02-23 · CVPR 2023
Tasks: 3D geometry · Depth Estimation · 3D Semantic Scene Completion from a single RGB image · 3D Semantic Scene Completion
Paper · PDF · Code (official)

Abstract

Humans can easily imagine the complete 3D geometry of occluded objects and scenes. This appealing ability is vital for recognition and understanding. To enable such capability in AI systems, we propose VoxFormer, a Transformer-based semantic scene completion framework that can output complete 3D volumetric semantics from only 2D images. Our framework adopts a two-stage design where we start from a sparse set of visible and occupied voxel queries from depth estimation, followed by a densification stage that generates dense 3D voxels from the sparse ones. A key idea of this design is that the visual features on 2D images correspond only to the visible scene structures rather than the occluded or empty spaces. Therefore, starting with the featurization and prediction of the visible structures is more reliable. Once we obtain the set of sparse queries, we apply a masked autoencoder design to propagate the information to all the voxels by self-attention. Experiments on SemanticKITTI show that VoxFormer outperforms the state of the art with a relative improvement of 20.0% in geometry and 18.1% in semantics and reduces GPU memory during training to less than 16GB. Our code is available on https://github.com/NVlabs/VoxFormer.
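The two-stage design described in the abstract can be illustrated with a minimal, self-contained sketch: stage 1 proposes a sparse set of voxel queries from a depth-derived occupancy, and stage 2 fills the remaining voxels with a shared mask token (in the spirit of a masked autoencoder) and propagates information to them via self-attention. Everything below is a toy stand-in, not the official NVlabs implementation: the grid size, feature dimension, random "image features", and single-head attention are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: a tiny voxel grid (real VoxFormer uses much larger grids, e.g. 256x256x32).
GRID = (4, 4, 4)           # hypothetical grid size, for illustration only
D = 8                      # illustrative feature dimension
N = int(np.prod(GRID))     # total number of voxels

# Stage 1 (sketch): depth estimation yields an occupancy map; the visible,
# occupied voxels become the sparse query set.
occupancy = rng.random(N) > 0.7          # stand-in for depth-derived occupancy
query_idx = np.flatnonzero(occupancy)    # indices of visible, occupied voxels

# Visible voxels receive 2D image features (random stand-ins here); every other
# voxel starts as one shared, learnable mask token (MAE-style).
img_feats = rng.normal(size=(len(query_idx), D))
mask_token = rng.normal(size=(D,))
voxels = np.tile(mask_token, (N, 1))
voxels[query_idx] = img_feats

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over all voxels."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(x.shape[1])
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ v

# Stage 2 (sketch): self-attention lets the masked voxels attend to the
# featurized visible voxels, densifying the sparse representation.
Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))
dense = self_attention(voxels, Wq, Wk, Wv)

print(dense.shape)  # every voxel now carries a feature for semantic prediction
```

In the actual framework these stages are learned end to end and the attention operates over deformable, image-conditioned features; the sketch only shows why starting from visible structures is a natural fit for attention-based densification.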

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Reconstruction | KITTI-360 | mIoU | 11.91 | VoxFormer |
| Reconstruction | SemanticKITTI | mIoU | 12.2 | VoxFormer |
| 3D Reconstruction | KITTI-360 | mIoU | 11.91 | VoxFormer |
| 3D Reconstruction | SemanticKITTI | mIoU | 12.2 | VoxFormer |
| 3D | KITTI-360 | mIoU | 11.91 | VoxFormer |
| 3D | SemanticKITTI | mIoU | 12.2 | VoxFormer |
| 3D Semantic Scene Completion | KITTI-360 | mIoU | 11.91 | VoxFormer |
| 3D Semantic Scene Completion | SemanticKITTI | mIoU | 12.2 | VoxFormer |
| 3D Scene Reconstruction | KITTI-360 | mIoU | 11.91 | VoxFormer |
| 3D Scene Reconstruction | SemanticKITTI | mIoU | 12.2 | VoxFormer |
| Single-View 3D Reconstruction | KITTI-360 | mIoU | 11.91 | VoxFormer |
| Single-View 3D Reconstruction | SemanticKITTI | mIoU | 12.2 | VoxFormer |

Related Papers

- $S^2M^2$: Scalable Stereo Matching Model for Reliable Depth Estimation (2025-07-17)
- $π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
- Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
- Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
- Robust 3D-Masked Part-level Editing in 3D Gaussian Splatting with Regularized Score Distillation Sampling (2025-07-15)
- TRAN-D: 2D Gaussian Splatting-based Sparse-view Transparent Object Depth Reconstruction via Physics Simulation for Scene Update (2025-07-15)
- MonoMVSNet: Monocular Priors Guided Multi-View Stereo Network (2025-07-15)
- Towards Depth Foundation Model: Recent Trends in Vision-Based Depth Estimation (2025-07-15)