Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


3D Sketch-aware Semantic Scene Completion via Semi-supervised Structure Prior

Xiaokang Chen, Kwan-Yee Lin, Chen Qian, Gang Zeng, Hongsheng Li

2020-03-31 · CVPR 2020 · Tasks: Hallucination, 3D Semantic Scene Completion from a single RGB image, 3D Semantic Scene Completion

Paper · PDF · Code

Abstract

The goal of the Semantic Scene Completion (SSC) task is to simultaneously predict a completed 3D voxel representation of volumetric occupancy and the semantic labels of objects in the scene from a single-view observation. Since the computational cost generally grows explosively with voxel resolution, most current state-of-the-art methods have to tailor their frameworks to a low-resolution representation, sacrificing detailed predictions. Voxel resolution thus becomes one of the crucial difficulties leading to the performance bottleneck. In this paper, we propose a new geometry-based strategy to embed depth information into a low-resolution voxel representation that can still encode sufficient geometric information, e.g., room layout and object sizes and shapes, to infer the invisible areas of the scene with well-preserved structural details. To this end, we first propose a novel 3D sketch-aware feature embedding to explicitly encode geometric information effectively and efficiently. With the 3D sketch in hand, we further devise a simple yet effective semantic scene completion framework that incorporates a lightweight 3D Sketch Hallucination module to guide the inference of occupancy and semantic labels via a semi-supervised structure prior learning strategy. We demonstrate that our proposed geometric embedding works better than the depth feature learning used in conventional SSC frameworks. Our final model consistently surpasses the state of the art on three public benchmarks while requiring only 3D volumes of 60 x 36 x 60 resolution for both input and output. The code and supplementary material will be available at https://charlesCXK.github.io.
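The abstract describes the output format of the SSC task: a dense voxel volume (60 x 36 x 60 in this paper) where each voxel receives both an occupancy decision and a semantic label. A minimal sketch of that output format, with a random array standing in for the network's per-class scores (the class count and the convention that class 0 means empty space are assumptions for illustration, not details from the paper):

```python
import numpy as np

# Hypothetical illustration of the SSC output described in the abstract:
# per-voxel class scores over a 60 x 36 x 60 volume, where class 0 is
# treated as empty space and classes 1..C-1 as semantic categories.
NUM_CLASSES = 12          # assumed class count, e.g. 11 categories + empty
VOLUME = (60, 36, 60)     # input/output resolution reported in the paper

scores = np.random.rand(NUM_CLASSES, *VOLUME)  # stand-in for network output
labels = scores.argmax(axis=0)                 # per-voxel semantic label
occupancy = labels != 0                        # completed volumetric occupancy

assert labels.shape == VOLUME
```

Predicting labels and occupancy from one score volume is what the abstract means by solving both sub-problems "simultaneously": occupancy falls out of the semantic prediction rather than being a separate head.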

Results

Task | Dataset | Metric | Value | Model
Reconstruction | NYUv2 | mIoU | 22.91 | 3DSketch (rgb input - reported in MonoScene paper)
Reconstruction | SemanticKITTI | mIoU | 6.23 | 3DSketch (rgb input - reported in MonoScene paper)
3D Reconstruction | NYUv2 | mIoU | 41.1 | 3DSketch
3D Reconstruction | NYUv2 | mIoU | 22.91 | 3DSketch (rgb input - reported in MonoScene paper)
3D Reconstruction | SemanticKITTI | mIoU | 6.23 | 3DSketch (rgb input - reported in MonoScene paper)
3D | NYUv2 | mIoU | 41.1 | 3DSketch
3D | NYUv2 | mIoU | 22.91 | 3DSketch (rgb input - reported in MonoScene paper)
3D | SemanticKITTI | mIoU | 6.23 | 3DSketch (rgb input - reported in MonoScene paper)
3D Semantic Scene Completion | NYUv2 | mIoU | 41.1 | 3DSketch
3D Semantic Scene Completion | NYUv2 | mIoU | 22.91 | 3DSketch (rgb input - reported in MonoScene paper)
3D Semantic Scene Completion | SemanticKITTI | mIoU | 6.23 | 3DSketch (rgb input - reported in MonoScene paper)
3D Scene Reconstruction | NYUv2 | mIoU | 22.91 | 3DSketch (rgb input - reported in MonoScene paper)
3D Scene Reconstruction | SemanticKITTI | mIoU | 6.23 | 3DSketch (rgb input - reported in MonoScene paper)
Single-View 3D Reconstruction | NYUv2 | mIoU | 22.91 | 3DSketch (rgb input - reported in MonoScene paper)
Single-View 3D Reconstruction | SemanticKITTI | mIoU | 6.23 | 3DSketch (rgb input - reported in MonoScene paper)
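All results above use mIoU, the mean Intersection-over-Union across semantic classes, computed over the voxels of the scene volume. A minimal sketch of the metric (the function name, the ignore-label convention, and the choice to average only over classes present in either prediction or ground truth are assumptions for illustration):

```python
import numpy as np

def miou(pred, gt, num_classes, ignore_index=255):
    """Mean Intersection-over-Union over semantic classes.

    pred, gt: integer label arrays of the same shape (e.g. voxel grids).
    Voxels whose ground-truth label equals ignore_index are excluded,
    and classes absent from both pred and gt are skipped.
    """
    valid = gt != ignore_index
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c)[valid].sum()
        union = np.logical_or(pred == c, gt == c)[valid].sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Tiny example on a flat label array (two classes):
pred = np.array([0, 0, 1, 1])
gt   = np.array([0, 1, 1, 1])
score = miou(pred, gt, num_classes=2)  # class 0: 1/2, class 1: 2/3
```

Different benchmarks differ in which voxels count as valid (e.g. observed vs. occluded regions), which is one reason the same model's mIoU varies so widely between NYUv2 and SemanticKITTI.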

Related Papers

Mitigating Object Hallucinations via Sentence-Level Early Intervention (2025-07-16)
ByDeWay: Boost Your multimodal LLM with DEpth prompting in a Training-Free Way (2025-07-11)
Disentangling Instance and Scene Contexts for 3D Semantic Scene Completion (2025-07-11)
UQLM: A Python Package for Uncertainty Quantification in Large Language Models (2025-07-08)
DeepRetro: Retrosynthetic Pathway Discovery using Iterative LLM Reasoning (2025-07-07)
ReLoop: "Seeing Twice and Thinking Backwards" via Closed-loop Training to Mitigate Hallucinations in Multimodal understanding (2025-07-07)
The Future is Agentic: Definitions, Perspectives, and Open Challenges of Multi-Agent Recommender Systems (2025-07-02)
GAF-Guard: An Agentic Framework for Risk Management and Governance in Large Language Models (2025-07-01)