Semantic Scene Completion Combining Colour and Depth: preliminary experiments
Andre Bernardes Soares Guedes, Teofilo Emidio de Campos, Adrian Hilton
2018-02-13 · 3D Semantic Scene Completion
Abstract
Semantic scene completion is the task of producing a complete 3D voxel representation of volumetric occupancy with semantic labels for a scene from a single-view observation. We build upon the recent work of Song et al. (CVPR 2017), who proposed SSCnet, a method that performs scene completion and semantic labelling in a single end-to-end 3D convolutional network. SSCnet uses only depth maps as input, even though depth maps are usually obtained from devices that also capture colour information, such as RGBD sensors and stereo cameras. In this work, we investigate the potential of the RGB colour channels to improve SSCnet.
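The input to this kind of pipeline is a voxelisation of the observed depth map: each depth pixel is back-projected into 3D with the camera intrinsics and binned into an occupancy grid, which the 3D network then completes and labels. The sketch below illustrates only that voxelisation step, with hypothetical intrinsics, grid origin, and resolution (the function name `depth_to_voxels` and all parameter values are assumptions for illustration, not part of the paper's code):

```python
import numpy as np

def depth_to_voxels(depth, fx, fy, cx, cy, origin, voxel_size, dims):
    """Back-project a depth map into a binary voxel occupancy grid.

    depth: (H, W) array of metric depths (0 marks missing pixels).
    fx, fy, cx, cy: pinhole camera intrinsics.
    origin: world coordinate of the grid's (0, 0, 0) corner.
    voxel_size: edge length of one voxel in metres.
    dims: (X, Y, Z) grid dimensions.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    # Pinhole back-projection of each valid pixel to a 3D point.
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    # Bin points into voxel indices and keep those inside the grid.
    idx = np.floor((pts - origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(dims)), axis=1)
    grid = np.zeros(dims, dtype=bool)
    gi = idx[inside]
    grid[gi[:, 0], gi[:, 1], gi[:, 2]] = True
    return grid

# Toy example: a flat 4x4 depth map, everything 2 m from the camera.
depth = np.full((4, 4), 2.0)
grid = depth_to_voxels(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0,
                       origin=np.array([-2.0, -2.0, 0.0]),
                       voxel_size=0.5, dims=(8, 8, 8))
print(int(grid.sum()))  # -> 16: one occupied voxel per pixel, all at z index 4
```

SSCnet itself feeds the network a flipped TSDF encoding rather than raw occupancy, and the paper's contribution is to add projected RGB features alongside this geometric input; the binary grid above is just the simplest version of the projection idea.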
Related Papers
Disentangling Instance and Scene Contexts for 3D Semantic Scene Completion (2025-07-11)
Camera-Only 3D Panoptic Scene Completion for Autonomous Driving through Differentiable Object Shapes (2025-05-14)
SGFormer: Satellite-Ground Fusion for 3D Semantic Scene Completion (2025-03-21)
VLScene: Vision-Language Guidance Distillation for Camera-Based 3D Semantic Scene Completion (2025-03-08)
Vision-based 3D Semantic Scene Completion via Capture Dynamic Representations (2025-03-08)
Learning Temporal 3D Semantic Scene Completion via Optical Flow Guidance (2025-02-20)
Skip Mamba Diffusion for Monocular 3D Semantic Scene Completion (2025-01-13)
SOAP: Vision-Centric 3D Semantic Scene Completion with Scene-Adaptive Decoder and Occluded Region-Aware View Projection (2025-01-01)