Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Sparse Single Sweep LiDAR Point Cloud Segmentation via Learning Contextual Shape Priors from Scene Completion

Xu Yan, Jiantao Gao, Jie Li, Ruimao Zhang, Zhen Li, Rui Huang, Shuguang Cui

Published: 2020-12-07

Tasks: Segmentation, Autonomous Driving, Semantic Segmentation, Point Cloud Segmentation, 3D Semantic Scene Completion from a single RGB image, 3D Semantic Segmentation, 3D Semantic Scene Completion

Links: Paper, PDF, Code, Code (official)

Abstract

LiDAR point cloud analysis is a core task in 3D computer vision, especially for autonomous driving. However, due to the severe sparsity of and noise interference in a single-sweep LiDAR point cloud, accurate semantic segmentation is non-trivial to achieve. In this paper, we propose a novel sparse LiDAR point cloud semantic segmentation framework assisted by learned contextual shape priors. In practice, an initial semantic segmentation (SS) of a single-sweep point cloud can be produced by any suitable network and then flows into the semantic scene completion (SSC) module as input. By merging multiple frames of the LiDAR sequence as supervision, the optimized SSC module learns contextual shape priors from sequential LiDAR data, completing the sparse single-sweep point cloud into a dense one. This inherently improves SS optimization through fully end-to-end training. In addition, a Point-Voxel Interaction (PVI) module is proposed to further enhance knowledge fusion between the SS and SSC tasks, i.e., promoting interaction between the incomplete local geometry of the point cloud and the complete voxel-wise global structure. Furthermore, the auxiliary SSC and PVI modules can be discarded during inference without extra burden on SS. Extensive experiments confirm that our JS3C-Net achieves superior performance on both the SemanticKITTI and SemanticPOSS benchmarks, with improvements of 4% and 3%, respectively.
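The abstract describes a training-time pipeline (SS → SSC, with a PVI module fusing point and voxel features) whose auxiliary branches are dropped at inference. The control flow can be sketched as follows; this is a hypothetical, heavily simplified illustration with placeholder module bodies, not the authors' implementation, and the class, method, and parameter names (`JS3CNet`, `scene_completion`, `point_voxel_interaction`, `num_classes`) are assumptions.

```python
# Hypothetical sketch of the JS3C-Net data flow described in the abstract.
# Module internals are placeholders; only the wiring mirrors the paper's text.

class JS3CNet:
    def __init__(self, num_classes=20, training=True):
        self.num_classes = num_classes
        self.training = training  # auxiliary branches run only in training

    def semantic_segmentation(self, points):
        # Backbone SS network: per-point class logits (placeholder values).
        return [[0.0] * self.num_classes for _ in points]

    def scene_completion(self, ss_logits):
        # Auxiliary SSC module: completes the sparse sweep into a dense
        # voxel grid, supervised by merged multi-frame LiDAR sequences.
        return {"dense_voxels": len(ss_logits)}

    def point_voxel_interaction(self, ss_logits, ssc_out):
        # PVI module: fuses incomplete point-wise local geometry with the
        # complete voxel-wise global structure (identity placeholder here).
        return ss_logits

    def forward(self, points):
        ss_logits = self.semantic_segmentation(points)
        if self.training:
            # End-to-end training: SSC and PVI refine SS via auxiliary losses.
            ssc_out = self.scene_completion(ss_logits)
            ss_logits = self.point_voxel_interaction(ss_logits, ssc_out)
        # At inference the SSC and PVI branches are discarded, so the
        # deployed network pays no extra cost over the SS backbone alone.
        return ss_logits
```

The key design point the abstract emphasizes is visible in `forward`: the shape priors learned by SSC improve the SS weights during joint training, but only `semantic_segmentation` executes at test time.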

Results

Task                              | Dataset       | Metric    | Value | Model
----------------------------------|---------------|-----------|-------|------------------------------------------------
Reconstruction                    | SemanticKITTI | mIoU      | 8.97  | JS3C-Net (RGB input; reported in MonoScene paper)
3D Reconstruction                 | SemanticKITTI | mIoU      | 23.8  | JS3C-Net
3D Reconstruction                 | SemanticKITTI | mIoU      | 8.97  | JS3C-Net (RGB input; reported in MonoScene paper)
3D                                | SemanticKITTI | mIoU      | 23.8  | JS3C-Net
3D                                | SemanticKITTI | mIoU      | 8.97  | JS3C-Net (RGB input; reported in MonoScene paper)
LIDAR Semantic Segmentation       | nuScenes      | test mIoU | 0.74  | JS3C-Net
3D Semantic Scene Completion      | SemanticKITTI | mIoU      | 23.8  | JS3C-Net
3D Semantic Scene Completion      | SemanticKITTI | mIoU      | 8.97  | JS3C-Net (RGB input; reported in MonoScene paper)
3D Scene Reconstruction           | SemanticKITTI | mIoU      | 8.97  | JS3C-Net (RGB input; reported in MonoScene paper)
Single-View 3D Reconstruction     | SemanticKITTI | mIoU      | 8.97  | JS3C-Net (RGB input; reported in MonoScene paper)

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
Deep Learning-Based Fetal Lung Segmentation from Diffusion-weighted MRI Images and Lung Maturity Evaluation for Fetal Growth Restriction (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
From Variability To Accuracy: Conditional Bernoulli Diffusion Models with Consensus-Driven Correction for Thin Structure Segmentation (2025-07-17)
Unleashing Vision Foundation Models for Coronary Artery Segmentation: Parallel ViT-CNN Encoding and Variational Fusion (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)