Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Multi-scale Interaction for Real-time LiDAR Data Segmentation on an Embedded Platform

Shijie Li, Xieyuanli Chen, Yun Liu, Dengxin Dai, Cyrill Stachniss, Juergen Gall

Published: 2020-08-20
Tasks: Autonomous Vehicles · Real-Time Semantic Segmentation · Semantic Segmentation · Real-Time 3D Semantic Segmentation · 3D Semantic Segmentation

Abstract

Real-time semantic segmentation of LiDAR data is crucial for autonomously driving vehicles, which are usually equipped with an embedded platform and have limited computational resources. Approaches that operate directly on the point cloud use complex spatial aggregation operations, which are very expensive and difficult to optimize for embedded platforms. They are therefore not suitable for real-time applications with embedded systems. As an alternative, projection-based methods are more efficient and can run on embedded platforms. However, the current state-of-the-art projection-based methods do not achieve the same accuracy as point-based methods and use millions of parameters. In this paper, we therefore propose a projection-based method, called Multi-scale Interaction Network (MINet), which is very efficient and accurate. The network uses multiple paths with different scales and balances the computational resources between the scales. Additional dense interactions between the scales avoid redundant computations and make the network highly efficient. The proposed network outperforms point-based, image-based, and projection-based methods in terms of accuracy, number of parameters, and runtime. Moreover, the network processes more than 24 scans per second on an embedded platform, which is higher than the framerates of LiDAR sensors. The network is therefore suitable for autonomous vehicles.
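The abstract's core idea — multiple paths at different scales that densely exchange features so no path recomputes what another already extracted — can be illustrated with a schematic sketch. This is not the actual MINet implementation; the function names and the simple average-pool/nearest-neighbour resampling are illustrative assumptions, standing in for the learned convolutions of the real network.

```python
import numpy as np

def downsample(x, factor):
    # Average-pool a 2D feature map by `factor` along both spatial axes
    # (cheap stand-in for a strided convolution).
    h, w = x.shape
    return x[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(x, factor):
    # Nearest-neighbour upsampling back to a finer resolution.
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def multi_scale_interaction(feat):
    """Three paths at 1x, 1/2x, and 1/4x resolution; coarser paths are
    cheaper, which is how compute can be balanced between scales.
    Each path then receives the other paths' features, resampled to its
    own resolution ("dense interaction"), instead of re-deriving them."""
    p1 = feat                      # full-resolution path
    p2 = downsample(feat, 2)       # half-resolution path
    p3 = downsample(feat, 4)       # quarter-resolution path

    # Dense interactions: every path is fused with every other path.
    f1 = p1 + upsample(p2, 2) + upsample(p3, 4)
    f2 = downsample(p1, 2) + p2 + upsample(p3, 2)
    f3 = downsample(p1, 4) + downsample(p2, 2) + p3
    return f1, f2, f3

# A projected LiDAR scan becomes a 2D range image, e.g. 64 beams x 2048 columns;
# a small toy map is used here to keep the example fast.
range_image = np.random.rand(64, 2048)
f1, f2, f3 = multi_scale_interaction(range_image)
print(f1.shape, f2.shape, f3.shape)  # (64, 2048) (32, 1024) (16, 512)
```

The key design point the sketch mirrors is that cross-scale feature reuse replaces redundant per-scale computation, which is what keeps the parameter count and runtime low enough for an embedded platform.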

Results

Task                      | Dataset       | Metric         | Value | Model
--------------------------|---------------|----------------|-------|------
Semantic Segmentation     | SemanticKITTI | Parameters (M) | 1     | MINet
Semantic Segmentation     | SemanticKITTI | Speed (FPS)    | 47    | MINet
Semantic Segmentation     | SemanticKITTI | mIoU           | 55.2  | MINet
3D Semantic Segmentation  | SemanticKITTI | Parameters (M) | 1     | MINet
3D Semantic Segmentation  | SemanticKITTI | Speed (FPS)    | 47    | MINet
3D Semantic Segmentation  | SemanticKITTI | mIoU           | 55.2  | MINet

Related Papers

- SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
- DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
- SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
- Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
- A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
- Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
- SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
- Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)