Damien Robert, Hugo Raguet, Loic Landrieu
We introduce a novel superpoint-based transformer architecture for efficient semantic segmentation of large-scale 3D scenes. Our method incorporates a fast algorithm to partition point clouds into a hierarchical superpoint structure, which makes our preprocessing 7 times faster than existing superpoint-based approaches. Additionally, we leverage a self-attention mechanism to capture the relationships between superpoints at multiple scales, leading to state-of-the-art performance on three challenging benchmark datasets: S3DIS (76.0% mIoU, 6-fold validation), KITTI-360 (63.5% mIoU on Val), and DALES (79.6% mIoU). With only 212k parameters, our approach is up to 200 times more compact than other state-of-the-art models while maintaining similar performance. Furthermore, our model can be trained on a single GPU in 3 hours for a fold of the S3DIS dataset, which is 7x to 70x fewer GPU-hours than the best-performing methods. Our code and models are accessible at github.com/drprojects/superpoint_transformer.
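To make the core idea concrete, here is a minimal numpy sketch of the two ingredients the abstract describes: grouping points into superpoints and running self-attention over the resulting superpoint features. This is an illustrative toy, not the paper's actual partition algorithm (which is hierarchical and geometry-aware); the voxel-grid grouping, cell size, and feature dimensions below are assumptions for the example.

```python
import numpy as np

# Toy stand-in for the paper's method: a plain voxel grid plays the role
# of the superpoint partition, and one scaled dot-product self-attention
# step mixes context between superpoints.
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 4.0, size=(1000, 3))  # xyz coordinates
feats = rng.normal(size=(1000, 8))              # per-point features

# 1) Partition: each occupied voxel cell becomes one "superpoint"
#    (cell size 2.0 is an arbitrary choice for this sketch).
voxel = np.floor(points / 2.0).astype(int)
_, sp_idx = np.unique(voxel, axis=0, return_inverse=True)
n_sp = sp_idx.max() + 1

# 2) Average-pool point features into one feature vector per superpoint.
sp_feats = np.zeros((n_sp, feats.shape[1]))
np.add.at(sp_feats, sp_idx, feats)              # scatter-add by superpoint id
sp_feats /= np.bincount(sp_idx, minlength=n_sp)[:, None]

# 3) One self-attention step over superpoints (Q = K = V = sp_feats):
#    attending over ~hundreds of superpoints instead of millions of points
#    is what makes this formulation cheap.
scores = sp_feats @ sp_feats.T / np.sqrt(sp_feats.shape[1])
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
attended = weights @ sp_feats                   # context-mixed superpoint features

print(n_sp, attended.shape)
```

The efficiency argument is visible here: attention cost is quadratic in the number of tokens, so pooling a large point cloud into a small set of superpoints before attending is what keeps both the parameter count and the compute budget low.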
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Semantic Segmentation | S3DIS (Area 5) | oAcc | 89.5 | Superpoint Transformer |
| Semantic Segmentation | S3DIS (Area 5) | mAcc | 77.3 | Superpoint Transformer |
| Semantic Segmentation | S3DIS (Area 5) | mIoU | 68.9 | Superpoint Transformer |
| Semantic Segmentation | S3DIS (6-fold) | oAcc | 90.4 | Superpoint Transformer |
| Semantic Segmentation | S3DIS (6-fold) | mAcc | 85.8 | Superpoint Transformer |
| Semantic Segmentation | S3DIS (6-fold) | mIoU | 76.0 | Superpoint Transformer |
| Semantic Segmentation | S3DIS | Params (M) | 0.212 | Superpoint Transformer |
| Semantic Segmentation | KITTI-360 (Val) | mIoU | 63.5 | Superpoint Transformer |
| Semantic Segmentation | DALES | oAcc | 97.5 | Superpoint Transformer |
| Semantic Segmentation | DALES | mIoU | 79.6 | Superpoint Transformer |