Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Back-tracing Representative Points for Voting-based 3D Object Detection in Point Clouds

Bowen Cheng, Lu Sheng, Shaoshuai Shi, Ming Yang, Dong Xu

2021-04-13 · CVPR 2021
Tasks: Object Detection, 3D Object Detection
Paper · PDF · Code (official)

Abstract

3D object detection in point clouds is a challenging vision task that benefits various applications for understanding the 3D visual world. Much recent research focuses on exploiting end-to-end trainable Hough voting to generate object proposals. However, the current voting strategy can only receive partial votes from the surfaces of potential objects, together with severe outlier votes from cluttered backgrounds, which hampers full utilization of the information in the input point clouds. Inspired by the back-tracing strategy in conventional Hough voting methods, in this work we introduce a new 3D object detection method, named Back-tracing Representative Points Network (BRNet), which generatively back-traces representative points from the vote centers and also revisits complementary seed points around these generated points, so as to better capture the fine local structural features surrounding potential objects in the raw point clouds. This bottom-up then top-down strategy enforces mutual consistency between the predicted vote centers and the raw surface points, and thus achieves more reliable and flexible object localization and class prediction. BRNet is simple yet effective: it significantly outperforms the state-of-the-art methods on two large-scale point cloud datasets, ScanNet V2 (+7.5% mAP@0.50) and SUN RGB-D (+4.7% mAP@0.50), while remaining lightweight and efficient. Code will be available at https://github.com/cheng052/BRNet.
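The bottom-up-then-top-down pipeline the abstract describes (seed points vote for object centers, then representative points are traced back from those centers to revisit the raw surface) can be sketched roughly as follows. All function names, shapes, and radii here are illustrative assumptions, not the paper's actual implementation; in BRNet the vote offsets and representative points are predicted by learned networks, whereas this sketch uses random stand-ins.

```python
import random

def hough_vote(seed, offset):
    # Bottom-up: a surface (seed) point votes for an object center
    # by adding a predicted offset (here a random stand-in).
    return tuple(s + o for s, o in zip(seed, offset))

def back_trace(center, radius, n, rng):
    # Top-down: generate representative points around a vote center
    # to look back toward the object surface.
    return [tuple(c + rng.uniform(-radius, radius) for c in center)
            for _ in range(n)]

def near(p, q, r):
    # Euclidean distance test between two 3D points.
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 < r

def gather_seeds(seeds, rep_points, radius):
    # Revisit complementary raw seeds that lie near any representative
    # point, recovering fine local structure around the object.
    return [s for s in seeds if any(near(s, p, radius) for p in rep_points)]

rng = random.Random(0)
seeds = [tuple(rng.uniform(0, 1) for _ in range(3)) for _ in range(128)]
offsets = [tuple(rng.gauss(0, 0.05) for _ in range(3)) for _ in range(128)]
centers = [hough_vote(s, o) for s, o in zip(seeds, offsets)]
reps = [p for c in centers for p in back_trace(c, 0.1, 4, rng)]
local = gather_seeds(seeds, reps, 0.15)
print(len(reps), len(local))
```

The mutual-consistency idea is that the gathered seeds and the vote centers constrain each other: centers that back-trace to empty space collect no supporting surface points.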

Results

Task                  Dataset       Metric    Value  Model
3D Object Detection   ScanNetV2     mAP@0.25  66.1   BRNet
3D Object Detection   ScanNetV2     mAP@0.5   50.9   BRNet
3D Object Detection   SUN-RGBD val  mAP@0.25  61.1   BRNet (Geo only)
3D Object Detection   SUN-RGBD val  mAP@0.5   43.7   BRNet (Geo only)
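The mAP@0.25 and mAP@0.5 columns score a detection as a true positive only when its 3D IoU with a ground-truth box exceeds the given threshold. A minimal sketch of 3D IoU for axis-aligned boxes follows; this is a simplification for illustration, since the actual benchmarks (SUN RGB-D in particular) evaluate oriented boxes.

```python
def box_volume(box):
    """Volume of an axis-aligned box (xmin, ymin, zmin, xmax, ymax, zmax)."""
    v = 1.0
    for i in range(3):
        v *= box[i + 3] - box[i]
    return v

def iou_3d(a, b):
    """Intersection-over-union of two axis-aligned 3D boxes."""
    inter = 1.0
    for i in range(3):
        lo, hi = max(a[i], b[i]), min(a[i + 3], b[i + 3])
        if hi <= lo:
            return 0.0  # no overlap along this axis
        inter *= hi - lo
    return inter / (box_volume(a) + box_volume(b) - inter)

# A prediction overlapping a ground-truth box by a 1x1x1 corner:
pred = (0, 0, 0, 2, 2, 2)
gt = (1, 1, 1, 3, 3, 3)
print(round(iou_3d(pred, gt), 3))  # 1/15 ≈ 0.067, below even the 0.25 threshold
```

Per-class average precision is then computed over the ranked detections at the chosen IoU threshold, and mAP is the mean over classes.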

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
RS-TinyNet: Stage-wise Feature Fusion Network for Detecting Tiny Objects in Remote Sensing Images (2025-07-17)
Decoupled PROB: Decoupled Query Initialization Tasks and Objectness-Class Learning for Open World Object Detection (2025-07-17)
Dual LiDAR-Based Traffic Movement Count Estimation at a Signalized Intersection: Deployment, Data Collection, and Preliminary Analysis (2025-07-17)
Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
ECORE: Energy-Conscious Optimized Routing for Deep Learning Models at the Edge (2025-07-08)
Beyond One Shot, Beyond One Perspective: Cross-View and Long-Horizon Distillation for Better LiDAR Representations (2025-07-07)