Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Cooper: Cooperative Perception for Connected Autonomous Vehicles based on 3D Point Clouds

Qi Chen, Sihai Tang, Qing Yang, Song Fu

2019-05-13 · Autonomous Vehicles · Object Detection · 3D Object Detection
Paper · PDF · Code

Abstract

Autonomous vehicles may make wrong decisions due to inaccurate detection and recognition. An intelligent vehicle can therefore combine its own sensor data with that of other vehicles to enhance its perceptive ability, and thus improve detection accuracy and driving safety. However, multi-vehicle cooperative perception requires the integration of real-world scenes, and the traffic generated by exchanging raw sensor data far exceeds the bandwidth of existing vehicular networks. To the best of our knowledge, we are the first to conduct a study on raw-data-level cooperative perception for enhancing the detection ability of self-driving systems. In this work, relying on LiDAR 3D point clouds, we fuse sensor data collected from different positions and angles of connected vehicles. A point-cloud-based 3D object detection method is proposed to work on a diversity of aligned point clouds. Experimental results on KITTI and our collected dataset show that the proposed system outperforms single-vehicle perception by extending the sensing area, improving detection accuracy, and producing augmented results. Most importantly, we demonstrate that it is possible to transmit point cloud data for cooperative perception via existing vehicular network technologies.
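The raw-data-level fusion the abstract describes boils down to aligning point clouds from different vehicles into one coordinate frame before detection. A minimal sketch of that step, assuming the relative pose between vehicles is known as a homogeneous transform (the function name and arguments here are illustrative, not the paper's actual code):

```python
import numpy as np

def fuse_point_clouds(ego_points, other_points, T_other_to_ego):
    """Raw-data-level fusion sketch: transform a cooperating vehicle's
    LiDAR points into the ego frame, then concatenate with ego points.

    ego_points, other_points: (N, 3) arrays of xyz coordinates.
    T_other_to_ego: (4, 4) homogeneous transform from the other
    vehicle's sensor frame to the ego frame.
    """
    # Lift xyz points to homogeneous coordinates (N, 4).
    homo = np.hstack([other_points, np.ones((len(other_points), 1))])
    # Apply the rigid transform and drop the homogeneous coordinate.
    aligned = (T_other_to_ego @ homo.T).T[:, :3]
    # The detector then runs on the combined, aligned cloud.
    return np.vstack([ego_points, aligned])
```

A detector such as the paper's PointPillar-backboned model would consume the fused cloud in place of a single vehicle's scan; the bandwidth concern in the abstract arises because `other_points` must cross the vehicular network.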

Results

Task                | Dataset | Metric            | Value | Model
Object Detection    | OPV2V   | AP@0.7@CulverCity | 0.696 | Cooper (PointPillar backbone)
Object Detection    | OPV2V   | AP@0.7@Default    | 0.800 | Cooper (PointPillar backbone)
3D Object Detection | OPV2V   | AP@0.7@CulverCity | 0.696 | Cooper (PointPillar backbone)
3D Object Detection | OPV2V   | AP@0.7@Default    | 0.800 | Cooper (PointPillar backbone)
2D Object Detection | OPV2V   | AP@0.7@CulverCity | 0.696 | Cooper (PointPillar backbone)
2D Object Detection | OPV2V   | AP@0.7@Default    | 0.800 | Cooper (PointPillar backbone)
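The AP@0.7 metric in the table counts a detection as a true positive only when its overlap (IoU) with a ground-truth box reaches 0.7. As a rough illustration of that matching criterion, here is a minimal axis-aligned 2D IoU check (a simplification; 3D detection benchmarks use oriented 3D or bird's-eye-view boxes):

```python
def iou_2d(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle; width/height clamp to zero when disjoint.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred_box, gt_box, threshold=0.7):
    """Matching rule behind AP@0.7: overlap must reach the threshold."""
    return iou_2d(pred_box, gt_box) >= threshold
```

Average precision is then the area under the precision-recall curve obtained by sweeping the detector's confidence threshold with this matching rule.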

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
RS-TinyNet: Stage-wise Feature Fusion Network for Detecting Tiny Objects in Remote Sensing Images (2025-07-17)
Decoupled PROB: Decoupled Query Initialization Tasks and Objectness-Class Learning for Open World Object Detection (2025-07-17)
Dual LiDAR-Based Traffic Movement Count Estimation at a Signalized Intersection: Deployment, Data Collection, and Preliminary Analysis (2025-07-17)
Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
Fast and Accurate Collision Probability Estimation for Autonomous Vehicles using Adaptive Sigma-Point Sampling (2025-07-08)
ECORE: Energy-Conscious Optimized Routing for Deep Learning Models at the Edge (2025-07-08)