Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


F-Cooper: Feature based Cooperative Perception for Autonomous Vehicle Edge Computing System Using 3D Point Clouds

Qi Chen

2019-09-13 · Autonomous Vehicles · Autonomous Driving · Real-Time Object Detection · Object Detection · 3D Object Detection

Abstract

Autonomous vehicles rely heavily on their sensors to perceive the surrounding environment; however, with the current state of technology, the data a vehicle uses is confined to that from its own sensors. Data sharing between vehicles and/or edge servers is limited by the available network bandwidth and the stringent real-time constraints of autonomous driving applications. To address these issues, we propose a point cloud feature based cooperative perception framework (F-Cooper) for connected autonomous vehicles to achieve better object detection precision. Not only is feature-based data sufficient for the training process; we also exploit the features' intrinsically small size to achieve real-time edge computing without running the risk of congesting the network. Our experimental results show that by fusing features we achieve better object detection, around 10% improvement for detection within 20 meters and 30% at longer distances, as well as faster edge computing with low communication delay, requiring as little as 71 milliseconds for certain feature selections. To the best of our knowledge, we are the first to introduce feature-level data fusion to connected autonomous vehicles for the purpose of enhancing object detection and making real-time edge computing on inter-vehicle data feasible.
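The core idea of feature-level fusion can be sketched as combining spatially aligned feature maps from the ego vehicle and a cooperating vehicle with an element-wise maximum, so only compact feature tensors (not raw point clouds) need to be transmitted. The sketch below is illustrative, not the paper's implementation: the 64×200×100 bird's-eye-view feature shape and the `maxout_fuse` name are assumptions for demonstration.

```python
import numpy as np

def maxout_fuse(feat_ego: np.ndarray, feat_coop: np.ndarray) -> np.ndarray:
    """Element-wise maxout fusion of two spatially aligned feature maps (C, H, W).

    Cells outside the cooperating vehicle's field of view are assumed to be
    zero in feat_coop, so the ego features pass through unchanged there.
    """
    assert feat_ego.shape == feat_coop.shape, "feature maps must be aligned"
    return np.maximum(feat_ego, feat_coop)

# Illustrative usage: 64-channel BEV features over a 200x100 grid (assumed shape).
rng = np.random.default_rng(0)
ego = rng.random((64, 200, 100), dtype=np.float32)
coop = np.zeros_like(ego)                      # received features, zero outside overlap
coop[:, 80:120, 40:60] = rng.random((64, 40, 20), dtype=np.float32)
fused = maxout_fuse(ego, coop)
```

Only the overlapping region can actually change under this scheme, which is why transmitting just the (small) feature tensor for the shared area keeps bandwidth low.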

Results

Task                | Dataset | Metric               | Value | Model
3D Object Detection | OPV2V   | AP@0.7 (Culver City) | 0.728 | F-Cooper (PointPillar backbone)
3D Object Detection | OPV2V   | AP@0.7 (Default)     | 0.79  | F-Cooper (PointPillar backbone)
3D Object Detection | V2XSet  | AP@0.5 (Noisy)       | 0.715 | F-Cooper
3D Object Detection | V2XSet  | AP@0.5 (Perfect)     | 0.84  | F-Cooper
3D Object Detection | V2XSet  | AP@0.7 (Noisy)       | 0.469 | F-Cooper
3D Object Detection | V2XSet  | AP@0.7 (Perfect)     | 0.68  | F-Cooper

Related Papers

GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models (2025-07-17)
Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
RS-TinyNet: Stage-wise Feature Fusion Network for Detecting Tiny Objects in Remote Sensing Images (2025-07-17)