Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation

Zhijian Liu, Haotian Tang, Alexander Amini, Xinyu Yang, Huizi Mao, Daniela Rus, Song Han

2022-05-26 · Sensor Fusion · Scene Segmentation · Autonomous Driving · 3D Multi-Object Tracking · 3D Object Detection · Object Detection

Paper · PDF · Code (official) · Code

Abstract

Multi-sensor fusion is essential for an accurate and reliable autonomous driving system. Recent approaches are based on point-level fusion: augmenting the LiDAR point cloud with camera features. However, the camera-to-LiDAR projection throws away the semantic density of camera features, hindering the effectiveness of such methods, especially for semantic-oriented tasks (such as 3D scene segmentation). In this paper, we break this deeply-rooted convention with BEVFusion, an efficient and generic multi-task multi-sensor fusion framework. It unifies multi-modal features in the shared bird's-eye view (BEV) representation space, which nicely preserves both geometric and semantic information. To achieve this, we diagnose and lift key efficiency bottlenecks in the view transformation with optimized BEV pooling, reducing latency by more than 40x. BEVFusion is fundamentally task-agnostic and seamlessly supports different 3D perception tasks with almost no architectural changes. It establishes the new state of the art on nuScenes, achieving 1.3% higher mAP and NDS on 3D object detection and 13.6% higher mIoU on BEV map segmentation, with 1.9x lower computation cost. Code to reproduce our results is available at https://github.com/mit-han-lab/bevfusion.
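The "optimized BEV pooling" the abstract credits with the 40x latency reduction amounts to scattering camera-frustum features into flat bird's-eye-view grid cells. A minimal sketch of that sum-scatter step, with all shapes and names (feats, ix, iy, bev) as illustrative assumptions rather than the paper's actual implementation:

```python
import numpy as np

# Assume N camera-frustum feature points, each carrying a C-dim feature
# and a precomputed (row, col) location on an H x W BEV grid.
rng = np.random.default_rng(0)
N, C, H, W = 1000, 8, 32, 32
feats = rng.standard_normal((N, C)).astype(np.float32)
ix = rng.integers(0, W, N)          # BEV column index per point
iy = rng.integers(0, H, N)          # BEV row index per point

# Flatten (row, col) into a single cell id, then sum-scatter features per cell.
cell = iy * W + ix                  # (N,)
bev = np.zeros((H * W, C), np.float32)
np.add.at(bev, cell, feats)         # accumulate every point falling in a cell
bev = bev.reshape(H, W, C)          # dense BEV feature map
```

The key property is that grid associations are fixed by camera geometry, so the scatter can be precomputed and batched; the loop-free `np.add.at` above stands in for that idea.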

Results

Task: Object Detection / 3D Object Detection · Dataset: nuScenes · Model: BEVFusion-e

Metric  Value
NDS     0.76
mAP     0.75
mATE    0.24
mASE    0.23
mAOE    0.32
mAVE    0.22
mAAE    0.13
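The NDS value in the table is not independent of the other metrics: nuScenes defines the detection score as a weighted combination of mAP and the five true-positive error metrics, NDS = (5·mAP + Σ (1 − min(1, mTP))) / 10. A quick sanity check against the values above:

```python
# Recompute nuScenes NDS from the table's metrics.
mAP = 0.75
tp_errors = {"mATE": 0.24, "mASE": 0.23, "mAOE": 0.32, "mAVE": 0.22, "mAAE": 0.13}

# NDS = (5*mAP + sum of (1 - min(1, error))) / 10 per the nuScenes definition.
nds = (5 * mAP + sum(1 - min(1.0, e) for e in tp_errors.values())) / 10
print(round(nds, 2))  # 0.76, matching the NDS reported above
```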

Related Papers

GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models (2025-07-17)
Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
Dual LiDAR-Based Traffic Movement Count Estimation at a Signalized Intersection: Deployment, Data Collection, and Preliminary Analysis (2025-07-17)
A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)