Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer

Hao Shao, Letian Wang, RuoBing Chen, Hongsheng Li, Yu Liu

Published 2022-07-28 · Tasks: Autonomous Vehicles, Sensor Fusion, CARLA longest6, Scene Understanding, Event Detection, Autonomous Driving
Paper · PDF · Code (official)

Abstract

Large-scale deployment of autonomous vehicles has been continually delayed due to safety concerns. On the one hand, comprehensive scene understanding is indispensable, a lack of which would result in vulnerability to rare but complex traffic situations, such as the sudden emergence of unknown objects. However, reasoning from a global context requires access to sensors of multiple types and adequate fusion of multi-modal sensor signals, which is difficult to achieve. On the other hand, the lack of interpretability in learning models also hampers safety, as failure causes cannot be verified. In this paper, we propose a safety-enhanced autonomous driving framework, named Interpretable Sensor Fusion Transformer (InterFuser), to fully process and fuse information from multi-modal multi-view sensors for achieving comprehensive scene understanding and adversarial event detection. In addition, our framework generates intermediate interpretable features, which provide richer semantics and are exploited to better constrain actions to lie within the safe set. We conducted extensive experiments on CARLA benchmarks, where our model outperforms prior methods, ranking first on the public CARLA Leaderboard. Our code will be made available at https://github.com/opendilab/InterFuser.
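The abstract's "constrain actions to lie within the safe set" idea can be sketched with a toy example: a planner's desired speed is clamped using an intermediate interpretable output, here a predicted object-density map of the area ahead of the ego vehicle. The grid geometry, threshold, and braking model below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def max_safe_speed(density_map, cell_len_m=1.0, brake_decel=6.0, threshold=0.5):
    """Conservative speed bound (m/s) from the nearest occupied cell ahead.

    density_map: 2-D array; axis 0 indexes distance ahead of the ego vehicle.
    A cell above `threshold` is treated as an obstacle (assumed convention).
    """
    rows_ahead = np.where((density_map > threshold).any(axis=1))[0]
    if len(rows_ahead) == 0:
        return float("inf")  # nothing detected ahead of the vehicle
    clearance = rows_ahead[0] * cell_len_m  # distance to nearest obstacle row
    # v^2 = 2 * a * d  ->  maximum speed that can stop within the clearance
    return (2.0 * brake_decel * clearance) ** 0.5

def constrain_action(desired_speed, density_map):
    """Project the planner's desired speed onto the safe interval [0, v_max]."""
    return min(desired_speed, max_safe_speed(density_map))
```

With an obstacle 8 m ahead and a 6 m/s² braking assumption, a desired speed of 12 m/s would be clamped to roughly 9.8 m/s; with a clear map, the desired speed passes through unchanged.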

Results

Task                | Dataset           | Metric             | Value | Model
Autonomous Vehicles | CARLA Leaderboard | Driving Score      | 76.18 | InterFuser
Autonomous Vehicles | CARLA Leaderboard | Infraction Penalty | 0.84  | InterFuser
Autonomous Vehicles | CARLA Leaderboard | Route Completion   | 88.23 | InterFuser
Autonomous Vehicles | CARLA Leaderboard | Driving Score      | 34.15 | InterFuser (Reproduced)
Autonomous Vehicles | CARLA Leaderboard | Infraction Penalty | 0.45  | InterFuser (Reproduced)
Autonomous Vehicles | CARLA Leaderboard | Route Completion   | 74.79 | InterFuser (Reproduced)
Autonomous Driving  | CARLA Leaderboard | Driving Score      | 76.18 | InterFuser
Autonomous Driving  | CARLA Leaderboard | Infraction Penalty | 0.84  | InterFuser
Autonomous Driving  | CARLA Leaderboard | Route Completion   | 88.23 | InterFuser
Autonomous Driving  | CARLA Leaderboard | Driving Score      | 34.15 | InterFuser (Reproduced)
Autonomous Driving  | CARLA Leaderboard | Infraction Penalty | 0.45  | InterFuser (Reproduced)
Autonomous Driving  | CARLA Leaderboard | Route Completion   | 74.79 | InterFuser (Reproduced)
CARLA longest6      | CARLA             | Driving Score      | 47    | InterFuser
CARLA longest6      | CARLA             | Infraction Score   | 0.63  | InterFuser
CARLA longest6      | CARLA             | Route Completion   | 74    | InterFuser
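For context on how these metrics relate: on the CARLA benchmark, each route's driving score is the product of its route completion and its multiplicative infraction penalty, and the reported figures are averages over routes. Because the mean of products differs from the product of means, the aggregate numbers above need not multiply exactly (e.g. 88.23 × 0.84 ≈ 74.1, not 76.18). A minimal sketch, assuming this per-route definition:

```python
def route_driving_score(route_completion_pct, infraction_penalty):
    """Per-route driving score: route completion (%) scaled by the
    multiplicative infraction penalty in [0, 1]."""
    return route_completion_pct * infraction_penalty

def leaderboard_scores(per_route):
    """Aggregate metrics are means over routes, so mean(DS) generally
    differs from mean(RC) * mean(IP)."""
    n = len(per_route)
    ds = sum(route_driving_score(rc, ip) for rc, ip in per_route) / n
    rc = sum(rc for rc, _ in per_route) / n
    ip = sum(ip for _, ip in per_route) / n
    return ds, rc, ip
```

For example, two routes at (100%, 1.0) and (50%, 0.5) give a mean driving score of 62.5, while mean RC × mean IP is 75 × 0.75 = 56.25.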

Related Papers

GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
Advancing Complex Wide-Area Scene Understanding with Hierarchical Coresets Selection (2025-07-17)
Argus: Leveraging Multiview Images for Improved 3-D Scene Understanding With Large Language Models (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models (2025-07-17)
Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)