Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Rethinking the Open-Loop Evaluation of End-to-End Autonomous Driving in nuScenes

Jiang-Tian Zhai, Ze Feng, Jinhao Du, Yongqiang Mao, Jiang-Jiang Liu, Zichang Tan, Yifu Zhang, Xiaoqing Ye, Jingdong Wang

2023-05-17 · Trajectory Planning · Autonomous Driving
Paper · PDF · Code (official)

Abstract

Modern autonomous driving systems are typically divided into three main tasks: perception, prediction, and planning. The planning task involves predicting the trajectory of the ego vehicle based on inputs from both internal intention and the external environment, and manipulating the vehicle accordingly. Most existing works evaluate their performance on the nuScenes dataset using the L2 error and collision rate between the predicted trajectories and the ground truth. In this paper, we re-evaluate these existing evaluation metrics and explore whether they accurately measure the superiority of different methods. Specifically, we design an MLP-based method that takes raw sensor data (e.g., past trajectory, velocity, etc.) as input and directly outputs the future trajectory of the ego vehicle, without using any perception or prediction information such as camera images or LiDAR. Our simple method achieves end-to-end planning performance on the nuScenes dataset similar to that of other perception-based methods, reducing the average L2 error by about 20%. Meanwhile, the perception-based methods retain an advantage in terms of collision rate. We further conduct an in-depth analysis and provide new insights into the factors that are critical for the success of the planning task on the nuScenes dataset. Our observations also indicate that the current open-loop evaluation scheme of end-to-end autonomous driving in nuScenes needs to be rethought. Code is available at https://github.com/E2E-AD/AD-MLP.
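The core idea — a plain MLP that maps the ego vehicle's recent states directly to future waypoints, with no camera or LiDAR input — can be sketched as a simple forward pass. This is a minimal numpy illustration, not the paper's implementation: the feature layout (past poses, velocity, acceleration, a high-level command) and the layer dimensions here are assumptions. The paper plans 3 s ahead, which at 2 Hz corresponds to 6 (x, y) waypoints.

```python
import numpy as np

def mlp_planner(ego_state, weights, horizon=6):
    """Toy forward pass of an AD-MLP-style planner (illustrative only).

    ego_state: flat feature vector of past ego motion plus a high-level
    command; the exact feature set is an assumption, not the paper's spec.
    weights: list of (W, b) pairs defining the MLP layers.
    Returns `horizon` future (x, y) waypoints.
    """
    h = ego_state
    for W, b in weights[:-1]:
        h = np.maximum(W @ h + b, 0.0)   # hidden layers with ReLU
    W, b = weights[-1]
    out = W @ h + b                      # linear regression head
    return out.reshape(horizon, 2)       # one (x, y) offset per future step

# Usage with randomly initialised weights (21 assumed input features,
# two hidden layers, 6 waypoints * 2 coordinates = 12 outputs):
rng = np.random.default_rng(0)
dims = [21, 64, 64, 12]
weights = [(0.1 * rng.normal(size=(o, i)), np.zeros(o))
           for i, o in zip(dims[:-1], dims[1:])]
traj = mlp_planner(rng.normal(size=21), weights)  # shape (6, 2)
```

The point of the paper is precisely that a head this simple, driven only by ego history, is competitive under the open-loop L2 metric — which is why the metric itself is being questioned.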

Results

Task: Trajectory Planning · Dataset: nuScenes

| Model                   | L2-1s (m) | L2-2s (m) | L2-3s (m) | L2-Avg (m) | Collision-1s (%) | Collision-2s (%) | Collision-3s (%) | Collision-Avg (%) |
|-------------------------|-----------|-----------|-----------|------------|------------------|------------------|------------------|-------------------|
| AD-MLP                  | 0.20      | 0.26      | 0.41      | 0.29       | 0.17             | 0.18             | 0.24             | 0.19              |
| VAD-Base [jiang2023vad] | 0.17      | 0.34      | 0.60      | 0.37       | 0.07             | 0.10             | 0.24             | 0.14              |
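The per-horizon L2 numbers above measure the Euclidean distance between the predicted and ground-truth ego positions at a given future time. Benchmarks differ on whether they report the error at the horizon or averaged over all steps up to it; this sketch shows the single-step convention, and the function name is mine, not from the paper's codebase.

```python
import numpy as np

def l2_at_horizon(pred, gt, step):
    """L2 error (metres) between predicted and ground-truth ego positions.

    pred, gt: (T, 2) arrays of future (x, y) waypoints in the same frame.
    step: index of the future timestep to evaluate (e.g. step 1 for 1 s
    ahead if waypoints are spaced 1 s apart).
    """
    return float(np.linalg.norm(pred[step] - gt[step]))

# Usage: prediction drifts 1 m laterally by the second waypoint.
pred = np.array([[0.0, 0.0], [1.0, 0.0]])
gt   = np.array([[0.0, 0.0], [1.0, 1.0]])
err = l2_at_horizon(pred, gt, 1)  # 1.0
```

The "L2-Avg" column then averages these distances over the evaluated horizons; the collision columns instead report the fraction of predicted trajectories that would overlap an obstacle, which is why a method can win on L2 yet lose on collision rate, as AD-MLP does against VAD-Base.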

Related Papers

GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models (2025-07-17)
Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
Safeguarding Federated Learning-based Road Condition Classification (2025-07-16)
Towards Autonomous Riding: A Review of Perception, Planning, and Control in Intelligent Two-Wheelers (2025-07-16)