Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Wayformer: Motion Forecasting via Simple & Efficient Attention Networks

Nigamaa Nayakanti, Rami Al-Rfou, Aurick Zhou, Kratarth Goel, Khaled S. Refaat, Benjamin Sapp

2022-07-12 · Motion Forecasting · Autonomous Driving
Paper · PDF · Code

Abstract

Motion forecasting for autonomous driving is a challenging task because complex driving scenarios result in a heterogeneous mix of static and dynamic inputs. It is an open problem how best to represent and fuse information about road geometry, lane connectivity, time-varying traffic light state, and the history of a dynamic set of agents and their interactions into an effective encoding. To model this diverse set of input features, many approaches propose equally complex systems with a diverse set of modality-specific modules. This results in systems that are difficult to scale, extend, or tune in rigorous ways to trade off quality and efficiency. In this paper, we present Wayformer, a family of attention-based architectures for motion forecasting that are simple and homogeneous. Wayformer offers a compact model description consisting of an attention-based scene encoder and a decoder. In the scene encoder we study the choice of early, late, and hierarchical fusion of the input modalities. For each fusion type we explore strategies to trade off efficiency and quality via factorized attention or latent query attention. We show that early fusion, despite its simplicity of construction, is not only modality agnostic but also achieves state-of-the-art results on both the Waymo Open Motion Dataset (WOMD) and Argoverse leaderboards, demonstrating the effectiveness of our design philosophy.
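The early-fusion scene encoder described in the abstract can be sketched as follows. This is an illustrative sketch, not the paper's implementation: all shapes, token counts, and the random projection weights are hypothetical stand-ins for learned parameters. It shows the two ideas the abstract names — early fusion (concatenate all modality tokens into one sequence before attention) and latent query attention (a small set of learned queries cross-attends to the long fused sequence, giving a fixed-size encoding at linear rather than quadratic cost).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def latent_query_attention(tokens, latents, d_model):
    """Cross-attend a small set of latent queries to the (potentially long)
    fused token sequence. Cost is O(num_latents * num_tokens) instead of the
    O(num_tokens^2) of full self-attention over the fused sequence."""
    # Random stand-ins for learned projection matrices.
    Wq = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)
    Wk = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)
    Wv = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)
    q, k, v = latents @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_model))   # (num_latents, num_tokens)
    return attn @ v                              # (num_latents, d_model)

d_model = 32
# Early fusion: each modality is already projected to d_model, then all
# tokens are concatenated into one sequence before any attention is applied.
roadgraph = rng.normal(size=(128, d_model))  # static road geometry tokens
traffic   = rng.normal(size=(16, d_model))   # traffic-light state tokens
agents    = rng.normal(size=(64, d_model))   # agent history tokens
fused = np.concatenate([roadgraph, traffic, agents], axis=0)  # (208, d_model)

latents = rng.normal(size=(8, d_model))      # 8 hypothetical latent queries
out = latent_query_attention(fused, latents, d_model)
print(out.shape)  # (8, 32): fixed-size scene encoding regardless of input length
```

The point of the latent bottleneck is that the encoder's output size is independent of how many road, light, and agent tokens a scene produces, which is what makes a single homogeneous architecture practical across heterogeneous inputs.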

Results

All results are for the Wayformer model on the Argoverse CVPR 2020 benchmark; identical numbers are reported under the Autonomous Vehicles, Motion Forecasting, and Autonomous Driving task categories.

Metric             | Value
-------------------|-------
DAC (K=6)          | 0.9893
MR (K=1)           | 0.5716
MR (K=6)           | 0.1186
brier-minFDE (K=6) | 1.7408
minADE (K=1)       | 1.636
minADE (K=6)       | 0.7676
minFDE (K=1)       | 3.6559
minFDE (K=6)       | 1.1616
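The table's metrics are standard trajectory-forecasting measures: over K predicted candidate trajectories, minADE is the smallest average pointwise displacement from the ground truth, and minFDE is the smallest final-position displacement. A minimal sketch of both (with hypothetical trajectory data; the 2.0 m miss-rate threshold mentioned in the comment is the usual Argoverse convention):

```python
import numpy as np

def min_ade_fde(pred, gt):
    """pred: (K, T, 2) candidate trajectories; gt: (T, 2) ground truth.
    Returns (minADE, minFDE) over the K candidates."""
    ade = np.linalg.norm(pred - gt, axis=-1).mean(axis=-1)  # (K,) mean displacement
    fde = np.linalg.norm(pred[:, -1] - gt[-1], axis=-1)     # (K,) final displacement
    return ade.min(), fde.min()

# Hypothetical example: a straight ground-truth path and K=3 candidates
# offset laterally by 0, 1, and 5 metres.
T = 30
gt = np.stack([np.linspace(0.0, 30.0, T), np.zeros(T)], axis=-1)  # (T, 2)
pred = np.stack([gt + np.array([0.0, off]) for off in (0.0, 1.0, 5.0)])  # (3, T, 2)

min_ade, min_fde = min_ade_fde(pred, gt)
print(min_ade, min_fde)  # 0.0 0.0 — the first candidate matches exactly
# Miss rate (MR) is then the fraction of scenes where minFDE exceeds a
# threshold, conventionally 2.0 m on Argoverse: here min_fde > 2.0 is False.
```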

Related Papers

GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models (2025-07-17)
Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
Safeguarding Federated Learning-based Road Condition Classification (2025-07-16)
Towards Autonomous Riding: A Review of Perception, Planning, and Control in Intelligent Two-Wheelers (2025-07-16)