Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


R-Pred: Two-Stage Motion Prediction Via Tube-Query Attention-Based Trajectory Refinement

Sehwan Choi, Jungho Kim, Junyong Yun, Jun Won Choi

Published: 2022-11-16 · ICCV 2023
Tasks: Motion Planning · Motion Prediction · Motion Forecasting
Links: Paper · PDF

Abstract

Predicting the future motion of dynamic agents is of paramount importance to ensuring safety and assessing risks in motion planning for autonomous robots. In this study, we propose a two-stage motion prediction method, called R-Pred, designed to effectively utilize both scene and interaction context using a cascade of the initial trajectory proposal and trajectory refinement networks. The initial trajectory proposal network produces M trajectory proposals corresponding to the M modes of the future trajectory distribution. The trajectory refinement network enhances each of the M proposals using 1) tube-query scene attention (TQSA) and 2) proposal-level interaction attention (PIA) mechanisms. TQSA uses tube-queries to aggregate local scene context features pooled from proximity around trajectory proposals of interest. PIA further enhances the trajectory proposals by modeling inter-agent interactions using a group of trajectory proposals selected by their distances from neighboring agents. Our experiments conducted on Argoverse and nuScenes datasets demonstrate that the proposed refinement network provides significant performance improvements compared to the single-stage baseline and that R-Pred achieves state-of-the-art performance in some categories of the benchmarks.
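The abstract describes a cascade of two stages: a proposal network that emits M candidate trajectories (one per predicted mode), followed by a refinement network that improves each proposal. The sketch below illustrates only that control flow; the function bodies are illustrative stand-ins, not the paper's networks — the proposal stage here extrapolates constant velocity over a small fan of headings, and the "refinement" stage is a placeholder smoothing step where R-Pred applies TQSA and PIA attention.

```python
import math

def propose_trajectories(history, num_modes=6, horizon=30):
    """Stage 1: produce M trajectory proposals, one per future-motion mode.

    Stand-in logic: constant-velocity extrapolation with a small heading
    offset per mode (a hypothetical choice, not the paper's network).
    """
    (x0, y0), (x1, y1) = history[-2], history[-1]
    vx, vy = x1 - x0, y1 - y0
    speed = math.hypot(vx, vy)
    base = math.atan2(vy, vx)
    proposals = []
    for m in range(num_modes):
        theta = base + (m - (num_modes - 1) / 2) * 0.1
        traj = [(x1 + speed * t * math.cos(theta),
                 y1 + speed * t * math.sin(theta))
                for t in range(1, horizon + 1)]
        proposals.append(traj)
    return proposals

def refine_proposals(proposals):
    """Stage 2: refine each of the M proposals.

    R-Pred does this with tube-query scene attention (TQSA) and
    proposal-level interaction attention (PIA); here a simple waypoint
    smoothing stands in for the refinement network.
    """
    refined = []
    for traj in proposals:
        smoothed = [traj[0]]
        for prev, cur in zip(traj, traj[1:]):
            smoothed.append(((prev[0] + cur[0]) / 2,
                             (prev[1] + cur[1]) / 2))
        refined.append(smoothed)
    return refined

# Two-stage cascade: proposals first, then per-proposal refinement.
history = [(0.0, 0.0), (1.0, 0.0)]
proposals = propose_trajectories(history, num_modes=6, horizon=30)
refined = refine_proposals(proposals)
```

The key structural point the paper argues is that the second stage can condition on the concrete proposals (pooling scene context along each trajectory's "tube" and modeling interactions between nearby agents' proposals), which a single-stage predictor cannot do.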

Results

The same R-Pred results on Argoverse (CVPR 2020) are listed under three task categories — Autonomous Vehicles, Motion Forecasting, and Autonomous Driving — with identical values, shown once below.

Dataset: Argoverse CVPR 2020 · Model: R-Pred

| Metric             | Value  |
|--------------------|--------|
| DAC (K=6)          | 0.992  |
| MR (K=1)           | 0.5344 |
| MR (K=6)           | 0.1165 |
| brier-minFDE (K=6) | 1.7765 |
| minADE (K=1)       | 1.5843 |
| minADE (K=6)       | 0.7629 |
| minFDE (K=1)       | 3.4718 |
| minFDE (K=6)       | 1.1236 |
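For readers unfamiliar with the table's metrics: minADE and minFDE score the best of the K predicted trajectories against the ground truth (average and final displacement, respectively), and miss rate (MR) is the fraction of cases whose best final-position error exceeds a threshold (2.0 m on Argoverse). The sketch below follows these standard definitions; it is not code from the paper.

```python
import math

def ade(pred, gt):
    """Average displacement error of one trajectory vs. ground truth."""
    return sum(math.dist(p, g) for p, g in zip(pred, gt)) / len(gt)

def fde(pred, gt):
    """Final displacement error: distance between the two endpoints."""
    return math.dist(pred[-1], gt[-1])

def min_ade(preds, gt):
    """minADE over K proposals: best average displacement error."""
    return min(ade(p, gt) for p in preds)

def min_fde(preds, gt):
    """minFDE over K proposals: best final displacement error."""
    return min(fde(p, gt) for p in preds)

def miss_rate(pred_sets, gts, threshold=2.0):
    """MR: fraction of samples whose best endpoint misses by > threshold."""
    misses = sum(min_fde(preds, gt) > threshold
                 for preds, gt in zip(pred_sets, gts))
    return misses / len(gts)
```

This is why the K=6 numbers in the table are much better than the K=1 numbers: with six proposals, only the closest one is scored.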

Related Papers

Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
ILNet: Trajectory Prediction with Inverse Learning Attention for Enhancing Intention Capture (2025-07-09)
Stochastic Human Motion Prediction with Memory of Action Transition and Action Characteristic (2025-07-05)
Temporal Continual Learning with Prior Compensation for Human Motion Prediction (2025-07-05)
Epona: Autoregressive Diffusion World Model for Autonomous Driving (2025-06-30)
GoIRL: Graph-Oriented Inverse Reinforcement Learning for Multimodal Trajectory Prediction (2025-06-26)
Ark: An Open-source Python-based Framework for Robot Learning (2025-06-24)
Drive-R1: Bridging Reasoning and Planning in VLMs for Autonomous Driving with Reinforcement Learning (2025-06-23)