Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


DSTIGCN: Deformable Spatial-Temporal Interaction Graph Convolution Network for Pedestrian Trajectory Prediction

Wangxing Chen, Haifeng Sang, Jinyu Wang, Zishan Zhao

2025-01-16 · IEEE Transactions on Intelligent Transportation Systems, 2025
Tasks: Pedestrian Trajectory Prediction, Autonomous Driving, Prediction, Trajectory Prediction

Abstract

Accurate and reliable pedestrian trajectory prediction can reduce the risk of human-vehicle collisions and anticipate accidents in advance, which is crucial for autonomous driving and intelligent monitoring. Previous trajectory prediction methods face two common problems: (1) they ignore the joint modeling of pedestrians' complex spatial-temporal interactions, and (2) they suffer from the long-tail effect, which prevents accurate capture of the diversity of pedestrians' future movements. To address these problems, we propose a Deformable Spatial-Temporal Interaction Graph Convolution Network (DSTIGCN). First, we construct a spatial graph and employ an attention mechanism to preliminarily describe the spatial interactions of pedestrians at each moment. To solve problem (1), we design a deformable spatial-temporal interaction module. The module autonomously learns the spatial-temporal interaction relationships of pedestrians through the offsets of multiple asymmetric deformable convolution kernels in both the spatial and temporal dimensions, thereby achieving joint modeling of complex spatial-temporal interactions. Next, we obtain trajectory representation features through graph convolution and then predict the two-dimensional Gaussian distribution parameters of future trajectories using the Temporal Attention-Gated Temporal Convolution Network (TAG-TCN). To address problem (2), we introduce Latin hypercube sampling to sample the two-dimensional Gaussian distribution of future trajectories, thereby improving the multi-modal prediction performance of the model under a limited number of samples. Experiments on the ETH, UCY, and SDD datasets verify that our method achieves high-precision prediction of future pedestrian trajectories with a limited parameter count.
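The sampling step described above can be illustrated concretely. The paper's exact pipeline is not reproduced here; the sketch below only shows the general technique of Latin hypercube sampling from a predicted two-dimensional Gaussian — stratified uniforms are mapped through the normal inverse CDF and then correlated via a Cholesky factor. The function name and parameterization (mean, per-axis standard deviations, correlation) are illustrative assumptions, not the authors' API.

```python
import numpy as np
from scipy.stats import norm, qmc

def lhs_gaussian_samples(mu, sigma, rho, n_samples, seed=0):
    """Draw n_samples from a 2-D Gaussian N(mu, cov) via Latin hypercube
    sampling, where cov is built from per-axis std devs and correlation rho.
    Hypothetical helper for illustration; not the paper's implementation."""
    # Stratified uniform points in [0, 1)^2 (one point per stratum and axis)
    sampler = qmc.LatinHypercube(d=2, seed=seed)
    u = sampler.random(n_samples)
    # Map uniforms to independent standard normals via the inverse CDF
    z = norm.ppf(u)
    # Impose the target covariance through its Cholesky factor
    cov = np.array([[sigma[0] ** 2,          rho * sigma[0] * sigma[1]],
                    [rho * sigma[0] * sigma[1], sigma[1] ** 2]])
    L = np.linalg.cholesky(cov)
    return np.asarray(mu) + z @ L.T

# Example: 20 stratified samples from a correlated 2-D Gaussian
samples = lhs_gaussian_samples(mu=(0.0, 0.0), sigma=(1.0, 0.5),
                               rho=0.3, n_samples=20)
```

Compared with plain Monte Carlo draws, the stratification guarantees coverage of all quantile bands of the distribution, which is why a small sample budget can still capture multi-modal spread in the predicted trajectories.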

Related Papers

Multi-Strategy Improved Snake Optimizer Accelerated CNN-LSTM-Attention-Adaboost for Trajectory Prediction (2025-07-21)
GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models (2025-07-17)
Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
LaViPlan : Language-Guided Visual Path Planning with RLVR (2025-07-17)
Safeguarding Federated Learning-based Road Condition Classification (2025-07-16)