Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Efficient Large-Scale Traffic Forecasting with Transformers: A Spatial Data Management Perspective

Yuchen Fang, Yuxuan Liang, Bo Hui, Zezhi Shao, Liwei Deng, Xu Liu, Xinke Jiang, Kai Zheng

2024-12-13 · Traffic Prediction · Management

Paper · PDF · Code (official)

Abstract

Road traffic forecasting is crucial in real-world intelligent transportation scenarios such as traffic dispatching and path planning for city management and personal travel. Spatio-temporal graph neural networks (STGNNs) are the mainstream solution for this task. However, the quadratic complexity of STGNNs that rely on dynamic spatial modeling has become a bottleneck on large-scale traffic data. From a spatial data management perspective, we present a novel Transformer framework, PatchSTG, that efficiently and dynamically models spatial dependencies for large-scale traffic forecasting with interpretability and fidelity. Specifically, we design a novel irregular spatial patching scheme to reduce the number of points involved in the Transformer's dynamic attention computation. Irregular spatial patching first uses a leaf K-dimensional tree (KDTree) to recursively partition the irregularly distributed traffic points into leaf nodes of small capacity, and then merges leaf nodes belonging to the same subtree into equal-occupancy, non-overlapping patches through padding and backtracking. On the patched data, depth attention and breadth attention are applied alternately in the encoder to dynamically learn local and global spatial knowledge: depth attention operates over the points within a patch, and breadth attention over points sharing the same index across patches. Experimental results on four real-world large-scale traffic datasets show that PatchSTG improves training speed by up to $10\times$ and memory utilization by up to $4\times$ while achieving state-of-the-art performance.
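The patching idea in the abstract can be sketched in a few lines. The snippet below is a hedged illustration, not the authors' implementation: it recursively median-splits 2-D sensor coordinates along the axis of wider spread (a leaf KDTree partition), stops once a leaf holds at most `capacity` points, and pads every leaf to equal occupancy so the patches stack into a dense tensor. The function name `kdtree_patches`, the `capacity` parameter, and the edge-padding choice are assumptions for this sketch.

```python
import numpy as np

def kdtree_patches(points, capacity):
    """Partition 2-D points into equal-occupancy patches via a leaf KDTree.

    Recursively splits point indices at the median along the coordinate
    axis with the wider spread until each leaf holds at most `capacity`
    points, then pads every leaf (by repeating its last index) so all
    patches have exactly `capacity` entries.
    """
    def split(idx):
        if len(idx) <= capacity:
            return [idx]
        pts = points[idx]
        axis = int(np.ptp(pts, axis=0).argmax())   # axis with wider spread
        order = idx[np.argsort(pts[:, axis])]      # sort along that axis
        mid = len(order) // 2
        return split(order[:mid]) + split(order[mid:])

    leaves = split(np.arange(len(points)))
    # Pad each leaf to equal occupancy so patches form a dense array.
    return np.stack(
        [np.pad(leaf, (0, capacity - len(leaf)), mode="edge") for leaf in leaves]
    )

rng = np.random.default_rng(0)
pts = rng.random((1000, 2))                 # irregularly distributed sensor coords
patches = kdtree_patches(pts, capacity=32)
print(patches.shape)                        # (num_patches, capacity) -> (32, 32)
```

With patches arranged as a `(num_patches, capacity)` index array, the two attention modes described in the abstract reduce to a choice of sequence axis: depth attention treats each row (points inside one patch) as a sequence, while breadth attention treats each column (points with the same index across patches) as a sequence, i.e. it attends over the transposed layout.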

Results

Task                 Dataset        Metric  Value  Model
Traffic Prediction   LargeST (CA)   MAE     17.35  PatchSTG
Traffic Prediction   LargeST (GBA)  MAE     19.5   PatchSTG
Traffic Prediction   LargeST (GLA)  MAE     18.96  PatchSTG
Traffic Prediction   LargeST (SD)   MAE     16.9   PatchSTG
