Abstract:
Traffic prediction is an important component in the construction of smart cities in the new era. Precise traffic flow prediction faces significant challenges due to spatial heterogeneity, dynamic correlations, and uncertainty. Most existing methods learn from a single spatial or temporal perspective, or at best combine the two in a limited dual-perspective manner, which restricts their ability to capture complex spatio-temporal relationships. In this paper, we propose a novel Multi-view Spatio-Temporal Dynamic Fusion Graph Convolutional Recurrent Network (MSTDFGRN) to address these limitations. The core idea is to learn dynamic spatial dependencies alongside both short- and long-term temporal patterns through multi-view learning. First, we introduce a multi-view spatial convolution module that dynamically fuses static and adaptive graphs in multiple subspaces to learn intrinsic and potential spatial dependencies of nodes. Simultaneously, in the temporal view, we design both short-range and long-range recurrent networks to aggregate spatial domain knowledge of nodes at multiple granularities and capture forward and backward temporal dependencies. Furthermore, we design a spatio-temporal attention module that applies an attention mechanism to each node, capturing global spatio-temporal dependencies. Comprehensive experiments on four real traffic flow datasets demonstrate MSTDFGRN's excellent predictive accuracy. Specifically, compared to the Spatial-Temporal Graph Attention Gated Recurrent Transformer Network model, our method improves MAE by 4.69% on the PeMS08 dataset.
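The following is a minimal, hypothetical sketch (not the authors' released code) of one idea the abstract describes: fusing a static graph with an adaptive graph learned from node embeddings before a graph-convolution step. The class name, fusion gate, and tensor shapes are assumptions made for illustration only.

```python
# Illustrative sketch of static/adaptive graph fusion; details are assumed,
# not taken from the MSTDFGRN paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusedGraphConv(nn.Module):
    def __init__(self, num_nodes, in_dim, out_dim, embed_dim=16):
        super().__init__()
        # Learnable node embeddings define an adaptive adjacency matrix.
        self.node_emb = nn.Parameter(torch.randn(num_nodes, embed_dim))
        # Learnable gate balancing static vs. adaptive structure (assumed fusion rule).
        self.alpha = nn.Parameter(torch.tensor(0.5))
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, static_adj):
        # x: (batch, num_nodes, in_dim); static_adj: (num_nodes, num_nodes)
        adaptive_adj = F.softmax(F.relu(self.node_emb @ self.node_emb.T), dim=-1)
        gate = torch.sigmoid(self.alpha)
        fused_adj = gate * static_adj + (1.0 - gate) * adaptive_adj
        # One-hop propagation over the fused graph, then a linear transform.
        return self.linear(fused_adj @ x)


if __name__ == "__main__":
    n, f = 8, 4
    conv = FusedGraphConv(num_nodes=n, in_dim=f, out_dim=16)
    x = torch.randn(2, n, f)   # (batch, nodes, features)
    a = torch.eye(n)           # placeholder static adjacency
    print(conv(x, a).shape)    # torch.Size([2, 8, 16])
```

In a full spatio-temporal model, a layer like this would typically sit inside the recurrent and attention components mentioned in the abstract; here it only illustrates the graph-fusion step in isolation.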
Source:
COMPUTERS & ELECTRICAL ENGINEERING
ISSN: 0045-7906
Year: 2025
Volume: 123
Impact Factor: 4.000 (JCR@2023)
CAS Journal Grade: 3