STformer: A Noise-Aware Efficient Spatio-Temporal Transformer
Architecture for Traffic Forecasting
- URL: http://arxiv.org/abs/2112.02740v1
- Date: Mon, 6 Dec 2021 02:17:39 GMT
- Title: STformer: A Noise-Aware Efficient Spatio-Temporal Transformer
Architecture for Traffic Forecasting
- Authors: Yanjun Qin, Yuchen Fang, Haiyong Luo, Liang Zeng, Fang Zhao, Chenxing
Wang
- Abstract summary: We propose a novel noise-aware efficient spatio-temporal Transformer architecture for accurate traffic forecasting, named STformer.
STformer consists of two components: the noise-aware temporal self-attention (NATSA) and the graph-based sparse spatial self-attention (GBS3A).
Experiments on four real-world traffic datasets show that STformer outperforms state-of-the-art baselines with lower computational cost.
- Score: 7.230415327436048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traffic forecasting plays an indispensable role in the intelligent
transportation system, which makes daily travel more convenient and safer.
However, the dynamic evolution of spatio-temporal correlations makes accurate
traffic forecasting very difficult. Existing work mainly employs graph neural
networks (GNNs) and deep time series models (e.g., recurrent neural networks)
to capture complex spatio-temporal patterns in the dynamic traffic system. For
the spatial patterns, it is difficult for GNNs to extract global spatial
information, i.e., information from remote sensors in road networks. Although
self-attention can extract global spatial information as in previous work, it
comes at the cost of huge resource consumption. For the temporal
patterns, traffic data have not only easy-to-recognize daily and weekly trends
but also difficult-to-recognize short-term noise caused by unexpected events
(e.g., car accidents and thunderstorms). Prior traffic models struggle to
distinguish intricate temporal patterns in time series and thus find it hard to
capture accurate temporal dependence. To address the above issues, we propose a
novel noise-aware
efficient spatio-temporal Transformer architecture for accurate traffic
forecasting, named STformer. STformer consists of two components, which are the
noise-aware temporal self-attention (NATSA) and the graph-based sparse spatial
self-attention (GBS3A). NATSA separates the high-frequency and low-frequency
components of the time series, removing noise with a learnable filter and
capturing stable temporal dependence with temporal self-attention. GBS3A
replaces the full query in vanilla self-attention with the
graph-based sparse query to decrease the time and memory usage. Experiments on
four real-world traffic datasets show that STformer outperforms
state-of-the-art baselines with lower computational cost.
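The frequency decomposition behind NATSA can be illustrated with a minimal sketch: split a traffic series into a low-frequency (trend) and a high-frequency (noise) part in the spectral domain. This is an assumption-laden illustration, not the authors' code: the paper uses a learnable filter, which is replaced here by a fixed cutoff, and `split_frequencies` is a hypothetical helper name.

```python
import numpy as np

def split_frequencies(x, cutoff):
    """Split a 1-D series into low- and high-frequency components via rFFT.

    Hypothetical sketch of NATSA's decomposition: a hard frequency cutoff
    stands in for the paper's learnable filter."""
    spec = np.fft.rfft(x)
    low_spec = spec.copy()
    low_spec[cutoff:] = 0                 # keep only the first `cutoff` bins
    low = np.fft.irfft(low_spec, n=len(x))
    high = x - low                        # residual high-frequency "noise"
    return low, high

t = np.arange(288)                        # one day of 5-minute readings
x = np.sin(2 * np.pi * t / 288) + 0.1 * np.random.randn(288)
low, high = split_frequencies(x, cutoff=8)
```

Because `high` is defined as the residual, the decomposition is exact: `low + high` reconstructs the input, so the low-frequency part can be fed to temporal self-attention without losing information.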
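GBS3A's cost saving can be sketched similarly: instead of letting every node issue a query, only a graph-selected subset of k nodes does, shrinking the attention score matrix from N×N to k×N. The degree-based selection below is a hypothetical stand-in for the paper's graph-based query construction.

```python
import numpy as np

def sparse_query_attention(x, adj, k):
    """Self-attention where only k graph-selected nodes act as queries.

    Hypothetical sketch of GBS3A's idea: picking the k highest-degree nodes
    stands in for the paper's graph-based sparse query; the score matrix is
    (k, n) rather than (n, n)."""
    n, d = x.shape
    idx = np.argsort(adj.sum(axis=1))[-k:]      # k highest-degree nodes
    q = x[idx]                                  # (k, d) sparse queries
    scores = q @ x.T / np.sqrt(d)               # (k, n) score matrix
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)           # row-wise softmax
    out = x.copy()
    out[idx] = w @ x                            # only query nodes are updated
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 4))                # 10 sensors, 4 features each
adj = (rng.random((10, 10)) < 0.3).astype(float)
y = sparse_query_attention(x, adj, k=3)
```

With k fixed (or growing sublinearly in N), the time and memory of the score matrix drop from quadratic to roughly linear in the number of sensors, which matches the efficiency claim in the abstract.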
Related papers
- Improving Traffic Flow Predictions with SGCN-LSTM: A Hybrid Model for Spatial and Temporal Dependencies [55.2480439325792]
This paper introduces the Signal-Enhanced Graph Convolutional Network Long Short Term Memory (SGCN-LSTM) model for predicting traffic speeds across road networks.
Experiments on the PEMS-BAY road network traffic dataset demonstrate the SGCN-LSTM model's effectiveness.
arXiv Detail & Related papers (2024-11-01T00:37:00Z) - Navigating Spatio-Temporal Heterogeneity: A Graph Transformer Approach for Traffic Forecasting [13.309018047313801]
Traffic forecasting has emerged as a crucial research area in the development of smart cities.
Recent advancements in network modeling for spatio-temporal correlations are starting to see diminishing returns in performance.
To tackle these challenges, we introduce the Spatio-Temporal Graph Transformer (STGormer)
We design two straightforward yet effective spatial encoding methods based on the graph structure and integrate time position into the vanilla transformer to capture spatio-temporal traffic patterns.
arXiv Detail & Related papers (2024-08-20T13:18:21Z) - Dynamic Frequency Domain Graph Convolutional Network for Traffic
Forecasting [33.538633286142264]
Time shifts in traffic patterns and noise induced by random factors hinder data-driven spatial dependence modeling.
We propose a novel dynamic frequency domain graph convolution network (DFDGCN) to capture spatial dependencies.
Our model is effective and outperforms the baselines in experiments on four real-world datasets.
arXiv Detail & Related papers (2023-12-19T08:20:09Z) - Attention-based Spatial-Temporal Graph Convolutional Recurrent Networks
for Traffic Forecasting [12.568905377581647]
Traffic forecasting is one of the most fundamental problems in transportation science and artificial intelligence.
Existing methods cannot accurately model both long-term and short-term temporal correlations simultaneously.
We propose a novel spatial-temporal neural network framework, which consists of a graph convolutional recurrent module (GCRN) and a global attention module.
arXiv Detail & Related papers (2023-02-25T03:37:00Z) - PDFormer: Propagation Delay-Aware Dynamic Long-Range Transformer for
Traffic Flow Prediction [78.05103666987655]
Spatial-temporal Graph Neural Network (GNN) models have emerged as one of the most promising methods to solve this problem.
We propose a novel propagation delay-aware dynamic long-range transFormer, namely PDFormer, for accurate traffic flow prediction.
Our method can not only achieve state-of-the-art performance but also exhibit competitive computational efficiency.
arXiv Detail & Related papers (2023-01-19T08:42:40Z) - STLGRU: Spatio-Temporal Lightweight Graph GRU for Traffic Flow
Prediction [0.40964539027092917]
We propose STLGRU, a novel traffic forecasting model for predicting traffic flow accurately.
Our proposed STLGRU can effectively capture dynamic local and global spatial-temporal relations of traffic networks.
Our method can not only achieve state-of-the-art performance but also exhibit competitive computational efficiency.
arXiv Detail & Related papers (2022-12-08T20:24:59Z) - Correlating sparse sensing for large-scale traffic speed estimation: A
Laplacian-enhanced low-rank tensor kriging approach [76.45949280328838]
We propose a Laplacian-enhanced low-rank tensor completion (LETC) framework featuring both low-rankness and multi-temporal correlations for large-scale traffic speed kriging.
We then design an efficient solution algorithm via several effective numeric techniques to scale up the proposed model to network-wide kriging.
arXiv Detail & Related papers (2022-10-21T07:25:57Z) - STJLA: A Multi-Context Aware Spatio-Temporal Joint Linear Attention
Network for Traffic Forecasting [7.232141271583618]
We propose a novel deep learning model for traffic forecasting named Multi-Context Aware Spatio-Temporal Joint Linear Attention (STJLA).
STJLA applies linear attention to a spatio-temporal joint graph to capture global dependence between all spatio-temporal nodes efficiently.
Experiments on two real-world traffic datasets, England and PeMSD7, demonstrate that our STJLA achieves 9.83% and 3.08% improvements in MAE over state-of-the-art baselines.
arXiv Detail & Related papers (2021-12-04T06:39:18Z) - DS-Net: Dynamic Spatiotemporal Network for Video Salient Object
Detection [78.04869214450963]
We propose a novel dynamic spatiotemporal network (DS-Net) for more effective fusion of temporal and spatial information.
We show that the proposed method achieves superior performance compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2020-12-09T06:42:30Z) - Constructing Geographic and Long-term Temporal Graph for Traffic
Forecasting [88.5550074808201]
We propose Geographic and Long term Temporal Graph Convolutional Recurrent Neural Network (GLT-GCRNN) for traffic forecasting.
In this work, we propose a novel framework for traffic forecasting that learns the rich interactions between roads sharing similar geographic or long-term temporal patterns.
arXiv Detail & Related papers (2020-04-23T03:50:46Z) - Spatial-Temporal Transformer Networks for Traffic Flow Forecasting [74.76852538940746]
We propose a novel paradigm of Spatial-Temporal Transformer Networks (STTNs) to improve the accuracy of long-term traffic forecasting.
Specifically, we present a new variant of graph neural networks, named spatial transformer, by dynamically modeling directed spatial dependencies.
The proposed model enables fast and scalable training over long-range spatial-temporal dependencies.
arXiv Detail & Related papers (2020-01-09T10:21:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.