FPTN: Fast Pure Transformer Network for Traffic Flow Forecasting
- URL: http://arxiv.org/abs/2303.07685v1
- Date: Tue, 14 Mar 2023 07:55:50 GMT
- Title: FPTN: Fast Pure Transformer Network for Traffic Flow Forecasting
- Authors: Junhao Zhang, Junjie Tang, Juncheng Jin, Zehui Qu
- Abstract summary: Traffic flow forecasting is challenging due to the complex correlations in traffic flow data.
Existing Transformer-based methods treat traffic flow forecasting as multivariate time series (MTS) forecasting.
We propose a Fast Pure Transformer Network (FPTN) in this paper.
- Score: 6.485778915696199
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traffic flow forecasting is challenging due to the intricate spatio-temporal
correlations in traffic flow data. Existing Transformer-based methods usually
treat traffic flow forecasting as multivariate time series (MTS) forecasting.
However, a large number of sensors yields input vectors of dimension greater than 800,
which is difficult to process without information loss. In addition, these
methods design complex mechanisms to capture spatial dependencies in MTS,
resulting in slow forecasting speed. To solve the abovementioned problems, we
propose a Fast Pure Transformer Network (FPTN) in this paper. First, the
traffic flow data are divided into sequences along the sensor dimension instead
of the time dimension. Then, to adequately represent complex spatio-temporal
correlations, three types of embeddings are proposed to project these vectors
into a suitable vector space. After that, to capture these spatio-temporal
correlations simultaneously, we stack several Transformer encoder layers.
Extensive experiments are
conducted with 4 real-world datasets and 13 baselines, which demonstrate that
FPTN outperforms the state-of-the-art on two metrics. Meanwhile, FPTN's
computation time is less than a quarter of that of other state-of-the-art
Transformer-based models, and its demands on computing resources are
significantly reduced.
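The pipeline the abstract describes (sensor-axis tokens, three embeddings, a stacked Transformer encoder) can be summarized in a short PyTorch sketch. Everything concrete below, including the class name `FPTNSketch`, the layer sizes, and the form of the third embedding, is an assumption for illustration, not the authors' released code.

```python
# A minimal sketch of the sensor-axis tokenization described above, in PyTorch.
import torch
import torch.nn as nn

class FPTNSketch(nn.Module):  # hypothetical name, not the authors' code
    def __init__(self, num_sensors, in_steps, out_steps,
                 d_model=128, nhead=8, num_layers=4):
        super().__init__()
        # Each sensor's length-`in_steps` history is one token, so the sequence
        # runs over sensors rather than time steps.
        self.value_emb = nn.Linear(in_steps, d_model)         # embeds the series
        self.sensor_emb = nn.Embedding(num_sensors, d_model)  # identifies the sensor
        # Placeholder for a third embedding (the paper proposes three types;
        # their exact form is an assumption here).
        self.extra_emb = nn.Parameter(torch.zeros(1, num_sensors, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, out_steps)

    def forward(self, x):                      # x: (batch, num_sensors, in_steps)
        ids = torch.arange(x.size(1), device=x.device)
        h = self.value_emb(x) + self.sensor_emb(ids) + self.extra_emb
        h = self.encoder(h)                    # attention mixes sensors directly
        return self.head(h)                    # (batch, num_sensors, out_steps)
```

Because each token is one sensor's whole history, no single input vector grows with the sensor count, which is how the sensor-axis split sidesteps the 800-plus-dimensional MTS vectors mentioned above.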
Related papers
- Enhanced Traffic Flow Prediction with Multi-Segment Fusion Tensor Graph Convolutional Networks [9.44949364543965]
Existing traffic flow prediction models suffer from limitations in capturing the complex spatial-temporal dependencies within traffic networks.
This study proposes a multi-segment fusion tensor graph convolutional network (MS-FTGCN) for traffic flow prediction.
The results of experiments conducted on two traffic flow datasets demonstrate that the proposed MS-FTGCN outperforms the state-of-the-art models.
arXiv Detail & Related papers (2024-08-08T05:37:17Z)
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-Transformer design.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses the Continuous Wavelet Transform (CWT) to represent information in 2D tensor form (see the sketch after this entry).
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
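The "TC" stream above rests on a standard trick: the Continuous Wavelet Transform turns a 1-D signal into a 2-D time-frequency tensor that convolutional layers can consume. A minimal sketch, assuming the PyWavelets package; the scale range and Morlet wavelet are arbitrary choices, not the paper's:

```python
# Turn a 1-D signal into a 2-D time-frequency tensor via the CWT.
import numpy as np
import pywt

signal = np.random.randn(256)        # stand-in for a behavioral feature signal
scales = np.arange(1, 65)            # 64 scales -> 64 frequency rows
coeffs, freqs = pywt.cwt(signal, scales, "morl")  # Morlet mother wavelet
print(coeffs.shape)                  # (64, 256): a 2-D tensor a CNN can consume
```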
- Towards Long-Term Time-Series Forecasting: Feature, Pattern, and Distribution [57.71199089609161]
Long-term time-series forecasting (LTTF) has become a pressing demand in many applications, such as wind power supply planning.
Transformer models have been adopted for their high prediction capacity, though the self-attention mechanism is computationally expensive.
We propose an efficient Transformer-based model, named Conformer, which differentiates itself from existing methods for LTTF in three aspects.
arXiv Detail & Related papers (2023-01-05T13:59:29Z)
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency-domain learning through a single transform: transform once (T1); see the sketch after this entry.
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
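The "transform once" idea above can be sketched as: move to the frequency domain a single time, keep all learned mixing there, and invert only at the output, instead of paying a transform round trip in every layer. In this toy, the class name, sizes, and diagonal per-mode weights are assumptions, and a real model would interleave nonlinear mixing; it shows only the single-transform layout:

```python
# Transform once on the way in, learn in the spectral domain, invert once.
import torch
import torch.nn as nn

class TransformOnceSketch(nn.Module):  # hypothetical name, not the paper's code
    def __init__(self, n_points=256, n_layers=3):
        super().__init__()
        n_modes = n_points // 2 + 1    # rfft output length for a real signal
        self.weights = nn.ParameterList(
            [nn.Parameter(torch.randn(n_modes, dtype=torch.cfloat) * 0.02)
             for _ in range(n_layers)]
        )

    def forward(self, x):              # x: (batch, n_points), real-valued
        z = torch.fft.rfft(x, dim=-1)  # single forward transform
        for w in self.weights:
            z = z * w                  # learned per-mode filtering, stays spectral
        return torch.fft.irfft(z, n=x.size(-1), dim=-1)  # single inverse transform
```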
- Correlating sparse sensing for large-scale traffic speed estimation: A Laplacian-enhanced low-rank tensor kriging approach [76.45949280328838]
We propose a Laplacian-enhanced low-rank tensor (LETC) framework featuring both low-rankness and multi-temporal correlations for large-scale traffic speed kriging.
We then design an efficient solution algorithm via several effective numeric techniques to scale the proposed model up to network-wide kriging (see the sketch after this entry).
arXiv Detail & Related papers (2022-10-21T07:25:57Z)
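The LETC entry above combines two standard ingredients: a low-rank factorization to fill in unobserved locations, and a graph-Laplacian penalty that smooths the spatial factor over the road network. The toy matrix version below illustrates only that pattern, not the paper's tensor kriging algorithm; the rank, sparsity level, and penalty weight are arbitrary:

```python
# Laplacian-regularized low-rank completion on a synthetic sensor matrix.
import torch

def laplacian(adj):                        # combinatorial Laplacian L = D - W
    return torch.diag(adj.sum(1)) - adj

n_sensors, n_steps, rank, lam = 50, 96, 8, 0.1
adj = (torch.rand(n_sensors, n_sensors) < 0.1).float()
adj = ((adj + adj.T) > 0).float()          # symmetric 0/1 road-graph stand-in
L = laplacian(adj)
X = torch.randn(n_sensors, n_steps)        # traffic speeds (synthetic)
mask = (torch.rand_like(X) < 0.3).float()  # only 30% of entries are observed

U = torch.randn(n_sensors, rank, requires_grad=True)
V = torch.randn(n_steps, rank, requires_grad=True)
opt = torch.optim.Adam([U, V], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    R = mask * (X - U @ V.T)               # fit only the observed entries
    loss = (R ** 2).sum() + lam * torch.trace(U.T @ L @ U)  # spatial smoothing
    loss.backward()
    opt.step()
X_hat = (U @ V.T).detach()                 # estimate at unobserved sensors
```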
- TCTN: A 3D-Temporal Convolutional Transformer Network for Spatiotemporal Predictive Learning [1.952097552284465]
We propose an algorithm named 3D-temporal convolutional transformer (TCTN), where a transformer-based encoder with temporal convolutional layers is employed to capture short-term and long-term dependencies.
Our proposed algorithm is easy to implement and trains much faster than RNN-based methods thanks to the parallelism of the Transformer.
arXiv Detail & Related papers (2021-12-02T10:05:01Z)
- TCCT: Tightly-Coupled Convolutional Transformer on Time Series Forecasting [6.393659160890665]
We propose the concept of the tightly-coupled convolutional Transformer (TCCT) and three TCCT architectures.
Our experiments on real-world datasets show that our TCCT architectures can greatly improve the performance of existing state-of-the-art Transformer models.
arXiv Detail & Related papers (2021-08-29T08:49:31Z)
- TransMOT: Spatial-Temporal Graph Transformer for Multiple Object Tracking [74.82415271960315]
We propose a solution named TransMOT to efficiently model the spatial and temporal interactions among objects in a video.
TransMOT is not only more computationally efficient than the traditional Transformer, but it also achieves better tracking accuracy.
The proposed method is evaluated on multiple benchmark datasets including MOT15, MOT16, MOT17, and MOT20.
arXiv Detail & Related papers (2021-04-01T01:49:05Z)
- Spatial-Temporal Transformer Networks for Traffic Flow Forecasting [74.76852538940746]
We propose a novel paradigm of Spatial-Temporal Transformer Networks (STTNs) to improve the accuracy of long-term traffic forecasting.
Specifically, we present a new variant of graph neural networks, named spatial transformer, by dynamically modeling directed spatial dependencies.
The proposed model enables fast and scalable training over long-range spatial-temporal dependencies (see the sketch after this entry).
arXiv Detail & Related papers (2020-01-09T10:21:04Z)
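The "spatial transformer" described in the STTN entry can be read as attention producing a dynamic, directed adjacency matrix over sensors each time it runs. A minimal single-head sketch; the class name and dimensions are assumptions, not the STTN implementation:

```python
# Attention scores reinterpreted as a dynamic directed adjacency over sensors.
import torch
import torch.nn as nn

class SpatialAttentionSketch(nn.Module):  # hypothetical name
    def __init__(self, d_model=64):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x):                  # x: (batch, num_sensors, d_model)
        scores = self.q(x) @ self.k(x).transpose(1, 2) / x.size(-1) ** 0.5
        adj = scores.softmax(dim=-1)       # dynamic directed "adjacency" matrix
        return adj @ self.v(x)             # one step of attention message passing
```

Unlike a fixed road-graph convolution, the adjacency here is recomputed from the current features, which is what lets the model capture directed, time-varying spatial dependencies.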