Long-Range Transformers for Dynamic Spatiotemporal Forecasting
- URL: http://arxiv.org/abs/2109.12218v3
- Date: Sat, 18 Mar 2023 03:03:25 GMT
- Title: Long-Range Transformers for Dynamic Spatiotemporal Forecasting
- Authors: Jake Grigsby, Zhe Wang, Nam Nguyen, Yanjun Qi
- Abstract summary: In contrast to methods based on graph neural networks that explicitly model variable relationships, this work reformulates multivariate forecasting as a single "spatiotemporal sequence".
Long-Range Transformers can then learn interactions between space, time, and value information jointly along this extended sequence.
- Score: 16.37467119526305
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multivariate time series forecasting focuses on predicting future values
based on historical context. State-of-the-art sequence-to-sequence models rely
on neural attention between timesteps, which allows for temporal learning but
fails to consider distinct spatial relationships between variables. In
contrast, methods based on graph neural networks explicitly model variable
relationships. However, these methods often rely on predefined graphs that
cannot change over time and perform separate spatial and temporal updates
without establishing direct connections between each variable at every
timestep. Our work addresses these problems by translating multivariate
forecasting into a "spatiotemporal sequence" formulation where each Transformer
input token represents the value of a single variable at a given time.
Long-Range Transformers can then learn interactions between space, time, and
value information jointly along this extended sequence. Our method, which we
call Spacetimeformer, achieves competitive results on benchmarks from traffic
forecasting to electricity demand and weather prediction while learning
spatiotemporal relationships purely from data.
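Below is a hedged sketch of the spatiotemporal sequence formulation described in the abstract (not the authors' released implementation): each input token embeds a single (time, variable, value) triple, and the flattened sequence is fed to a standard Transformer encoder. All module names, dimensions, and the use of vanilla attention in place of an efficient long-range attention mechanism are illustrative assumptions.
```python
# Minimal sketch (assumed names and hyperparameters): flatten a multivariate
# series of T timesteps and N variables into T*N tokens, each embedding one
# (time, variable, value) triple, then attend over the long sequence.
import torch
import torch.nn as nn


class SpatiotemporalTokenizer(nn.Module):
    def __init__(self, n_vars: int, max_len: int, d_model: int = 64):
        super().__init__()
        self.value_proj = nn.Linear(1, d_model)          # embed the scalar value
        self.var_embed = nn.Embedding(n_vars, d_model)   # which variable
        self.time_embed = nn.Embedding(max_len, d_model) # which timestep

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, N) multivariate series
        b, t, n = x.shape
        values = x.reshape(b, t * n, 1)                  # one token per (timestep, variable)
        time_idx = torch.arange(t, device=x.device).repeat_interleave(n)
        var_idx = torch.arange(n, device=x.device).repeat(t)
        tokens = (self.value_proj(values)
                  + self.time_embed(time_idx)[None]
                  + self.var_embed(var_idx)[None])
        return tokens                                    # (batch, T*N, d_model)


if __name__ == "__main__":
    x = torch.randn(2, 24, 7)                 # 2 series, 24 steps, 7 variables
    tok = SpatiotemporalTokenizer(n_vars=7, max_len=24)
    layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    encoder = nn.TransformerEncoder(layer, num_layers=2)
    out = encoder(tok(x))                     # (2, 24*7, 64)
    print(out.shape)
```
Note that flattening grows the sequence length to T*N, which is why the paper pairs this formulation with long-range (efficient) attention rather than the quadratic attention used in this sketch.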
Related papers
- TimeCNN: Refining Cross-Variable Interaction on Time Point for Time Series Forecasting [44.04862924157323]
Transformer-based models demonstrate significant potential in modeling cross-time and cross-variable interaction.
We propose a TimeCNN model to refine cross-variable interactions to enhance time series forecasting.
Extensive experiments conducted on 12 real-world datasets demonstrate that TimeCNN consistently outperforms state-of-the-art models.
arXiv Detail & Related papers (2024-10-07T09:16:58Z) - Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a generative Transformer for unified time series forecasting.
Timer-XL achieves state-of-the-art performance across challenging forecasting benchmarks through a unified approach.
arXiv Detail & Related papers (2024-10-07T07:27:39Z) - Leveraging 2D Information for Long-term Time Series Forecasting with Vanilla Transformers [55.475142494272724]
Time series prediction is crucial for understanding and forecasting complex dynamics in various domains.
We introduce GridTST, a model that combines the benefits of two approaches using innovative multi-directional attentions.
The model consistently delivers state-of-the-art performance across various real-world datasets.
arXiv Detail & Related papers (2024-05-22T16:41:21Z) - A Variational Autoencoder for Neural Temporal Point Processes with
Dynamic Latent Graphs [45.98786737053273]
We propose a novel variational auto-encoder to capture such a mixture of temporal dynamics.
The model predicts the future event times, by using the learned dependency graph to remove the noncontributing influences of past events.
arXiv Detail & Related papers (2023-12-26T15:11:55Z) - Learning the Evolutionary and Multi-scale Graph Structure for
Multivariate Time Series Forecasting [50.901984244738806]
We show how to model the evolutionary and multi-scale interactions of time series.
In particular, we first provide a hierarchical graph structure cooperated with the dilated convolution to capture the scale-specific correlations.
A unified neural network is provided to integrate the components above to get the final prediction.
arXiv Detail & Related papers (2022-06-28T08:11:12Z) - Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow(MANF)
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z) - TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z) - Pay Attention to Evolution: Time Series Forecasting with Deep
Graph-Evolution Learning [33.79957892029931]
This work presents a novel neural network architecture for time-series forecasting.
We named our method Recurrent Graph Evolution Neural Network (ReGENN)
An extensive set of experiments was conducted comparing ReGENN with dozens of ensemble methods and classical statistical ones.
arXiv Detail & Related papers (2020-08-28T20:10:07Z) - Connecting the Dots: Multivariate Time Series Forecasting with Graph
Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
arXiv Detail & Related papers (2020-05-24T04:02:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences of their use.