SageFormer: Series-Aware Framework for Long-term Multivariate Time Series Forecasting
- URL: http://arxiv.org/abs/2307.01616v2
- Date: Fri, 27 Oct 2023 02:13:47 GMT
- Title: SageFormer: Series-Aware Framework for Long-term Multivariate Time Series Forecasting
- Authors: Zhenwei Zhang, Linghang Meng, Yuantao Gu
- Abstract summary: This paper introduces a novel series-aware framework, explicitly designed to emphasize the significance of inter-series dependencies.
As a Series-aware Graph-enhanced Transformer model, SageFormer proficiently discerns and models the intricate relationships between series using graph structures.
Notably, the series-aware framework seamlessly integrates with existing Transformer-based models, enriching their ability to comprehend inter-series relationships.
- Score: 18.426757319402174
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the burgeoning ecosystem of Internet of Things, multivariate time series
(MTS) data has become ubiquitous, highlighting the fundamental role of time
series forecasting across numerous applications. The crucial challenge of
long-term MTS forecasting requires adept models capable of capturing both
intra- and inter-series dependencies. Recent advancements in deep learning,
notably Transformers, have shown promise. However, many prevailing methods
either marginalize inter-series dependencies or overlook them entirely. To
bridge this gap, this paper introduces a novel series-aware framework,
explicitly designed to emphasize the significance of such dependencies. At the
heart of this framework lies our specific implementation: the SageFormer. As a
Series-aware Graph-enhanced Transformer model, SageFormer proficiently discerns
and models the intricate relationships between series using graph structures.
Beyond capturing diverse temporal patterns, it also curtails redundant
information across series. Notably, the series-aware framework seamlessly
integrates with existing Transformer-based models, enriching their ability to
comprehend inter-series relationships. Extensive experiments on real-world and
synthetic datasets validate the superior performance of SageFormer against
contemporary state-of-the-art approaches.
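To make the framework's division of labor concrete, the sketch below is a minimal, hypothetical PyTorch layer in the spirit of the abstract: temporal self-attention within each series, plus graph message passing across series over a learned adjacency. The module names, shapes, and the dense softmax adjacency are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SeriesAwareLayer(nn.Module):
    """Illustrative layer: intra-series temporal attention followed by
    inter-series message passing over a learned graph (a sketch, not
    the SageFormer reference code)."""

    def __init__(self, n_series: int, d_model: int, n_heads: int = 4):
        super().__init__()
        # Node embeddings from which a dense inter-series graph is inferred.
        self.node_emb = nn.Parameter(torch.randn(n_series, d_model))
        self.temporal = nn.TransformerEncoderLayer(
            d_model, n_heads, batch_first=True)
        self.mix = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_series, seq_len, d_model)
        b, n, t, d = x.shape
        # 1) Intra-series dependencies: temporal attention per series.
        h = self.temporal(x.reshape(b * n, t, d)).reshape(b, n, t, d)
        # 2) Inter-series dependencies: aggregate over the learned graph.
        adj = torch.softmax(self.node_emb @ self.node_emb.T, dim=-1)
        msg = torch.einsum("ij,bjtd->bitd", adj, h)
        return h + self.mix(msg)

x = torch.randn(2, 7, 96, 64)  # batch of 7 series, 96 steps, 64-dim tokens
print(SeriesAwareLayer(n_series=7, d_model=64)(x).shape)
# torch.Size([2, 7, 96, 64])
```

Because the graph step here is a drop-in addition around a standard Transformer encoder layer, the same pattern suggests how a series-aware framework can wrap existing Transformer-based forecasters.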
Related papers
- Towards Long-Context Time Series Foundation Models [17.224575072056627]
Time series foundation models have shown impressive performance on a variety of tasks, across a wide range of domains, even in zero-shot settings.
This study bridges the gap by systematically comparing various context expansion techniques from both language and time series domains.
arXiv Detail & Related papers (2024-09-20T14:19:59Z)
- UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting [98.12558945781693]
We propose a transformer-based model UniTST containing a unified attention mechanism on the flattened patch tokens.
Although our proposed model employs a simple architecture, it offers compelling performance as shown in our experiments on several datasets for time series forecasting.
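As a rough illustration of the flattened-token idea (assumed shapes; not the UniTST code): patches from all series are flattened into one token sequence, so a single attention pass covers intra- and inter-series dependencies at once.

```python
import torch
import torch.nn as nn

B, N, L, P, d = 2, 7, 96, 16, 64        # hypothetical sizes
x = torch.randn(B, N, L)

patches = x.unfold(2, P, P)             # (B, N, L//P, P) non-overlapping
tokens = nn.Linear(P, d)(patches)       # embed each patch
tokens = tokens.reshape(B, N * (L // P), d)  # flatten series x patches

# Every patch token attends to every other, within and across series.
encoder = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
print(encoder(tokens).shape)            # torch.Size([2, 42, 64])
```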
arXiv Detail & Related papers (2024-06-07T14:39:28Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present the Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
- Ti-MAE: Self-Supervised Masked Time Series Autoencoders [16.98069693152999]
We propose a novel framework named Ti-MAE, in which the input time series are assumed to follow an integrated distribution.
Ti-MAE randomly masks out embedded time series data and learns an autoencoder to reconstruct them at the point-level.
Experiments on several public real-world datasets demonstrate that our framework of masked autoencoding could learn strong representations directly from the raw data.
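A minimal sketch of the masking-and-reconstruction recipe described above, with hypothetical sizes and a deliberately simplified encoder (the paper's masking ratio and architecture differ):

```python
import torch
import torch.nn as nn

B, L, d = 8, 96, 64                      # hypothetical sizes
series = torch.randn(B, L, 1)

embed = nn.Linear(1, d)
encoder = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
decode = nn.Linear(d, 1)

tokens = embed(series)                   # embed the raw points
mask = torch.rand(B, L) < 0.75           # randomly mask most positions
tokens = tokens.masked_fill(mask.unsqueeze(-1), 0.0)

recon = decode(encoder(tokens))          # reconstruct at the point level
loss = (recon - series)[mask].pow(2).mean()  # score masked points only
loss.backward()
```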
arXiv Detail & Related papers (2023-01-21T03:20:23Z)
- Expressing Multivariate Time Series as Graphs with Time Series Attention Transformer [14.172091921813065]
We propose the Time Series Attention Transformer (TSAT) for multivariate time series representation learning.
Using TSAT, we represent both temporal information and inter-dependencies of time series in terms of edge-enhanced dynamic graphs.
We show that TSAT clearly outperforms six state-of-the-art baseline methods in various forecasting horizons.
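As a loose illustration of edge-enhanced attention over series (TSAT's actual edges come from a more elaborate dynamic-graph construction), a pairwise dependency matrix can bias attention between series:

```python
import torch
import torch.nn as nn

B, N, L, d, heads = 2, 7, 96, 64, 4      # hypothetical sizes
x = torch.randn(B, N, L)

# Edge features: pairwise Pearson correlations between series.
xc = x - x.mean(-1, keepdim=True)
std = x.std(-1, keepdim=True)
corr = (xc @ xc.transpose(1, 2)) / ((L - 1) * std @ std.transpose(1, 2) + 1e-8)

tokens = nn.Linear(L, d)(x)              # one token per series
attn = nn.MultiheadAttention(d, heads, batch_first=True)
# Inject the graph as an additive attention bias, one copy per head.
bias = corr.repeat_interleave(heads, dim=0)   # (B*heads, N, N)
out, _ = attn(tokens, tokens, tokens, attn_mask=bias)
print(out.shape)                         # torch.Size([2, 7, 64])
```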
arXiv Detail & Related papers (2022-08-19T12:25:56Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model called the Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
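A small sketch of the uni-directional graph learning idea (simplified: the paper's module also applies linear projections and top-k sparsification): an antisymmetric score matrix guarantees each learned relation points one way only.

```python
import torch
import torch.nn as nn

class GraphLearner(nn.Module):
    """Sketch of uni-directional graph learning in the spirit of the
    paper's graph learning module (projections and top-k omitted)."""

    def __init__(self, n_nodes: int, dim: int, alpha: float = 3.0):
        super().__init__()
        self.e1 = nn.Parameter(torch.randn(n_nodes, dim))
        self.e2 = nn.Parameter(torch.randn(n_nodes, dim))
        self.alpha = alpha

    def forward(self) -> torch.Tensor:
        m1 = torch.tanh(self.alpha * self.e1)
        m2 = torch.tanh(self.alpha * self.e2)
        # Antisymmetric scores: a positive edge one way is negative the
        # other way, so ReLU keeps at most one direction per pair.
        a = m1 @ m2.T - m2 @ m1.T
        return torch.relu(torch.tanh(self.alpha * a))

adj = GraphLearner(n_nodes=10, dim=16)()
assert torch.all(adj * adj.T == 0)  # no bidirectional edges survive
```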
arXiv Detail & Related papers (2020-05-24T04:02:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.