Only the Curve Shape Matters: Training Foundation Models for Zero-Shot Multivariate Time Series Forecasting through Next Curve Shape Prediction
- URL: http://arxiv.org/abs/2402.07570v2
- Date: Mon, 19 Feb 2024 03:21:01 GMT
- Title: Only the Curve Shape Matters: Training Foundation Models for Zero-Shot Multivariate Time Series Forecasting through Next Curve Shape Prediction
- Authors: Cheng Feng, Long Huang, Denis Krompass
- Abstract summary: We present General Time Transformer (GTT), an encoder-only style foundation model for zero-shot multivariate time series forecasting.
GTT is pretrained on a large dataset of 200M high-quality time series samples spanning diverse domains.
- Score: 6.166295570030646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present General Time Transformer (GTT), an encoder-only style foundation
model for zero-shot multivariate time series forecasting. GTT is pretrained on
a large dataset of 200M high-quality time series samples spanning diverse
domains. In our proposed framework, the task of multivariate time series
forecasting is formulated as a channel-wise next curve shape prediction
problem, where each time series sample is represented as a sequence of
non-overlapping curve shapes with a unified numerical magnitude. GTT is trained
to predict the next curve shape based on a window of past curve shapes in a
channel-wise manner. Experimental results demonstrate that GTT exhibits
superior zero-shot multivariate forecasting capabilities on unseen time series
datasets, even surpassing state-of-the-art supervised baselines. Additionally,
we investigate the impact of varying GTT model parameters and training dataset
scales, observing that the scaling law also holds in the context of zero-shot
multivariate time series forecasting.
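A minimal sketch of the channel-wise curve shape framing described in the abstract, assuming non-overlapping patches and normalization by the context window's mean and standard deviation (one plausible reading of "unified numerical magnitude"; the paper's exact normalization and the transformer itself are not reproduced here):

```python
import numpy as np

def make_curve_shape_samples(series, patch_len=64, n_context=8):
    """Frame a multivariate series (T, C) as channel-wise
    'next curve shape' prediction pairs, per the GTT abstract.

    Each channel is cut into non-overlapping patches ("curve
    shapes"); a window of past patches is the input, the next
    patch the target. Normalizing by the context window's
    statistics is an assumption made for this sketch.
    """
    T, C = series.shape
    n_patches = T // patch_len
    inputs, targets = [], []
    for c in range(C):  # channel-wise: each variate becomes its own sample
        patches = series[: n_patches * patch_len, c].reshape(n_patches, patch_len)
        for i in range(n_patches - n_context):
            ctx = patches[i : i + n_context]          # (n_context, patch_len)
            nxt = patches[i + n_context]              # (patch_len,)
            mu, sigma = ctx.mean(), ctx.std() + 1e-8  # unify numerical magnitude
            inputs.append((ctx - mu) / sigma)
            targets.append((nxt - mu) / sigma)
    return np.stack(inputs), np.stack(targets)

# Example: a 3-channel random-walk series of length 1024
x = np.random.randn(1024, 3).cumsum(axis=0)
X, y = make_curve_shape_samples(x)
print(X.shape, y.shape)  # (24, 8, 64), (24, 64)
```

Treating each channel as an independent sample is what lets a single pretrained model handle unseen datasets with arbitrary numbers of variates.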
Related papers
- Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a generative Transformer for unified time series forecasting.
Timer-XL achieves state-of-the-art performance across challenging forecasting benchmarks through a unified approach.
arXiv Detail & Related papers (2024-10-07T07:27:39Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present Moirai, a Masked Encoder-based Universal Time Series Forecasting Transformer.
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
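Moirai's expanded name identifies it as a masked encoder. Below is a hedged sketch of the generic masked-patch framing that name suggests: horizon patches are replaced by a mask token and kept as reconstruction targets. The learned mask embedding and probabilistic output heads of the real model are beyond this toy.

```python
import numpy as np

MASK = 0.0  # placeholder standing in for a learned [mask] embedding

def masked_forecast_batch(series, patch_len=32, horizon_patches=2):
    """Hedged sketch of a masked-encoder forecasting framing:
    cut a 1-D series into patches, mask the horizon patches in
    the encoder input, and keep the originals as targets. This
    is a generic illustration, not Moirai's actual scheme.
    """
    n = len(series) // patch_len
    patches = series[: n * patch_len].reshape(n, patch_len)
    inputs = patches.copy()
    inputs[-horizon_patches:] = MASK        # encoder sees a masked future
    targets = patches[-horizon_patches:]    # supervision = the true future
    return inputs, targets

x = np.sin(np.arange(256) / 10.0)
inp, tgt = masked_forecast_batch(x)
print(inp.shape, tgt.shape)  # (8, 32), (2, 32)
```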
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSMs).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
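One way to read "a unified generative task" is that a single pretrained autoregressive forecaster can serve other tasks purely by generation. The hedged sketch below casts anomaly detection that way; `generate` is a hypothetical stand-in interface, not Timer's actual API.

```python
import numpy as np

def anomaly_scores_from_generator(history, observed, generate):
    """Hedged sketch of anomaly detection as generation: generate
    the expected continuation of `history` and score observed
    points by their deviation from it. `generate` is any
    autoregressive forecaster with this (assumed) interface.
    """
    predicted = generate(history, steps=len(observed))
    return np.abs(observed - predicted)  # large residual => likely anomaly

# Toy usage with a naive "persistence" generator standing in for a model
naive = lambda h, steps: np.full(steps, h[-1])
scores = anomaly_scores_from_generator(np.sin(np.arange(100) / 5.0),
                                       np.array([0.2, 5.0, 0.3]), naive)
print(scores)  # the middle point scores far higher than its neighbors
```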
- TPRNN: A Top-Down Pyramidal Recurrent Neural Network for Time Series Forecasting [7.08506873242564]
Time series have multi-scale characteristics, i.e., different temporal patterns at different scales.
We propose TPRNN, a Top-down Pyramidal Recurrent Neural Network for time series forecasting.
TPRNN achieves state-of-the-art performance, with an average improvement of 8.13% in MSE over the best baseline.
arXiv Detail & Related papers (2023-12-11T12:21:45Z)
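To make the multi-scale idea concrete, here is a hedged sketch that builds a pyramid of average-pooled copies of a series, so each level exposes coarser temporal patterns. How TPRNN's RNNs exchange information top-down across such levels is the paper's contribution and is not shown.

```python
import numpy as np

def build_pyramid(series, n_scales=3, pool=2):
    """Hedged sketch of multi-scale pyramid construction: each
    level is an average-pooled version of the one below it, so
    higher levels capture patterns at coarser time scales.
    """
    levels = [series]
    for _ in range(n_scales - 1):
        s = levels[-1]
        n = len(s) // pool * pool             # trim to a multiple of pool
        levels.append(s[:n].reshape(-1, pool).mean(axis=1))
    return levels  # levels[0] = finest scale, levels[-1] = coarsest

for lvl in build_pyramid(np.arange(16.0)):
    print(len(lvl), lvl[:4])
```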
- Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting [54.04430089029033]
We present Lag-Llama, a general-purpose foundation model for time series forecasting based on a decoder-only transformer architecture.
Lag-Llama is pretrained on a large corpus of diverse time series data from several domains, and demonstrates strong zero-shot generalization capabilities.
When fine-tuned on relatively small fractions of such previously unseen datasets, Lag-Llama achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-10-12T12:29:32Z)
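Consistent with its name, Lag-Llama builds its inputs from lagged values of the series. A hedged sketch of lag-feature construction follows; the paper's exact lag set and any additional covariates are not reproduced, so the lags below are illustrative only.

```python
import numpy as np

def lag_features(series, lags=(1, 7, 14, 28)):
    """Hedged sketch of lag-based tokenization: each time step is
    represented by its value plus a vector of lagged values (e.g.
    daily/weekly lags), which a decoder-only transformer can then
    model autoregressively. The lag set here is an assumption.
    """
    max_lag = max(lags)
    t = np.arange(max_lag, len(series))
    cols = [series[t]] + [series[t - l] for l in lags]
    return np.stack(cols, axis=1)  # (T - max_lag, 1 + len(lags))

x = np.random.randn(100).cumsum()
print(lag_features(x).shape)  # (72, 5)
```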
- Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show our pre-trained method is a strong zero-shot baseline and benefits from further scaling in both model and dataset size.
Accompanying these datasets and results is a suite of comprehensive benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z)
- TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
The estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z)
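The copula idea behind TACTiS is that a joint distribution can be split into marginals plus a dependence structure. A minimal, purely empirical sketch of that decomposition follows; TACTiS instead *learns* both pieces with an attention-based decoder, and nothing here is the authors' architecture.

```python
import numpy as np

def empirical_copula_sample(data, n_samples=5):
    """Hedged sketch of the copula decomposition: map each variable
    to uniform ranks (probability integral transform), resample
    joint rank rows to preserve the dependence structure, then
    invert each marginal through its empirical quantile function.
    """
    n, d = data.shape
    ranks = data.argsort(axis=0).argsort(axis=0) / (n - 1)  # uniforms
    idx = np.random.randint(n, size=n_samples)
    u = ranks[idx]                                # sampled dependence
    return np.stack([np.quantile(data[:, j], u[:, j]) for j in range(d)],
                    axis=1)

# Toy correlated data: 200 draws of a 3-dimensional linear mix
data = np.random.randn(200, 3) @ np.array([[1, .5, 0], [0, 1, .5], [0, 0, 1.]])
print(empirical_copula_sample(data))
```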
- On projection methods for functional time series forecasting [0.0]
Two nonparametric methods are presented for forecasting functional time series (FTS).
We address both one-step-ahead forecasting and dynamic updating.
The methods are applied to simulated data, daily electricity demand, and NOx emissions.
arXiv Detail & Related papers (2021-05-10T14:24:38Z)
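As background for the projection idea, here is a hedged sketch of a generic projection-based FTS forecaster: project observed curves onto a principal component basis, forecast each score series (naive persistence stands in for a real predictor), and reconstruct the next curve. The paper's two nonparametric methods and its dynamic updating scheme are more refined than this.

```python
import numpy as np

def fts_projection_forecast(curves, n_components=3):
    """Hedged sketch of projection-based FTS forecasting: curves
    (days, grid) are projected onto principal components, the
    scores are forecast (persistence here, as an assumption),
    and the next curve is rebuilt from the basis.
    """
    mean = curves.mean(axis=0)
    centered = curves - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]            # (k, grid)
    scores = centered @ basis.T          # (days, k)
    next_scores = scores[-1]             # persistence forecast per score
    return mean + next_scores @ basis    # forecast curve on the grid

# Toy: 60 daily curves on a 24-point grid (e.g., hourly electricity demand)
grid = np.linspace(0, 2 * np.pi, 24)
curves = np.sin(grid) + 0.1 * np.random.randn(60, 24)
print(fts_projection_forecast(curves).shape)  # (24,)
```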
- Global Models for Time Series Forecasting: A Simulation Study [2.580765958706854]
We simulate time series from simple data generating processes (DGPs), such as Auto-Regressive (AR) and Seasonal AR, to complex DGPs, such as the Chaotic Logistic Map, Self-Exciting Threshold Auto-Regressive, and Mackey-Glass equations.
The lengths and the number of series in the dataset are varied in different scenarios.
We perform experiments on these datasets using global forecasting models including Recurrent Neural Networks (RNNs), Feed-Forward Neural Networks, Pooled Regression (PR) models, and Light Gradient Boosting Models (LGBM).
arXiv Detail & Related papers (2020-12-23T04:45:52Z)
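To make "simple to complex DGPs" concrete, here is a small sketch of two of the generators named above: an AR(1) process and the chaotic logistic map. The study's other generators (Seasonal AR, SETAR, Mackey-Glass) are omitted, and these parameter values are illustrative.

```python
import numpy as np

def simulate_ar1(n, phi=0.8, sigma=1.0, rng=None):
    """Simple AR(1) DGP: x_t = phi * x_{t-1} + eps_t."""
    rng = rng or np.random.default_rng(0)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x

def simulate_logistic_map(n, r=3.9, x0=0.5):
    """Chaotic logistic map DGP: x_{t+1} = r * x_t * (1 - x_t)."""
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = r * x[t - 1] * (1 - x[t - 1])
    return x

# A toy dataset mixing one simple and one complex DGP; the study
# varies series length and series count across such scenarios.
dataset = [simulate_ar1(200), simulate_logistic_map(200)]
print([len(s) for s in dataset])
```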
- Series Saliency: Temporal Interpretation for Multivariate Time Series Forecasting [30.054015098590874]
We present the Series Saliency framework for temporal interpretation of time series forecasting.
By extracting "series images" from sliding windows of the time series, we apply saliency map segmentation.
Our framework generates temporal interpretations for the forecasting task while producing accurate forecasts.
arXiv Detail & Related papers (2020-12-16T23:48:00Z)
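A hedged sketch of the "series image" construction mentioned above, assuming a (T, C) multivariate series cut into sliding windows so each window becomes a (C, window) image. The saliency-map segmentation applied on top of these images is the paper's contribution and is not reproduced.

```python
import numpy as np

def series_images(series, window=24, stride=1):
    """Hedged sketch of 'series images': each sliding window of a
    multivariate series (T, C) is transposed into a (C, window)
    image, a form on which image-style saliency methods can
    highlight influential time/feature regions.
    """
    T, C = series.shape
    starts = range(0, T - window + 1, stride)
    return np.stack([series[s : s + window].T for s in starts])  # (N, C, window)

imgs = series_images(np.random.randn(100, 4))
print(imgs.shape)  # (77, 4, 24)
```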