Improving the Accuracy of Global Forecasting Models using Time Series Data Augmentation
- URL: http://arxiv.org/abs/2008.02663v1
- Date: Thu, 6 Aug 2020 13:52:20 GMT
- Title: Improving the Accuracy of Global Forecasting Models using Time Series Data Augmentation
- Authors: Kasun Bandara, Hansika Hewamalage, Yuan-Hao Liu, Yanfei Kang, Christoph Bergmeir
- Abstract summary: Forecasting models that are trained across sets of many time series, known as Global Forecasting Models (GFM), have shown promising results in forecasting competitions and real-world applications.
We propose a novel data-augmentation-based forecasting framework capable of improving the baseline accuracy of GFMs in less data-abundant settings.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Forecasting models that are trained across sets of many time series, known as
Global Forecasting Models (GFM), have recently shown promising results in
forecasting competitions and real-world applications, outperforming many
state-of-the-art univariate forecasting techniques. In most cases, GFMs are
implemented using deep neural networks, and in particular Recurrent Neural
Networks (RNN), which require a sufficient amount of time series to estimate
their numerous model parameters. However, many time series databases have only
a limited number of time series. In this study, we propose a novel
data-augmentation-based forecasting framework capable of improving the
baseline accuracy of GFMs in less data-abundant settings. We use
three time series augmentation techniques: GRATIS, moving block bootstrap
(MBB), and dynamic time warping barycentric averaging (DBA) to synthetically
generate a collection of time series. The knowledge acquired from these
augmented time series is then transferred to the original dataset using two
different approaches: the pooled approach and the transfer learning approach.
When building GFMs, in the pooled approach, we train a model on the augmented
time series alongside the original time series dataset, whereas in the transfer
learning approach, we adapt a pre-trained model to the new dataset. In our
evaluation on competition and real-world time series datasets, our proposed
variants can significantly improve the baseline accuracy of GFMs and
outperform state-of-the-art univariate forecasting methods.
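To make the augmentation step above concrete, here is a minimal sketch of the moving block bootstrap (MBB) together with the pooled approach, assuming a toy dataset. It is a simplification rather than the authors' implementation: the paper bootstraps the remainder series of an STL decomposition and then restores trend and seasonality, while this sketch resamples the observed values directly; the block size and data below are illustrative.

```python
import numpy as np

def moving_block_bootstrap(series, block_size, rng):
    """Build one synthetic series by concatenating randomly chosen,
    overlapping blocks of the original (simplified MBB: the paper
    bootstraps STL remainders, not the raw values)."""
    n = len(series)
    n_blocks = int(np.ceil(n / block_size))
    starts = rng.integers(0, n - block_size + 1, size=n_blocks)
    return np.concatenate([series[s:s + block_size] for s in starts])[:n]

# Pooled approach (sketch): a single global model is trained on the
# original series together with their augmented copies.
rng = np.random.default_rng(0)
original = [np.sin(np.linspace(0, 8 * np.pi, 120)) + rng.normal(0, 0.1, 120)
            for _ in range(5)]                                # toy dataset
augmented = [moving_block_bootstrap(s, block_size=12, rng=rng)
             for s in original]
pooled_training_set = original + augmented                    # fed to the GFM
```

Under the transfer-learning variant described in the abstract, one would instead pre-train the GFM on the augmented series alone and then fine-tune it on the original dataset.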
Related papers
- Chronos: Learning the Language of Time Series [79.38691251254173]
Chronos is a framework for pretrained probabilistic time series models.
We show that Chronos models can leverage time series data from diverse domains to improve zero-shot accuracy on unseen forecasting tasks.
arXiv Detail & Related papers (2024-03-12T16:53:54Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present a Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting [54.04430089029033]
We present Lag-Llama, a general-purpose foundation model for time series forecasting based on a decoder-only transformer architecture.
Lag-Llama is pretrained on a large corpus of diverse time series data from several domains, and demonstrates strong zero-shot generalization capabilities.
When fine-tuned on relatively small fractions of such previously unseen datasets, Lag-Llama achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-10-12T12:29:32Z)
- Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show that the pre-trained method is a strong zero-shot baseline and benefits from further scaling in both model and dataset size.
Accompanying these datasets and results is a suite of comprehensive benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z)
- Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- FrAug: Frequency Domain Augmentation for Time Series Forecasting [6.508992154478217]
Data augmentation (DA) has become a de facto solution to expand training data size for deep learning.
This paper proposes simple yet effective frequency domain augmentation techniques that ensure the semantic consistency of augmented data-label pairs in forecasting.
Our results show that FrAug can boost the forecasting accuracy of TSF models in most cases; a frequency-masking sketch appears after this list.
arXiv Detail & Related papers (2023-02-18T11:25:42Z)
- Pre-training Enhanced Spatial-temporal Graph Neural Network for Multivariate Time Series Forecasting [13.441945545904504]
We propose a novel framework in which STGNN is Enhanced by a scalable time series Pre-training model (STEP).
Specifically, we design a pre-training model to efficiently learn temporal patterns from very long-term history time series.
Our framework is capable of significantly enhancing downstream STGNNs, and our pre-training model aptly captures temporal patterns.
arXiv Detail & Related papers (2022-06-18T04:24:36Z)
- Ensembles of Localised Models for Time Series Forecasting [7.199741890914579]
We study how ensembling techniques can be used with generic GFMs and univariate models to localise otherwise fully global models.
Our work systematises and compares relevant current approaches, namely clustering series and training separate submodels per cluster.
We propose a new methodology of clustered ensembles where we train multiple GFMs on different clusters of series; a per-cluster training sketch appears after this list.
arXiv Detail & Related papers (2020-12-30T06:33:51Z)
- Global Models for Time Series Forecasting: A Simulation Study [2.580765958706854]
We simulate time series from simple data generating processes (DGP), such as Auto Regressive (AR) and Seasonal AR, to complex DGPs, such as Chaotic Logistic Map, Self-Exciting Threshold Auto-Regressive, and Mackey-Glass equations.
The lengths and the number of series in the dataset are varied in different scenarios.
We perform experiments on these datasets using global forecasting models including Recurrent Neural Networks (RNN), Feed-Forward Neural Networks, Pooled Regression (PR) models, and Light Gradient Boosting Models (LGBM); a minimal AR-process simulator is sketched after this list.
arXiv Detail & Related papers (2020-12-23T04:45:52Z)
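For the FrAug entry above, a rough sketch of frequency masking under stated assumptions: following the paper's description, the lookback window and forecast horizon are transformed jointly so the augmented input-label pair stays consistent, but the function name, default mask rate, and handling of the DC component are illustrative choices, not the reference implementation.

```python
import numpy as np

def freq_mask(lookback, horizon, mask_rate=0.1, rng=None):
    """Zero out a random subset of frequency components of the
    concatenated (lookback, horizon) window and invert the FFT,
    yielding a consistent augmented input-label pair."""
    rng = rng or np.random.default_rng()
    window = np.concatenate([lookback, horizon])
    spec = np.fft.rfft(window)
    mask = rng.random(spec.shape) < mask_rate
    mask[0] = False                      # keep the mean level (DC component)
    spec[mask] = 0.0
    aug = np.fft.irfft(spec, n=len(window))
    return aug[:len(lookback)], aug[len(lookback):]
```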
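For the clustered-ensembles entry, a minimal sketch of fitting one global model per cluster of series. The summary features (mean and standard deviation), the k-means partitioning, and the linear pooled-regression stand-in for the GFM are all illustrative assumptions; the paper ensembles several such partitions and uses stronger base learners such as RNNs.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def embed(series, lags=4):
    """Lag embedding: each row pairs a window of `lags` past values
    with the value that follows it."""
    X = np.stack([series[i:i + lags] for i in range(len(series) - lags)])
    return X, series[lags:]

def fit_clustered_ensemble(dataset, n_clusters=2, lags=4, seed=0):
    # Cluster series on simple summary features (illustrative choice).
    feats = np.array([[s.mean(), s.std()] for s in dataset])
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(feats)
    # One global model per cluster, trained across all of its series.
    models = {}
    for c in range(n_clusters):
        Xs, ys = zip(*(embed(s, lags) for s, l in zip(dataset, labels) if l == c))
        models[c] = LinearRegression().fit(np.vstack(Xs), np.concatenate(ys))
    return labels, models

rng = np.random.default_rng(1)
data = [rng.normal(loc, 1.0, size=60) for loc in (0, 0, 5, 5)]  # toy series
labels, models = fit_clustered_ensemble(data)
```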
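Finally, for the simulation-study entry, a minimal generator for the simplest DGP it lists, an AR(p) process; the coefficients, burn-in length, and noise scale are illustrative, and the more complex DGPs (Seasonal AR, SETAR, chaotic logistic map, Mackey-Glass) would need their own simulators.

```python
import numpy as np

def simulate_ar(phi, n, sigma=1.0, burn_in=100, seed=0):
    """Simulate y_t = phi_1*y_{t-1} + ... + phi_p*y_{t-p} + eps_t,
    discarding a burn-in so the sample starts near stationarity."""
    rng = np.random.default_rng(seed)
    p = len(phi)
    y = np.zeros(n + burn_in)
    for t in range(p, n + burn_in):
        y[t] = np.dot(phi, y[t - p:t][::-1]) + rng.normal(0.0, sigma)
    return y[burn_in:]

# A stationary AR(2): y_t = 0.6*y_{t-1} - 0.2*y_{t-2} + eps_t
series = simulate_ar([0.6, -0.2], n=200)
```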