Two-Stage Framework for Seasonal Time Series Forecasting
- URL: http://arxiv.org/abs/2103.02144v1
- Date: Wed, 3 Mar 2021 02:53:39 GMT
- Title: Two-Stage Framework for Seasonal Time Series Forecasting
- Authors: Qingyang Xu, Qingsong Wen, Liang Sun
- Abstract summary: Seasonal time series forecasting remains a challenging problem due to the long-term dependencies introduced by seasonality.
We propose a two-stage framework to forecast univariate seasonal time series.
We show that incorporating the intermediate results generated in the first stage into existing forecast models can effectively enhance their prediction performance.
- Score: 9.359683664929957
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Seasonal time series forecasting remains a challenging problem due to the
long-term dependencies introduced by seasonality. In this paper, we propose a two-stage
framework to forecast univariate seasonal time series. The first stage
explicitly learns the long-range time series structure in a time window beyond
the forecast horizon. By incorporating the learned long-range structure, the
second stage can enhance the prediction accuracy in the forecast horizon. In
both stages, we integrate the auto-regressive model with neural networks to
capture both linear and non-linear characteristics in time series. Our
framework achieves state-of-the-art performance on M4 Competition Hourly
datasets. In particular, we show that incorporating the intermediate results
generated in the first stage into existing forecast models can effectively
enhance their prediction performance.
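To make the idea concrete, here is a minimal numpy sketch of the two-stage data flow. It is not the authors' implementation: a plain least-squares AR model stands in for the AR-plus-neural-network hybrid used in each stage, and the season length, horizon, and lag orders are illustrative assumptions.

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares AR(p): y_t ~ c + a_1*y_{t-1} + ... + a_p*y_{t-p}."""
    n = len(y)
    X = np.column_stack([np.ones(n - p)] +
                        [y[p - k : n - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def ar_insample(y, coef):
    """In-sample fitted values for y[p:]."""
    p, n = len(coef) - 1, len(y)
    X = np.column_stack([np.ones(n - p)] +
                        [y[p - k : n - k] for k in range(1, p + 1)])
    return X @ coef

def ar_forecast(y, coef, steps):
    """Recursive multi-step forecast."""
    p, hist = len(coef) - 1, list(y)
    for _ in range(steps):
        lags = hist[-1 : -p - 1 : -1]            # [y_{t-1}, ..., y_{t-p}]
        hist.append(coef[0] + float(np.dot(coef[1:], lags)))
    return np.array(hist[-steps:])

# Toy hourly-style series: daily seasonality (period 24) plus noise.
SEASON, H = 24, 12                               # assumed, not from the paper
rng = np.random.default_rng(0)
t = np.arange(24 * 40, dtype=float)
y = 10 + 3 * np.sin(2 * np.pi * t / SEASON) + rng.normal(0, 0.3, t.size)

# Stage 1: a long-lag model learns structure over a window *beyond* H.
coef1 = fit_ar(y, p=2 * SEASON)
long_range = ar_forecast(y, coef1, steps=2 * H)  # extends past the horizon

# Stage 2: incorporate the stage-1 output by modeling its residuals with
# a short-lag AR, then refine the first H steps of the forecast.
resid = y[2 * SEASON:] - ar_insample(y, coef1)
coef2 = fit_ar(resid, p=4)
refined = long_range[:H] + ar_forecast(resid, coef2, steps=H)
print(np.round(refined, 2))
```

The essential point is the interface between the stages: stage one produces structure over a window longer than the horizon, and stage two conditions on that output to sharpen the final H-step forecast.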
Related papers
- Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a generative Transformer for unified time series forecasting.
Timer-XL achieves state-of-the-art performance across challenging forecasting benchmarks through a unified approach.
arXiv Detail & Related papers (2024-10-07T07:27:39Z)
- Test Time Learning for Time Series Forecasting [1.4605709124065924]
Test-Time Training (TTT) modules consistently outperform state-of-the-art models, including the Mamba-based TimeMachine.
Our results show significant improvements in Mean Squared Error (MSE) and Mean Absolute Error (MAE).
This work sets a new benchmark for time-series forecasting and lays the groundwork for future research in scalable, high-performance forecasting models.
arXiv Detail & Related papers (2024-09-21T04:40:08Z)
- Generative Pretrained Hierarchical Transformer for Time Series Forecasting [3.739587363053192]
We propose a novel generative pretrained hierarchical transformer architecture for forecasting, named GPHT.
We conduct extensive experiments on eight datasets with mainstream self-supervised pretraining models and supervised models.
The results demonstrate that GPHT surpasses the baseline models across various fine-tuning and zero/few-shot learning settings in the traditional long-term forecasting task.
arXiv Detail & Related papers (2024-02-26T11:54:54Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present a Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSMs).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show it is a strong zero-shot baseline and benefits from further scaling, both in model and dataset size.
Accompanying these datasets and results is a suite of comprehensive benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z)
- Split Time Series into Patches: Rethinking Long-term Series Forecasting with Dateformer [17.454822366228335]
Time is one of the most significant characteristics of time series, yet it has received insufficient attention.
We propose Dateformer, which turns attention to modeling time itself rather than following the usual practice of modeling sequences of values.
Dateformer yields state-of-the-art accuracy with a remarkable 40% relative improvement, and broadens the maximum credible forecasting range to a half-yearly level.
arXiv Detail & Related papers (2022-07-12T08:58:44Z)
- Cluster-and-Conquer: A Framework For Time-Series Forecasting [94.63501563413725]
We propose a three-stage framework for forecasting high-dimensional time-series data.
Our framework is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
When instantiated with simple linear autoregressive models, the framework achieves state-of-the-art results on several benchmark datasets.
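The summary leaves the three stages abstract; below is a hedged sketch of one way a cluster-then-forecast pipeline with simple linear AR models can look. The clustering rule, per-cluster pooling, and AR order are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def fit_ar(series_list, p=3):
    """Pooled least-squares AR(p) over one or more series."""
    X, targets = [], []
    for s in series_list:
        n = len(s)
        X.append(np.column_stack([np.ones(n - p)] +
                                 [s[p - k : n - k] for k in range(1, p + 1)]))
        targets.append(s[p:])
    coef, *_ = np.linalg.lstsq(np.vstack(X), np.concatenate(targets),
                               rcond=None)
    return coef

rng = np.random.default_rng(1)
# Six toy series from two latent groups (opposite drifts) of a larger panel.
series = [np.cumsum(rng.normal(m, 1.0, 200)) for m in (0.5,) * 3 + (-0.5,) * 3]

# Stage 1 (cluster): naive two-way split on the mean first difference.
feats = np.array([np.diff(s).mean() for s in series])
labels = (feats > np.median(feats)).astype(int)

# Stage 2 (forecast): pool each cluster's members and fit one AR model.
models = {c: fit_ar([s for s, l in zip(series, labels) if l == c])
          for c in (0, 1)}

# Stage 3 (predict): one-step forecast per series from its cluster's model.
p = 3
preds = [models[l][0] + models[l][1:] @ s[-1 : -p - 1 : -1]
         for s, l in zip(series, labels)]
print(np.round(preds, 2))
```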
arXiv Detail & Related papers (2021-10-26T20:41:19Z)
- Feature-weighted Stacking for Nonseasonal Time Series Forecasts: A Case Study of the COVID-19 Epidemic Curves [0.0]
We investigate ensembling techniques for forecasting and examine their potential for use with nonseasonal time series.
We propose late data fusion, with a stacked ensemble of two forecasting models and two meta-features whose predictive power is established during a preliminary forecasting stage.
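As a generic sketch of late data fusion via stacking: base models produce predictions over a preliminary holdout, and a meta-learner is then fit on those predictions plus meta-features. The two base forecasters and two meta-features below are placeholders, not the ones selected in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(300, dtype=float)
y = 0.05 * t + rng.normal(0, 1.0, t.size)        # toy nonseasonal series
hold = slice(200, 260)                           # preliminary forecasting stage

def drift(h):                                    # base model 1: drift
    return h[-1] + (h[-1] - h[0]) / (len(h) - 1)

def moving_avg(h, w=10):                         # base model 2: moving average
    return h[-w:].mean()

def features(h):                                 # base forecasts + 2 meta-features
    return [drift(h), moving_avg(h),
            h[-20:].std(),                       # recent volatility (assumed)
            np.polyfit(np.arange(20.0), h[-20:], 1)[0]]  # recent slope (assumed)

# Collect one-step-ahead feature rows and targets over the holdout.
rows = [features(y[:i]) for i in range(hold.start, hold.stop)]
targets = y[hold]
X = np.column_stack([np.ones(len(rows)), np.array(rows)])

# Meta-learner: least squares over stacked base forecasts + meta-features.
w, *_ = np.linalg.lstsq(X, targets, rcond=None)

# Fused forecast for the next unseen point.
x = np.array([1.0] + features(y[: hold.stop]))
print(float(x @ w))
```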
arXiv Detail & Related papers (2021-08-19T14:44:46Z)
- Learning Interpretable Deep State Space Model for Probabilistic Time Series Forecasting [98.57851612518758]
Probabilistic time series forecasting involves estimating the distribution of future values based on past history.
We propose a deep state space model for probabilistic time series forecasting in which the non-linear emission model and transition model are parameterized by neural networks.
We show in experiments that our model produces accurate and sharp probabilistic forecasts.
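Structurally, such a model pairs a neural transition function with a neural emission function. A minimal generative sketch with tiny untrained networks follows; the dimensions, noise scales, and one-hidden-layer architecture are illustrative assumptions, and inference/training are omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(3)
dx, dy = 4, 1                       # latent and observation dims (assumed)

def mlp(w1, w2, v):
    """One-hidden-layer network with tanh activation."""
    return w2 @ np.tanh(w1 @ v)

# Untrained parameters for the transition f and the emission g.
f1, f2 = rng.normal(0, 0.5, (8, dx)), rng.normal(0, 0.5, (dx, 8))
g1, g2 = rng.normal(0, 0.5, (8, dx)), rng.normal(0, 0.5, (dy, 8))

# Generative pass: x_t = f(x_{t-1}) + process noise, y_t = g(x_t) + obs noise.
x, ys = rng.normal(0, 1, dx), []
for _ in range(50):
    x = mlp(f1, f2, x) + rng.normal(0, 0.1, dx)
    ys.append(mlp(g1, g2, x) + rng.normal(0, 0.1, dy))
print(np.array(ys).ravel()[:5])     # first few sampled observations
```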
arXiv Detail & Related papers (2021-01-31T06:49:33Z)
- Prediction of hierarchical time series using structured regularization and its application to artificial neural networks [4.696083734269231]
We discuss the prediction of hierarchical time series, where each upper-level time series is calculated by summing appropriate lower-level time series.
Forecasts for such hierarchical time series should be coherent, meaning that the forecast for an upper-level time series equals the sum of forecasts for corresponding lower-level time series.
arXiv Detail & Related papers (2020-07-30T00:30:32Z)
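To see the coherence property concretely: with a summing matrix that maps bottom-level series to every level of the hierarchy, bottom-up reconciliation yields coherent forecasts by construction. A small sketch follows; the hierarchy and numbers are invented for illustration, and the paper's structured-regularization approach is not reproduced here.

```python
import numpy as np

# Two-level hierarchy: total = A + B. The summing matrix S maps the
# bottom-level series to all levels (rows ordered [total, A, B]).
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Hypothetical base forecasts for the bottom-level series A and B.
bottom = np.array([120.0, 80.0])

# Bottom-up reconciliation: the upper-level forecast is literally the
# sum of the lower-level ones, so coherence holds by construction.
coherent = S @ bottom
total, a, b = coherent
assert np.isclose(total, a + b)     # the coherence property
print(coherent)                     # [200. 120.  80.]
```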