A Combination Model Based on Sequential General Variational Mode Decomposition Method for Time Series Prediction
- URL: http://arxiv.org/abs/2406.03157v2
- Date: Fri, 7 Jun 2024 08:43:24 GMT
- Title: A Combination Model Based on Sequential General Variational Mode Decomposition Method for Time Series Prediction
- Authors: Wei Chen, Yuanyuan Yang, Jianyu Liu
- Abstract summary: We construct a new SGVMD-ARIMA combination model, combined non-linearly, to predict financial time series.
Within the prediction interval, the proposed combination model outperforms the traditional decomposition-based prediction models of the control group.
- Score: 11.11205499754577
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Accurate prediction of financial time series is a key concern for market makers and investors. The article selects online store sales and Australian beer sales as representatives of non-stationary, trending, and seasonal financial time series, and constructs a new SGVMD-ARIMA combination model that combines the decomposed components non-linearly to predict financial time series. The ARIMA model, the LSTM model, and other classic decomposition prediction models serve as controls for comparing the accuracy of the different models. The empirical results indicate that the combination prediction model consistently outperforms both the single prediction models and the linear combination prediction models of the control group. Within the prediction interval, the proposed combination model also outperforms the traditional decomposition-based prediction models of the control group.
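The SGVMD decomposition itself is not publicly packaged, so the Python sketch below substitutes statsmodels' `seasonal_decompose` as a stand-in decomposer, fits one ARIMA per component, and learns the combination with a small MLP. Everything here (the stand-in decomposer, the ARIMA orders, the MLP size) is an illustrative assumption about the decompose-predict-combine structure described in the abstract, not the authors' implementation.

```python
# Hedged sketch: decompose -> per-component ARIMA -> non-linear (MLP) combination.
# SGVMD itself is not public; seasonal_decompose is an illustrative stand-in.
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def decompose(series, period=12):
    """Split the series into trend/seasonal/residual components (stand-in for SGVMD)."""
    parts = seasonal_decompose(series, model="additive", period=period,
                               extrapolate_trend="freq")
    return np.column_stack([parts.trend, parts.seasonal, parts.resid])

def forecast_components(components, steps):
    """Fit one ARIMA per component and forecast `steps` ahead.
    The (1, 1, 1) order is an illustrative default, not a tuned choice."""
    cols = []
    for k in range(components.shape[1]):
        fitted = ARIMA(components[:, k], order=(1, 1, 1)).fit()
        cols.append(fitted.forecast(steps))
    return np.column_stack(cols)               # shape: (steps, n_components)

# Toy monthly series with trend + seasonality + noise.
rng = np.random.default_rng(0)
t = np.arange(120)
y = 10 + 0.1 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 120)

comps = decompose(y)
# Non-linear combination: learn a map from component values to the series.
combiner = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
combiner.fit(comps, y)
y_hat = combiner.predict(forecast_components(comps, steps=12))  # 12-step forecast
```

The key design point is the last stage: instead of summing the component forecasts (a linear combination), a learned regressor maps component values to the series, which is what makes the combination non-linear.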
Related papers
- BreakGPT: Leveraging Large Language Models for Predicting Asset Price Surges [55.2480439325792]
This paper introduces BreakGPT, a novel large language model (LLM) architecture adapted specifically for time series forecasting and the prediction of sharp upward movements in asset prices.
We showcase BreakGPT as a promising solution for financial forecasting with minimal training and as a strong competitor for capturing both local and global temporal dependencies.
arXiv Detail & Related papers (2024-11-09T05:40:32Z)
- Conformal online model aggregation [29.43493007296859]
This paper proposes a new approach towards conformal model aggregation in online settings.
It combines the prediction sets from several algorithms by voting, with the per-model weights adapted over time based on past performance (see the sketch after this entry).
arXiv Detail & Related papers (2024-03-22T15:40:06Z)
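A minimal sketch of the weighted-voting aggregation summarized in the entry above, assuming a finite label set and an exponential-weights update; the voting threshold and the update rule are illustrative choices, not the paper's exact algorithm.

```python
# Hedged sketch: aggregate conformal prediction sets by weighted voting,
# adapting per-model weights online from past coverage. Illustrative only.
import numpy as np

class VotingAggregator:
    def __init__(self, n_models, eta=0.1, threshold=0.5):
        self.weights = np.ones(n_models) / n_models  # start uniform
        self.eta = eta                                # learning rate
        self.threshold = threshold                    # vote mass needed

    def aggregate(self, sets):
        """sets[m] is the label set model m's conformal set contains.
        A label enters the aggregate if enough weight votes for it."""
        labels = set().union(*sets)
        return {y for y in labels
                if sum(w for w, s in zip(self.weights, sets) if y in s)
                >= self.threshold}

    def update(self, sets, y_true):
        """Exponential-weights style update: reward models whose set
        covered the realized outcome, penalize the rest."""
        covered = np.array([1.0 if y_true in s else 0.0 for s in sets])
        self.weights *= np.exp(self.eta * (covered - 1.0))
        self.weights /= self.weights.sum()

# Usage: three models emit prediction sets each round.
agg = VotingAggregator(n_models=3)
round_sets = [{"up", "flat"}, {"up"}, {"flat", "down"}]
print(agg.aggregate(round_sets))   # {'up', 'flat'}: each holds 2/3 of the vote mass
agg.update(round_sets, y_true="up")
```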
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications (see the sketch after this entry).
arXiv Detail & Related papers (2024-02-12T16:15:25Z)
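One concrete reading of predictive churn is the fraction of inputs on which two near-optimal models disagree; the sketch below computes that pairwise disagreement over a candidate model set. It illustrates the quantity being studied, not the paper's theoretical estimator for the Rashomon set.

```python
# Hedged sketch: empirical predictive churn between near-optimal models,
# i.e., the fraction of examples where two models' predictions conflict.
import numpy as np
from itertools import combinations

def churn(preds_a, preds_b):
    """Fraction of examples on which two models disagree."""
    return float(np.mean(np.asarray(preds_a) != np.asarray(preds_b)))

def pairwise_churn(model_preds):
    """Max and mean churn over all pairs drawn from a 'good' model set."""
    rates = [churn(a, b) for a, b in combinations(model_preds, 2)]
    return max(rates), float(np.mean(rates))

# Usage: three near-optimal classifiers scored on the same 6 inputs.
preds = [np.array([1, 0, 1, 1, 0, 1]),
         np.array([1, 0, 1, 0, 0, 1]),
         np.array([1, 1, 1, 1, 0, 1])]
worst, avg = pairwise_churn(preds)
print(f"max churn={worst:.2f}, mean churn={avg:.2f}")
```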
- Local Bayesian Dirichlet mixing of imperfect models [0.0]
We study the ability of Bayesian model averaging and mixing techniques to mine nuclear masses.
We show that both the global and local mixtures of models achieve excellent prediction accuracy and uncertainty quantification (see the sketch after this entry).
arXiv Detail & Related papers (2023-11-02T21:02:40Z)
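Bayesian model mixing with Dirichlet-distributed weights can be illustrated by drawing weight vectors from a Dirichlet whose concentration favors models that fit nearby data well, then propagating the draws into a mixed prediction with an uncertainty band. The toy below is an assumed illustration of that idea, not the paper's nuclear-mass machinery.

```python
# Hedged sketch: Dirichlet mixing of K imperfect models. Weight vectors are
# drawn from a Dirichlet whose concentration favors better local fit.
import numpy as np

def dirichlet_mixture(preds, local_log_liks, base_conc=1.0, n_draws=2000, seed=0):
    """preds: (K,) model predictions at a query point.
    local_log_liks: (K,) each model's log-likelihood on nearby data.
    Returns the mixture mean and a 90% draw-based uncertainty band."""
    rng = np.random.default_rng(seed)
    ll = np.asarray(local_log_liks)
    # Concentration grows with local fit, so locally better models dominate.
    conc = base_conc + 10.0 * np.exp(ll - ll.max())
    weights = rng.dirichlet(conc, size=n_draws)   # (n_draws, K) weight draws
    mixtures = weights @ np.asarray(preds)        # mixed prediction per draw
    return mixtures.mean(), np.percentile(mixtures, [5, 95])

# Usage: three models predict a quantity; model 0 fits the local data best.
mean, band = dirichlet_mixture(preds=[8.10, 8.35, 7.90],
                               local_log_liks=[-1.0, -4.0, -6.0])
print(mean, band)
```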
- Attention-Based Ensemble Pooling for Time Series Forecasting [55.2480439325792]
We propose a method for pooling that performs a weighted average over candidate model forecasts.
We test this method on two time-series forecasting problems: multi-step forecasting of the dynamics of the non-stationary Lorenz '63 system, and one-step forecasting of weekly incident deaths due to COVID-19 (see the sketch after this entry).
arXiv Detail & Related papers (2023-10-24T22:59:56Z)
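A weighted average over candidate forecasts, with weights from a scaled dot-product attention score between learned member keys and a learned query, can be sketched as follows; the embedding dimension and scoring rule are assumptions for illustration, not the paper's exact pooling layer.

```python
# Hedged sketch: attention-based pooling of ensemble forecasts. Each member
# forecast is scored against a learned query; softmax scores weight the mean.
import numpy as np

def softmax(x):
    z = x - x.max()
    return np.exp(z) / np.exp(z).sum()

def attention_pool(forecasts, keys, query):
    """forecasts: (K, horizon) member forecasts.
    keys: (K, d) learned per-member embeddings; query: (d,) learned vector.
    Returns the attention-weighted average forecast."""
    scores = keys @ query / np.sqrt(keys.shape[1])  # scaled dot-product
    weights = softmax(scores)                       # (K,)
    return weights @ forecasts                      # (horizon,)

# Usage: pool three members' 4-step forecasts (keys/query would be trained).
rng = np.random.default_rng(1)
forecasts = np.array([[1.0, 1.1, 1.2, 1.3],
                      [0.9, 1.0, 1.1, 1.2],
                      [1.5, 1.6, 1.7, 1.8]])
keys, query = rng.normal(size=(3, 8)), rng.normal(size=8)
print(attention_pool(forecasts, keys, query))
```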
- pTSE: A Multi-model Ensemble Method for Probabilistic Time Series Forecasting [10.441994923253596]
pTSE is a multi-model distribution ensemble method for probabilistic forecasting based on a Hidden Markov Model (HMM).
We provide a complete theoretical analysis of pTSE to prove that the empirical distribution of time series subject to an HMM will converge to the stationary distribution almost surely.
Experiments on benchmarks show the superiority of pTSE over all member models and competitive ensemble methods.
arXiv Detail & Related papers (2023-05-16T07:00:57Z)
- CAMERO: Consistency Regularized Ensemble of Perturbed Language Models with Weight Sharing [83.63107444454938]
We propose a consistency-regularized ensemble learning approach based on perturbed models, named CAMERO.
Specifically, we share the weights of the bottom layers across all models and apply different perturbations to the hidden representations of different models, which effectively promotes model diversity (see the sketch after this entry).
Our experiments using large language models demonstrate that CAMERO significantly improves the generalization performance of the ensemble model.
arXiv Detail & Related papers (2022-04-13T19:54:51Z)
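The shared-bottom, perturbed-branch idea can be sketched in PyTorch as one shared layer, per-branch Gaussian noise on the hidden representation, and a consistency loss pulling the branch outputs toward their average. Layer sizes, the noise form, and the loss weighting are illustrative assumptions, not CAMERO's published configuration.

```python
# Hedged sketch: shared bottom layers + per-branch hidden perturbations,
# trained with a consistency regularizer across branch outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbedEnsemble(nn.Module):
    def __init__(self, d_in=32, d_hid=64, n_classes=4, n_branches=3, sigma=0.1):
        super().__init__()
        self.shared = nn.Linear(d_in, d_hid)          # weights shared by all branches
        self.heads = nn.ModuleList(
            nn.Linear(d_hid, n_classes) for _ in range(n_branches))
        self.sigma = sigma

    def forward(self, x):
        h = torch.relu(self.shared(x))
        outs = []
        for head in self.heads:
            noise = torch.randn_like(h) * self.sigma  # branch-specific perturbation
            outs.append(head(h + noise))
        return outs                                    # one logit tensor per branch

def consistency_loss(outs):
    """KL of each branch toward the averaged predictive distribution."""
    log_probs = [F.log_softmax(o, dim=-1) for o in outs]
    mean_p = torch.stack([lp.exp() for lp in log_probs]).mean(0)
    return sum(F.kl_div(lp, mean_p, reduction="batchmean")
               for lp in log_probs) / len(log_probs)

# Usage: per-branch task loss (cross-entropy) + consistency term.
model = PerturbedEnsemble()
x, y = torch.randn(8, 32), torch.randint(0, 4, (8,))
outs = model(x)
loss = (sum(F.cross_entropy(o, y) for o in outs) / len(outs)
        + 0.5 * consistency_loss(outs))
loss.backward()
```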
- Model-Attentive Ensemble Learning for Sequence Modeling [86.4785354333566]
We present Model-Attentive Ensemble learning for Sequence modeling (MAES).
MAES is a mixture of time-series experts that uses an attention-based gating mechanism to specialize the experts on different sequence dynamics and adaptively weight their predictions (see the sketch after this entry).
We demonstrate that MAES significantly outperforms popular sequence models on datasets subject to temporal shift.
arXiv Detail & Related papers (2021-02-23T05:23:35Z)
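An attention-gated mixture of experts can be sketched as a gating network that scores learned expert embeddings against a summary of the input sequence, then mixes the experts' predictions with the softmax weights. The sketch below collapses each expert to a linear head on a shared encoding for brevity; MAES itself uses full time-series experts, so treat every dimension and module choice here as an assumption.

```python
# Hedged sketch: attention-based gating over time-series experts. The gate
# scores each expert against a sequence summary and mixes their predictions.
import torch
import torch.nn as nn

class AttentiveMixture(nn.Module):
    def __init__(self, d_model=16, n_experts=3, horizon=1):
        super().__init__()
        self.summary = nn.GRU(1, d_model, batch_first=True)   # sequence encoder
        self.expert_emb = nn.Parameter(torch.randn(n_experts, d_model))
        self.experts = nn.ModuleList(
            nn.Linear(d_model, horizon) for _ in range(n_experts))

    def forward(self, x):                      # x: (batch, seq_len, 1)
        _, h = self.summary(x)                 # h: (1, batch, d_model)
        h = h.squeeze(0)                       # (batch, d_model)
        scores = h @ self.expert_emb.T         # attention over expert embeddings
        gates = torch.softmax(scores, dim=-1)  # (batch, n_experts)
        preds = torch.stack([e(h) for e in self.experts], dim=1)  # (batch, E, horizon)
        return (gates.unsqueeze(-1) * preds).sum(dim=1)           # weighted mix

# Usage: one-step-ahead prediction for a batch of 4 sequences of length 20.
model = AttentiveMixture()
x = torch.randn(4, 20, 1)
print(model(x).shape)    # torch.Size([4, 1])
```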
- Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.