DeepVARMA: A Hybrid Deep Learning and VARMA Model for Chemical Industry Index Forecasting
- URL: http://arxiv.org/abs/2404.17615v1
- Date: Fri, 26 Apr 2024 09:15:26 GMT
- Title: DeepVARMA: A Hybrid Deep Learning and VARMA Model for Chemical Industry Index Forecasting
- Authors: Xiang Li, Hu Yang
- Abstract summary: This paper proposes a new prediction model: DeepVARMA, which combines LSTM and VARMAX models.
The experimental results show that the new model achieves the best prediction accuracy.
This study provides more accurate tools and methods for future development and scientific decision-making in the chemical industry.
- Score: 14.738606795298043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since the chemical industry index is one of the important indicators of the development of the chemical industry, forecasting it is critical for understanding the economic situation and trends of the industry. Taking a multivariate nonstationary series, the synthetic material index, as the main research object, this paper proposes a new prediction model, DeepVARMA, and its variants DeepVARMA-re and DeepVARMA-en, which combine LSTM and VARMAX models. The new model first uses a deep learning model, the LSTM, to remove the trend of the target time series and to learn a representation of the endogenous variables; it then uses the VARMAX model to predict the detrended target series with the embeddings of the endogenous variables; finally, it combines the trend learned by the LSTM with the dependency learned by the VARMAX model to obtain the final predictions. The experimental results show that (1) the new model achieves the best prediction accuracy by combining the LSTM encoding of the exogenous variables with the VARMAX model; (2) in multivariate non-stationary series prediction, DeepVARMA's phased processing strategy shows higher adaptability and accuracy than the traditional VARMA model and the machine learning models LSTM, RF, and XGBoost; (3) compared with prediction on smooth sequences, the traditional VARMA and VARMAX models fluctuate more when predicting non-smooth sequences, while DeepVARMA is more flexible and robust. This study provides more accurate tools and methods for future development and scientific decision-making in the chemical industry.
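The phased strategy described in the abstract (detrend the target series, model the detrended residual, then recombine) can be sketched with simple stand-ins: a moving average in place of the LSTM trend learner and a least-squares AR(1) fit in place of the VARMAX residual model. All function names here are illustrative assumptions, not the paper's code.

```python
import numpy as np

def moving_average_trend(y, window=5):
    """Estimate the trend component (stand-in for the LSTM trend learner)."""
    pad = np.concatenate([np.full(window - 1, y[0]), y])
    return np.convolve(pad, np.ones(window) / window, mode="valid")

def fit_ar1(resid):
    """Least-squares AR(1) coefficient on the detrended series (stand-in for VARMAX)."""
    x, y = resid[:-1], resid[1:]
    return (x @ y) / (x @ x)

def forecast_next(y, window=5):
    trend = moving_average_trend(y, window)
    resid = y - trend                    # phase 1: detrend
    phi = fit_ar1(resid)                 # phase 2: model the detrended series
    next_trend = trend[-1]               # naive trend extrapolation
    next_resid = phi * resid[-1]
    return next_trend + next_resid       # phase 3: recombine trend + dependency

rng = np.random.default_rng(0)
t = np.arange(100)
series = 0.1 * t + rng.normal(0, 0.5, 100)  # trending noisy series
print(forecast_next(series))
```

The point of the sketch is only the decomposition: each phase can be swapped for the stronger learner the paper actually uses (LSTM for the trend, VARMAX with exogenous embeddings for the residual).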
Related papers
- Recursive Learning of Asymptotic Variational Objectives [49.69399307452126]
General state-space models (SSMs) are widely used in statistical machine learning and are among the most classical generative models for sequential time-series data.
Online sequential IWAE (OSIWAE) allows for online learning of both model parameters and a Markovian recognition model for inferring latent states.
This approach is more theoretically well-founded than recently proposed online variational SMC methods.
arXiv Detail & Related papers (2024-11-04T16:12:37Z)
- Remaining Useful Life Prediction: A Study on Multidimensional Industrial Signal Processing and Efficient Transfer Learning Based on Large Language Models [6.118896920507198]
This paper introduces an innovative regression framework utilizing large language models (LLMs) for RUL prediction.
Experiments on the Turbofan engine's RUL prediction task show that the proposed model surpasses state-of-the-art (SOTA) methods.
With minimal target domain data for fine-tuning, the model outperforms SOTA methods trained on full target domain data.
arXiv Detail & Related papers (2024-10-04T04:21:53Z)
- On Least Square Estimation in Softmax Gating Mixture of Experts [78.3687645289918]
We investigate the performance of the least squares estimators (LSE) under a deterministic MoE model.
We establish a condition called strong identifiability to characterize the convergence behavior of various types of expert functions.
Our findings have important practical implications for expert selection.
arXiv Detail & Related papers (2024-02-05T12:31:18Z)
- Advanced Deep Regression Models for Forecasting Time Series Oil Production [3.1383147856975633]
This research aims to develop advanced data-driven regression models using sequential convolutions and long short-term memory (LSTM) units.
It reveals that the LSTM-based sequence learning model can predict oil production better than the 1-D convolutional neural network (CNN) with mean absolute error (MAE) and R2 score of 111.16 and 0.98, respectively.
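For reference, the two metrics quoted above (MAE and R2 score) can be defined in a few lines. This is an illustrative sketch with made-up numbers, not the paper's evaluation code.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the forecast errors."""
    return np.mean(np.abs(y_true - y_pred))

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 minus residual over total variance."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.5, 5.0, 7.5, 9.0])
print(mae(y_true, y_pred))       # mean of [0.5, 0, 0.5, 0] = 0.25
print(r2_score(y_true, y_pred))  # 1 - 0.5/20 = 0.975
```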
arXiv Detail & Related papers (2023-08-30T15:54:06Z)
- Differential Evolution Algorithm based Hyper-Parameters Selection of Transformer Neural Network Model for Load Forecasting [0.0]
Transformer models have the potential to improve Load forecasting because of their ability to learn long-range dependencies derived from their Attention Mechanism.
Our work compares the proposed Transformer based Neural Network model integrated with different metaheuristic algorithms by their performance in Load forecasting based on numerical metrics such as Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE)
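The two numerical metrics named in that comparison, MSE and MAPE, have standard definitions that can be stated directly; the snippet below is a minimal illustration with invented values, not the paper's benchmark code.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: penalizes large forecast errors quadratically."""
    return np.mean((y_true - y_pred) ** 2)

def mape(y_true, y_pred):
    """Mean absolute percentage error; assumes y_true contains no zeros."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

y_true = np.array([100.0, 200.0, 400.0])
y_pred = np.array([110.0, 190.0, 420.0])
print(mse(y_true, y_pred))   # mean of [100, 100, 400] = 200.0
print(mape(y_true, y_pred))  # mean of [10%, 5%, 5%] ~ 6.67
```

MAPE is scale-free, which is why load-forecasting comparisons often report it alongside MSE.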
arXiv Detail & Related papers (2023-07-28T04:29:53Z)
- G-NM: A Group of Numerical Time Series Prediction Models [0.0]
Group of Numerical Time Series Prediction Model (G-NM) encapsulates both linear and non-linear dependencies, seasonalities, and trends present in time series data.
G-NM is explicitly constructed to augment our predictive capabilities related to patterns and trends inherent in complex natural phenomena.
arXiv Detail & Related papers (2023-06-20T16:39:27Z)
- Benchmarking Machine Learning Robustness in Covid-19 Genome Sequence Classification [109.81283748940696]
We introduce several ways to perturb SARS-CoV-2 genome sequences to mimic the error profiles of common sequencing platforms such as Illumina and PacBio.
We show that some simulation-based approaches are more robust (and accurate) than others for specific embedding methods to certain adversarial attacks to the input sequences.
arXiv Detail & Related papers (2022-07-18T19:16:56Z)
- Bayesian Active Learning for Discrete Latent Variable Models [19.852463786440122]
Active learning seeks to reduce the amount of data required to fit the parameters of a model.
Latent variable models play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines.
arXiv Detail & Related papers (2022-02-27T19:07:12Z)
- A multi-stage machine learning model on diagnosis of esophageal manometry [50.591267188664666]
The framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage.
This is the first artificial-intelligence-style model to automatically predict CC diagnosis of HRM study from raw multi-swallow data.
arXiv Detail & Related papers (2021-06-25T20:09:23Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.