Deep Learning Enhanced Multivariate GARCH
- URL: http://arxiv.org/abs/2506.02796v1
- Date: Tue, 03 Jun 2025 12:22:57 GMT
- Title: Deep Learning Enhanced Multivariate GARCH
- Authors: Haoyuan Wang, Chen Liu, Minh-Ngoc Tran, Chao Wang
- Abstract summary: Long Short-Term Memory enhanced BEKK (LSTM-BEKK) integrates deep learning into multivariate GARCH processes. Our approach is designed to better capture nonlinear, dynamic, and high-dimensional dependence structures in financial return data. Empirical results across multiple equity markets confirm that the LSTM-BEKK model achieves superior performance in out-of-sample portfolio risk forecasting.
- Score: 7.475786051454157
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper introduces a novel multivariate volatility modeling framework, named Long Short-Term Memory enhanced BEKK (LSTM-BEKK), that integrates deep learning into multivariate GARCH processes. By combining the flexibility of recurrent neural networks with the econometric structure of BEKK models, our approach is designed to better capture nonlinear, dynamic, and high-dimensional dependence structures in financial return data. The proposed model addresses key limitations of traditional multivariate GARCH-based methods, particularly in capturing persistent volatility clustering and asymmetric co-movement across assets. Leveraging the data-driven nature of LSTMs, the framework adapts effectively to time-varying market conditions, offering improved robustness and forecasting performance. Empirical results across multiple equity markets confirm that the LSTM-BEKK model achieves superior performance in out-of-sample portfolio risk forecasts, while maintaining the interpretability of the BEKK model. These findings highlight the potential of hybrid econometric-deep learning models in advancing financial risk management and multivariate volatility forecasting.
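For context, a minimal NumPy sketch of the standard BEKK(1,1) recursion that the paper builds on. The parameter values are illustrative only, and the LSTM enhancement itself is not implemented here; in the paper's variant, an LSTM driven by past returns would make the intercept term time-varying rather than constant.

```python
import numpy as np

def bekk_recursion(returns, C, A, B):
    """Standard BEKK(1,1): H_t = C C' + A' r_{t-1} r_{t-1}' A + B' H_{t-1} B.

    Sketch only. In the LSTM-BEKK variant described in the abstract, the
    constant intercept C C' would be replaced by a time-varying component
    produced by an LSTM from past returns; here it stays constant.
    """
    T, n = returns.shape
    H = np.empty((T, n, n))
    H[0] = np.cov(returns.T)            # initialize with the sample covariance
    CC = C @ C.T                        # constant intercept term
    for t in range(1, T):
        r = returns[t - 1][:, None]     # column vector r_{t-1}
        H[t] = CC + A.T @ (r @ r.T) @ A + B.T @ H[t - 1] @ B
    return H

# Illustrative two-asset example with simulated returns.
rng = np.random.default_rng(0)
rets = rng.standard_normal((100, 2)) * 0.01
C = np.array([[0.01, 0.0], [0.002, 0.01]])   # lower-triangular intercept
A = 0.3 * np.eye(2)
B = 0.9 * np.eye(2)
H = bekk_recursion(rets, C, A, B)
```

Each term in the recursion is a symmetric quadratic form, so every `H[t]` stays a valid (symmetric, positive-definite) covariance matrix, which is the structural guarantee the BEKK parameterization provides.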
Related papers
- FUDOKI: Discrete Flow-based Unified Understanding and Generation via Kinetic-Optimal Velocities [76.46448367752944]
Multimodal large language models (MLLMs) unify visual understanding and image generation within a single framework. Most existing MLLMs rely on autoregressive (AR) architectures, which impose inherent limitations on future development. We introduce FUDOKI, a unified multimodal model purely based on discrete flow matching.
arXiv Detail & Related papers (2025-05-26T15:46:53Z) - Enhancing Black-Litterman Portfolio via Hybrid Forecasting Model Combining Multivariate Decomposition and Noise Reduction [13.04801847533423]
We propose a novel hybrid forecasting model, SSA-MAEMD-TCN, to automate and improve the view-generation process. Empirical tests on Nasdaq 100 Index stocks show a significant improvement in forecasting performance compared to baseline models. The optimized portfolio performs well, with annualized returns and Sharpe ratios far exceeding those of the traditional portfolio.
arXiv Detail & Related papers (2025-05-03T10:52:57Z) - BreakGPT: Leveraging Large Language Models for Predicting Asset Price Surges [55.2480439325792]
This paper introduces BreakGPT, a novel large language model (LLM) architecture adapted specifically for time series forecasting and the prediction of sharp upward movements in asset prices.
We showcase BreakGPT as a promising solution for financial forecasting with minimal training and as a strong competitor for capturing both local and global temporal dependencies.
arXiv Detail & Related papers (2024-11-09T05:40:32Z) - The Risk of Federated Learning to Skew Fine-Tuning Features and
Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z) - Embedded feature selection in LSTM networks with multi-objective
evolutionary ensemble learning for time series forecasting [49.1574468325115]
We present a novel feature selection method embedded in Long Short-Term Memory networks.
Our approach optimizes the weights and biases of the LSTM in a partitioned manner.
Experimental evaluations on air quality time series data from Italy and southeast Spain demonstrate that our method substantially improves the generalization ability of conventional LSTMs.
arXiv Detail & Related papers (2023-12-29T08:42:10Z) - Deep Learning Enhanced Realized GARCH [6.211385208178938]
We propose a new approach to volatility modeling by combining deep learning (LSTM) and realized volatility measures.
This LSTM-enhanced realized GARCH framework incorporates and distills modeling advances from financial econometrics, high frequency trading data and deep learning.
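For context, a minimal sketch of the log-linear Realized GARCH variance recursion that such hybrids build on. The parameter values are illustrative, not taken from the paper, and the LSTM-enhanced variant would augment or replace this fixed linear recursion with a learned one:

```python
import numpy as np

def realized_garch_filter(rv, omega=-0.2, beta=0.55, gamma=0.4):
    """Log-linear Realized GARCH variance recursion (sketch):
        log h_t = omega + beta * log h_{t-1} + gamma * log x_{t-1},
    where x_t is a realized variance measure built from high-frequency
    data. Parameters here are illustrative only; the LSTM-enhanced
    variant would learn this dynamic instead of fixing it linearly.
    """
    rv = np.asarray(rv, dtype=float)
    logh = np.empty(len(rv))
    logh[0] = np.log(rv[0])             # initialize at the first realized measure
    for t in range(1, len(rv)):
        logh[t] = omega + beta * logh[t - 1] + gamma * np.log(rv[t - 1])
    return np.exp(logh)                 # conditional variances h_t

# Illustrative run on a constant realized-variance series.
rv = np.full(50, 1e-4)
h = realized_garch_filter(rv)
```

Working on the log scale keeps the filtered variances strictly positive without parameter constraints, which is one reason the log-linear specification is the common baseline.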
arXiv Detail & Related papers (2023-02-16T00:20:43Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Forecasting High-Dimensional Covariance Matrices of Asset Returns with
Hybrid GARCH-LSTMs [0.0]
This paper investigates the ability of hybrid models, mixing GARCH processes and neural networks, to forecast covariance matrices of asset returns.
The proposed model is very promising: it not only outperforms the equally weighted portfolio but also beats its econometric counterpart by a significant margin.
arXiv Detail & Related papers (2021-08-25T23:41:43Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Parsimonious Quantile Regression of Financial Asset Tail Dynamics via
Sequential Learning [35.34574502348672]
We propose a parsimonious quantile regression framework to learn the dynamic tail behaviors of financial asset returns.
Our model captures well both the time-varying characteristics and the asymmetric heavy-tail property of financial time series.
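The building block of such quantile regression frameworks is the pinball (quantile) loss; a minimal sketch follows, with function name and data chosen for illustration only:

```python
import numpy as np

def pinball_loss(y, q_pred, alpha):
    """Pinball (quantile) loss at probability level alpha: the objective
    a quantile-regression model of tail dynamics minimizes. For each
    residual u = y - q_pred it charges alpha*u above the quantile and
    (alpha-1)*u below it, so underestimating a low quantile is penalized
    asymmetrically."""
    u = np.asarray(y, dtype=float) - np.asarray(q_pred, dtype=float)
    return float(np.mean(np.maximum(alpha * u, (alpha - 1.0) * u)))

# The empirical alpha-quantile attains a lower loss than an off-target value.
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
loss_near_quantile = pinball_loss(y, 1.4, alpha=0.1)   # near the 10% quantile
loss_off_target = pinball_loss(y, 3.0, alpha=0.1)      # the median, too high
```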
arXiv Detail & Related papers (2020-10-16T09:35:52Z) - A Bayesian Long Short-Term Memory Model for Value at Risk and Expected
Shortfall Joint Forecasting [26.834110647177965]
Value-at-Risk (VaR) and Expected Shortfall (ES) are widely used in the financial sector to measure market risk and manage extreme market movements.
A recent link between the quantile score function and the Asymmetric Laplace density has led to a flexible likelihood-based framework for joint modelling of VaR and ES.
We develop a hybrid model that is based on the Asymmetric Laplace quasi-likelihood and employs the Long Short-Term Memory (LSTM) time series modelling technique from Machine Learning to capture efficiently the underlying dynamics of VaR and ES.
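A hedged sketch of the Asymmetric Laplace negative quasi-log-likelihood used for joint VaR/ES modelling, following the usual lower-tail sign convention (VaR and ES forecasts are negative, with ES below VaR); the exact parameterization in the paper may differ:

```python
import numpy as np

def al_joint_nll(returns, var_f, es_f, alpha=0.05):
    """Negative Asymmetric Laplace quasi-log-likelihood for joint
    VaR/ES forecasts at level alpha (sketch). Per observation:
        log f = log((alpha - 1)/ES_t) + (r_t - VaR_t)(alpha - 1{r_t <= VaR_t})
                / (alpha * ES_t),
    which is well defined for ES_t < 0 since (alpha - 1)/ES_t > 0.
    Minimizing the mean negative log-likelihood jointly calibrates the
    VaR and ES paths, e.g. those produced by an LSTM."""
    r = np.asarray(returns, dtype=float)
    q = np.asarray(var_f, dtype=float)
    e = np.asarray(es_f, dtype=float)
    hit = (r <= q).astype(float)                       # VaR violation indicator
    ll = np.log((alpha - 1.0) / e) + (r - q) * (alpha - hit) / (alpha * e)
    return -float(np.mean(ll))

# Illustrative evaluation on two observations with fixed forecasts.
score = al_joint_nll(returns=[-0.02, 0.01], var_f=[-0.03, -0.03],
                     es_f=[-0.05, -0.05], alpha=0.05)
```

Because this quasi-likelihood depends on the data only through the VaR and ES forecasts, it serves both as an estimation objective and as an out-of-sample joint scoring rule for the two risk measures.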
arXiv Detail & Related papers (2020-01-23T05:13:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.