Forecasting High-Dimensional Covariance Matrices of Asset Returns with
Hybrid GARCH-LSTMs
- URL: http://arxiv.org/abs/2109.01044v1
- Date: Wed, 25 Aug 2021 23:41:43 GMT
- Title: Forecasting High-Dimensional Covariance Matrices of Asset Returns with
Hybrid GARCH-LSTMs
- Authors: Lucien Boulet
- Abstract summary: This paper investigates the ability of hybrid models, mixing GARCH processes and neural networks, to forecast covariance matrices of asset returns.
The proposed model is very promising: it not only outperforms the equally weighted portfolio but also beats its econometric counterpart by a significant margin.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several academics have studied the ability of hybrid models mixing univariate
Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models and
neural networks to deliver better volatility predictions than purely
econometric models. Despite presenting very promising results, the
generalization of such models to the multivariate case has yet to be studied.
Moreover, very few papers have examined the ability of neural networks to
predict the covariance matrix of asset returns, and all use a rather small
number of assets, thus not addressing what is known as the curse of
dimensionality. The goal of this paper is to investigate the ability of hybrid
models, mixing GARCH processes and neural networks, to forecast covariance
matrices of asset returns. To do so, we propose a new model, based on
multivariate GARCHs that decompose volatility and correlation predictions. The
volatilities are here forecast using hybrid neural networks while correlations
follow a traditional econometric process. After implementing the models in a
minimum variance portfolio framework, our results are as follows. First, the
addition of GARCH parameters as inputs is beneficial to the model proposed.
Second, using one-hot encoding to help the neural network differentiate
between stocks improves the performance. Third, the proposed model is
very promising: it not only outperforms the equally weighted portfolio but
also beats, by a significant margin, its econometric counterpart that uses
univariate GARCHs to predict the volatilities.
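As a concrete illustration of the pipeline the abstract describes, here is a minimal sketch, not the author's implementation: one univariate GARCH(1,1) per asset (fitted with the `arch` package) supplies parameters to an LSTM that also sees a window of past returns and a one-hot asset identifier, the resulting volatility forecasts are combined with a correlation matrix (a sample correlation matrix stands in here for the econometric, DCC-type process), and the covariance forecast feeds a minimum variance portfolio. Window length, layer sizes and the training loop (omitted) are illustrative assumptions.

```python
# Minimal sketch of the pipeline described in the abstract; the `arch`
# package supplies the univariate GARCH(1,1) fits and PyTorch the LSTM.
# Layer sizes, the 21-day window and the sample correlation matrix
# (standing in for the econometric, DCC-type correlation process)
# are illustrative assumptions, and the training loop is omitted.
import numpy as np
import torch
import torch.nn as nn
from arch import arch_model

LOOKBACK = 21  # length of the return window fed to the LSTM (assumption)

class HybridVolLSTM(nn.Module):
    """LSTM over past returns, conditioned on GARCH parameters and a one-hot asset id."""
    def __init__(self, n_assets: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        # 3 GARCH(1,1) parameters (omega, alpha, beta) + one-hot asset identifier
        self.head = nn.Sequential(
            nn.Linear(hidden + 3 + n_assets, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # volatility must stay positive
        )

    def forward(self, windows, garch_params, one_hot):
        # windows: (batch, LOOKBACK, 1); garch_params: (batch, 3); one_hot: (batch, n_assets)
        _, (h, _) = self.lstm(windows)
        features = torch.cat([h[-1], garch_params, one_hot], dim=1)
        return self.head(features).squeeze(-1)  # predicted sigma_{t+1} per sample

def fit_garch_params(asset_returns: np.ndarray) -> np.ndarray:
    """Fit a univariate GARCH(1,1) and return (omega, alpha, beta)."""
    res = arch_model(asset_returns, vol="GARCH", p=1, q=1).fit(disp="off")
    return np.array([res.params["omega"], res.params["alpha[1]"], res.params["beta[1]"]])

def forecast_covariance(model: HybridVolLSTM, returns: np.ndarray) -> np.ndarray:
    """Combine per-asset LSTM volatility forecasts with a correlation matrix."""
    n = returns.shape[1]
    sigmas = np.empty(n)
    for i in range(n):
        params = torch.tensor(fit_garch_params(returns[:, i]), dtype=torch.float32).unsqueeze(0)
        window = torch.tensor(returns[-LOOKBACK:, i], dtype=torch.float32).view(1, -1, 1)
        one_hot = torch.zeros(1, n)
        one_hot[0, i] = 1.0
        with torch.no_grad():
            sigmas[i] = model(window, params, one_hot).item()
    corr = np.corrcoef(returns, rowvar=False)  # stand-in for the DCC correlations
    return np.outer(sigmas, sigmas) * corr     # Sigma = D R D with D = diag(sigma)

def min_variance_weights(cov: np.ndarray) -> np.ndarray:
    """Unconstrained minimum variance weights w = Sigma^-1 1 / (1' Sigma^-1 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()
```

The sketch only mirrors the decomposition covariance = volatilities x correlations and the minimum variance allocation step; the volatility target, training scheme and exact correlation model are specified in the paper itself.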
Related papers
- Dynamic Post-Hoc Neural Ensemblers [55.15643209328513]
In this study, we explore employing neural networks as ensemble methods.
Motivated by the risk of learning low-diversity ensembles, we propose regularizing the model by randomly dropping base model predictions.
We demonstrate that this approach lower-bounds the diversity within the ensemble, reducing overfitting and improving generalization.
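As a rough illustration of the prediction-dropout regularizer described above (module names and sizes are assumptions, not the paper's code):

```python
# Illustrative sketch only: a neural ensembler over K base-model predictions,
# regularized by randomly dropping base predictions (nn.Dropout) during training.
import torch
import torch.nn as nn

class NeuralEnsembler(nn.Module):
    def __init__(self, n_base_models: int, p_drop: float = 0.2):
        super().__init__()
        self.drop = nn.Dropout(p=p_drop)  # randomly zero individual base predictions
        self.mix = nn.Sequential(
            nn.Linear(n_base_models, 16), nn.ReLU(), nn.Linear(16, 1)
        )

    def forward(self, base_preds):        # base_preds: (batch, n_base_models)
        return self.mix(self.drop(base_preds))
```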
arXiv Detail & Related papers (2024-10-06T15:25:39Z)
- A Dynamic Approach to Stock Price Prediction: Comparing RNN and Mixture of Experts Models Across Different Volatility Profiles [0.0]
The MoE framework combines an RNN for volatile stocks and a linear model for stable stocks, dynamically adjusting the weight of each model through a gating network.
Results indicate that the MoE approach significantly improves predictive accuracy across different volatility profiles.
The MoE model's adaptability allows it to outperform each individual model, reducing errors such as Mean Squared Error (MSE) and Mean Absolute Error (MAE).
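A compact sketch of such a gating mixture, assuming PyTorch, a single return window per sample and illustrative layer sizes (none of this is taken from the paper's code):

```python
# Illustrative sketch only: an RNN expert, a linear expert and a gating
# network that weights their predictions per sample.
import torch
import torch.nn as nn

class MoEForecaster(nn.Module):
    def __init__(self, lookback: int, hidden: int = 16):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.rnn_head = nn.Linear(hidden, 1)         # RNN expert (volatile stocks)
        self.linear_expert = nn.Linear(lookback, 1)  # linear expert (stable stocks)
        self.gate = nn.Sequential(nn.Linear(lookback, 2), nn.Softmax(dim=-1))

    def forward(self, x):                  # x: (batch, lookback) past observations
        _, h = self.rnn(x.unsqueeze(-1))   # h: (num_layers, batch, hidden)
        preds = torch.cat([self.rnn_head(h[-1]), self.linear_expert(x)], dim=-1)
        weights = self.gate(x)             # per-sample mixture weights
        return (weights * preds).sum(dim=-1, keepdim=True)  # blended forecast
```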
arXiv Detail & Related papers (2024-10-04T14:36:21Z)
- GARCH-Informed Neural Networks for Volatility Prediction in Financial Markets [0.0]
We present a new hybrid deep learning model that captures and forecasts market volatility more accurately than either class of models can on its own.
When compared to other time series models, GINN showed superior out-of-sample prediction performance in terms of the Coefficient of Determination ($R^2$), Mean Squared Error (MSE), and Mean Absolute Error (MAE).
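One way to read "GARCH-informed", sketched below as an assumption about the general idea rather than the GINN architecture itself, is to feed the fitted GARCH(1,1) conditional volatility to the network as an extra input channel:

```python
# Hedged sketch of the general "inform a network with GARCH output" idea
# (an illustration, not the GINN architecture): the fitted GARCH(1,1)
# conditional volatility is supplied as a second input channel alongside
# the raw returns.
import numpy as np
import torch
import torch.nn as nn
from arch import arch_model

def garch_conditional_volatility(returns: np.ndarray) -> np.ndarray:
    """In-sample conditional volatility from a univariate GARCH(1,1) fit."""
    res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
    return np.asarray(res.conditional_volatility)

class GarchFeatureLSTM(nn.Module):
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Softplus())

    def forward(self, returns, garch_vol):             # both: (batch, window)
        x = torch.stack([returns, garch_vol], dim=-1)  # (batch, window, 2)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                        # one-step-ahead volatility
```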
arXiv Detail & Related papers (2024-09-30T23:53:54Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important when forecasting nonstationary processes or processes with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate this tessellation and approximate the multiple hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- A Statistics and Deep Learning Hybrid Method for Multivariate Time Series Forecasting and Mortality Modeling [0.0]
Exponential Smoothing Recurrent Neural Network (ES-RNN) is a hybrid between a statistical forecasting model and a recurrent neural network variant.
ES-RNN achieves a 9.4% improvement in absolute error in the Makridakis-4 Forecasting Competition.
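A rough sketch of the exponential-smoothing-plus-RNN idea, assuming a single positive-valued, non-seasonal series and a fixed smoothing parameter (the actual ES-RNN also handles seasonality and learns its smoothing coefficients):

```python
# Rough sketch: simple exponential smoothing provides a level that
# normalizes the series, an RNN forecasts the normalized values, and the
# final forecast re-applies the level. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

def smoothing_levels(y: torch.Tensor, alpha: float = 0.3) -> torch.Tensor:
    """Simple exponential smoothing: l_t = alpha*y_t + (1-alpha)*l_{t-1}."""
    levels = [y[0]]
    for t in range(1, len(y)):
        levels.append(alpha * y[t] + (1 - alpha) * levels[-1])
    return torch.stack(levels)

class ESRNNSketch(nn.Module):
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, y):                         # y: (T,) one positive series
        levels = smoothing_levels(y)
        normalized = (y / levels).view(1, -1, 1)  # de-levelled inputs
        _, h = self.rnn(normalized)
        # forecast the next normalized value, then restore the level
        return self.head(h[-1]).squeeze() * levels[-1]
```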
arXiv Detail & Related papers (2021-12-16T04:44:19Z)
- Sparse MoEs meet Efficient Ensembles [49.313497379189315]
We study the interplay of two popular classes of such models: ensembles of neural networks and sparse mixtures of experts (sparse MoEs).
We present Efficient Ensemble of Experts (E$^3$), a scalable and simple ensemble of sparse MoEs that takes the best of both classes of models while using up to 45% fewer FLOPs than a deep ensemble.
arXiv Detail & Related papers (2021-10-07T11:58:35Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Recurrent Conditional Heteroskedasticity [0.0]
We propose a new class of financial volatility models, called the REcurrent Conditional Heteroskedastic (RECH) models.
In particular, we incorporate auxiliary deterministic processes, governed by recurrent neural networks, into the conditional variance of the traditional conditional heteroskedastic models.
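In rough sketch form (my notation and sizes, not the paper's code), the RECH idea of an RNN-governed auxiliary process inside the conditional variance looks like this:

```python
# Loose sketch: the GARCH(1,1) intercept becomes a time-varying omega_t
# produced by a recurrent network driven by past data, so
# sigma2_t = omega_t + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
import torch
import torch.nn as nn

class RECHVariance(nn.Module):
    def __init__(self, hidden: int = 8):
        super().__init__()
        self.cell = nn.GRUCell(input_size=2, hidden_size=hidden)
        self.to_omega = nn.Sequential(nn.Linear(hidden, 1), nn.Softplus())
        self.alpha = nn.Parameter(torch.tensor(0.05))
        self.beta = nn.Parameter(torch.tensor(0.90))

    def forward(self, r):                   # r: (T,) returns
        h = torch.zeros(1, self.cell.hidden_size)
        sigma2 = r.var().reshape(1)         # crude initialization
        out = [sigma2]
        for t in range(1, len(r)):
            # auxiliary deterministic process: update the RNN on past data
            h = self.cell(torch.stack([r[t - 1], sigma2[0]]).view(1, 2), h)
            omega_t = self.to_omega(h).view(1)
            sigma2 = omega_t + self.alpha * r[t - 1] ** 2 + self.beta * sigma2
            out.append(sigma2)
        return torch.cat(out)               # conditional variances sigma2_1..T
```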
arXiv Detail & Related papers (2020-10-25T08:09:29Z)
- Parsimonious Quantile Regression of Financial Asset Tail Dynamics via Sequential Learning [35.34574502348672]
We propose a parsimonious quantile regression framework to learn the dynamic tail behaviors of financial asset returns.
Our model captures well both the time-varying characteristic and the asymmetrical heavy-tail property of financial time series.
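For reference, the quantile (pinball) loss that quantile-regression approaches to tail dynamics are built on; the model producing the predictions is left abstract and is not the paper's architecture:

```python
# Minimal illustration of the quantile (pinball) loss.
import torch

def pinball_loss(pred: torch.Tensor, target: torch.Tensor, tau: float) -> torch.Tensor:
    """Average check loss for quantile level tau (e.g. tau=0.05 for the left tail)."""
    error = target - pred
    return torch.mean(torch.maximum(tau * error, (tau - 1.0) * error))
```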
arXiv Detail & Related papers (2020-10-16T09:35:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.