Neural Generalised AutoRegressive Conditional Heteroskedasticity
- URL: http://arxiv.org/abs/2202.11285v1
- Date: Wed, 23 Feb 2022 03:23:02 GMT
- Title: Neural Generalised AutoRegressive Conditional Heteroskedasticity
- Authors: Zexuan Yin and Paolo Barucca
- Abstract summary: We propose Neural GARCH, a class of methods to model conditional heteroskedasticity in financial time series.
We allow the coefficients of a GARCH model to be time-varying in order to reflect the constantly changing dynamics of financial markets.
We propose two variants of our model, one with normal innovations and the other with Student's t innovations.
We find that the Neural Student's t model consistently outperforms the others.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose Neural GARCH, a class of methods to model conditional
heteroskedasticity in financial time series. Neural GARCH is a neural network
adaptation of the GARCH(1,1) model in the univariate case, and of the diagonal
BEKK(1,1) model in the multivariate case. We allow the coefficients of a GARCH
model to be time-varying in order to reflect the constantly changing dynamics
of financial markets. The time-varying coefficients are parameterised by a
recurrent neural network that is trained with stochastic gradient variational
Bayes. We propose two variants of our model, one with normal innovations and
the other with Student's t innovations. We test our models on a wide range of
univariate and multivariate financial time series, and we find that the Neural
Student's t model consistently outperforms the others.
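
A minimal sketch of the recursion the abstract describes, for the univariate normal-innovations variant: a recurrent network emits time-varying coefficients $(\omega_t, \alpha_t, \beta_t)$ that drive a GARCH(1,1) variance update. The class name, layer sizes, and the softplus positivity constraint are illustrative assumptions, and the sketch returns a plain log-likelihood rather than the stochastic gradient variational Bayes objective the paper actually optimises.

```python
import torch
import torch.nn as nn

class NeuralGARCHSketch(nn.Module):
    """GARCH(1,1) whose coefficients are emitted at each step by an RNN (sketch)."""
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.rnn = nn.GRUCell(1, hidden)   # consumes the previous return
        self.head = nn.Linear(hidden, 3)   # -> (omega_t, alpha_t, beta_t)
        self.hidden = hidden

    def forward(self, r: torch.Tensor):    # r: (T,) de-meaned returns
        h = torch.zeros(1, self.hidden)
        var = r.var().detach()             # starting conditional variance
        log_lik = torch.tensor(0.0)
        for t in range(1, len(r)):
            h = self.rnn(r[t - 1].view(1, 1), h)
            # softplus keeps the time-varying coefficients positive (assumption)
            omega, alpha, beta = nn.functional.softplus(self.head(h)).squeeze(0)
            var = omega + alpha * r[t - 1] ** 2 + beta * var
            # normal-innovations variant: Gaussian log-likelihood of r_t
            log_lik = log_lik - 0.5 * (torch.log(2 * torch.pi * var) + r[t] ** 2 / var)
        return log_lik
```

Maximising this log-likelihood with any gradient optimiser is a maximum-likelihood stand-in for the paper's variational training.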
Related papers
- GARCH-Informed Neural Networks for Volatility Prediction in Financial Markets [0.0]
We present a new hybrid Deep Learning model that captures and forecasts market volatility more accurately than either class of models can on its own.
When compared to other time series models, GINN showed superior out-of-sample prediction performance in terms of the Coefficient of Determination ($R^2$), Mean Squared Error (MSE), and Mean Absolute Error (MAE).
arXiv Detail & Related papers (2024-09-30T23:53:54Z)
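
One plausible shape for a GARCH-informed network, sketched under assumptions: an LSTM forecasts variance, and its loss penalises disagreement with a fitted GARCH(1,1) forecast, in the spirit of physics-informed training. The loss composition, the weight `lam`, and all names here are illustrative, not the paper's specification.

```python
import torch
import torch.nn as nn

class HybridVolNet(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, r):                      # r: (B, T, 1) returns
        h, _ = self.lstm(r)
        return nn.functional.softplus(self.out(h))  # positive variance path

def ginn_style_loss(pred_var, realised_var, garch_var, lam=0.5):
    data_term = nn.functional.mse_loss(pred_var, realised_var)
    garch_term = nn.functional.mse_loss(pred_var, garch_var)   # GARCH as "physics"
    return data_term + lam * garch_term
```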
- Copula Variational LSTM for High-dimensional Cross-market Multivariate Dependence Modeling [46.75628526959982]
We make the first attempt to integrate variational sequential neural learning with copula-based dependence modeling.
Our variational neural network, WPVC-VLSTM, models the degrees and structures of variational sequential dependence across time series.
It outperforms benchmarks including linear models, volatility models, deep neural networks, and variational recurrent networks in cross-market portfolio forecasting.
arXiv Detail & Related papers (2023-05-09T08:19:08Z)
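
The copula half of the entry above can be illustrated in isolation with a plain Gaussian-copula fit to two synthetic return series; the toy data and the rank-based transform below are illustrative and independent of the paper's WPVC-VLSTM architecture.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)
x = rng.standard_t(df=5, size=1000)            # toy "market A" returns
y = 0.6 * x + rng.standard_t(df=5, size=1000)  # toy "market B" returns

# map to pseudo-observations in (0, 1) via ranks, then to normal scores
u = rankdata(x) / (len(x) + 1)
v = rankdata(y) / (len(y) + 1)
z = norm.ppf(np.column_stack([u, v]))

rho = np.corrcoef(z.T)[0, 1]                   # Gaussian-copula dependence
print(f"copula correlation: {rho:.3f}")
```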
- On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes to factorize the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero- and few-shot adaptation in low-data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z)
- Variational Heteroscedastic Volatility Model [0.0]
We propose an end-to-end neural network architecture capable of modelling heteroscedastic behaviour in financial time series.
VHVM consists of a variational autoencoder to capture relationships between assets, and a recurrent neural network to model the time-evolution of these dependencies.
arXiv Detail & Related papers (2022-04-11T15:29:46Z)
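
A rough sketch of the VAE-plus-RNN composition the VHVM summary describes: an encoder produces latent states per time step, a GRU evolves them, and a decoder emits a positive semi-definite covariance via a Cholesky-style factor. Layer sizes and the covariance parameterisation are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class VHVMSketch(nn.Module):
    def __init__(self, n_assets: int, latent: int = 8, hidden: int = 32):
        super().__init__()
        self.enc = nn.Linear(n_assets, 2 * latent)     # -> (mu, log sigma)
        self.rnn = nn.GRU(latent, hidden, batch_first=True)
        self.dec = nn.Linear(hidden, n_assets * n_assets)
        self.n = n_assets

    def forward(self, r):                              # r: (B, T, n_assets)
        mu, log_sig = self.enc(r).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * log_sig.exp()  # reparameterisation trick
        h, _ = self.rnn(z)
        L = self.dec(h).view(r.shape[0], r.shape[1], self.n, self.n).tril()
        return L @ L.transpose(-1, -2)                 # PSD covariance per step
```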
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks [62.48782506095565]
We show that due to the greedy nature of learning in deep neural networks, models tend to rely on just one modality while under-fitting the other modalities.
We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning.
arXiv Detail & Related papers (2022-02-10T20:11:21Z)
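
One generic way to realise "balancing conditional learning speeds", sketched under assumptions: estimate how fast each modality's unimodal loss has been falling and shrink the gradient scale of the faster (greedier) branch. The speed estimate and weighting rule below are illustrative heuristics, not the paper's algorithm.

```python
import torch

def modality_weights(loss_history, eps=1e-8):
    """loss_history: dict modality -> recent unimodal losses (oldest first)."""
    speeds = {m: max(h[0] - h[-1], 0.0) + eps for m, h in loss_history.items()}
    total = sum(speeds.values())
    # a faster-learning (greedier) modality receives a smaller gradient scale
    return {m: (total - s) / total for m, s in speeds.items()}

def rescale_branch_grads(branches, weights):
    """branches: dict modality -> torch.nn.Module; scales grads in place."""
    for m, module in branches.items():
        for p in module.parameters():
            if p.grad is not None:
                p.grad.mul_(weights[m])
```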
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
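
A compact sketch of the two ingredients the SISVAE summary names: a network that outputs a mean and variance for every time stamp, and a smoothness-inducing penalty on consecutive means. The quadratic penalty and weight `lam` are assumptions; the paper's exact smoothness mechanism may differ.

```python
import torch
import torch.nn as nn

class GaussianPerStep(nn.Module):
    """Emits a mean and variance for every time stamp."""
    def __init__(self, hidden: int = 32, dim: int = 1):
        super().__init__()
        self.rnn = nn.GRU(dim, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, dim)
        self.log_var = nn.Linear(hidden, dim)

    def forward(self, x):                    # x: (B, T, dim)
        h, _ = self.rnn(x)
        return self.mu(h), self.log_var(h)

def smooth_gaussian_loss(x, mu, log_var, lam=0.1):
    nll = 0.5 * (log_var + (x - mu) ** 2 / log_var.exp()).mean()
    smooth = (mu[:, 1:] - mu[:, :-1]).pow(2).mean()  # smoothness-inducing term
    return nll + lam * smooth
```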
- Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z)
- Recurrent Conditional Heteroskedasticity [0.0]
We propose a new class of financial volatility models, called the REcurrent Conditional Heteroskedastic (RECH) models.
In particular, we incorporate auxiliary deterministic processes, governed by recurrent neural networks, into the conditional variance of the traditional conditional heteroskedastic models.
arXiv Detail & Related papers (2020-10-25T08:09:29Z)
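
A sketch of the RECH idea in the summary above: the usual GARCH(1,1) variance recursion gains an auxiliary nonnegative component produced by a recurrent network. The initialisation, the softplus constraint, and the GRU inputs are assumptions; the paper's parameterisation and Bayesian treatment are richer than this.

```python
import torch
import torch.nn as nn

class RECHSketch(nn.Module):
    def __init__(self, hidden: int = 8):
        super().__init__()
        self.rnn = nn.GRUCell(2, hidden)       # sees (eps_{t-1}, sigma2_{t-1})
        self.g = nn.Linear(hidden, 1)
        self.omega = nn.Parameter(torch.tensor(0.1))
        self.alpha = nn.Parameter(torch.tensor(0.1))
        self.beta = nn.Parameter(torch.tensor(0.8))

    def forward(self, eps):                    # eps: (T,) residual series
        h = torch.zeros(1, self.rnn.hidden_size)
        sigma2 = eps.var().detach()
        path = []
        for t in range(1, len(eps)):
            h = self.rnn(torch.stack([eps[t - 1], sigma2]).view(1, 2), h)
            aux = nn.functional.softplus(self.g(h)).squeeze()  # RNN-driven term
            sigma2 = aux + self.omega + self.alpha * eps[t - 1] ** 2 + self.beta * sigma2
            path.append(sigma2)
        return torch.stack(path)               # conditional variance path
```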