CNN-based Realized Covariance Matrix Forecasting
- URL: http://arxiv.org/abs/2107.10602v1
- Date: Thu, 22 Jul 2021 12:02:24 GMT
- Title: CNN-based Realized Covariance Matrix Forecasting
- Authors: Yanwen Fang, Philip L. H. Yu, Yaohua Tang
- Abstract summary: We propose an end-to-end trainable model built on the CNN and Convolutional LSTM (ConvLSTM).
It focuses on local structures and spatiotemporal correlations and learns a nonlinear mapping that connects the historical realized covariance matrices to the future one.
Our empirical studies on synthetic and real-world datasets demonstrate its excellent forecasting ability compared with several advanced volatility models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is well known that modeling and forecasting realized covariance matrices
of asset returns play a crucial role in the field of finance. The availability
of high frequency intraday data enables the modeling of the realized covariance
matrices directly. However, most of the models available in the literature
depend on strong structural assumptions and they often suffer from the curse of
dimensionality. We propose an end-to-end trainable model built on the CNN and
Convolutional LSTM (ConvLSTM) which does not require to make any distributional
or structural assumption but could handle high-dimensional realized covariance
matrices consistently. The proposed model focuses on local structures and
spatiotemporal correlations. It learns a nonlinear mapping that connects the
historical realized covariance matrices to the future one. Our empirical
studies on synthetic and real-world datasets demonstrate its excellent
forecasting ability compared with several advanced volatility models.
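The abstract describes treating a sequence of realized covariance matrices as "images" and learning a nonlinear map to the next matrix with a ConvLSTM. Below is a minimal sketch of that idea in PyTorch; the class names, gate layout, hyperparameters, and the final symmetrization step are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Single ConvLSTM cell: all four gates come from one convolution
    over the concatenated input and hidden state."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)
        h = o * torch.tanh(c)
        return h, c

class CovForecaster(nn.Module):
    """Maps a window of past realized covariance matrices (as 1-channel
    images) to a one-step-ahead forecast of the next matrix."""
    def __init__(self, hid_ch=8):
        super().__init__()
        self.cell = ConvLSTMCell(1, hid_ch)
        self.head = nn.Conv2d(hid_ch, 1, kernel_size=1)

    def forward(self, seq):                       # seq: (batch, time, n, n)
        b, t, n, _ = seq.shape
        h = seq.new_zeros(b, self.cell.hid_ch, n, n)
        c = torch.zeros_like(h)
        for s in range(t):                        # roll the cell over time
            h, c = self.cell(seq[:, s:s + 1], (h, c))
        out = self.head(h).squeeze(1)             # (batch, n, n)
        return 0.5 * (out + out.transpose(1, 2))  # keep the forecast symmetric

# usage: 2 sample windows of 5 past 10x10 covariance matrices
model = CovForecaster()
y = model(torch.randn(2, 5, 10, 10))
```

Symmetrizing the output is one simple way to respect the structure of covariance matrices; the paper's model may enforce validity differently (e.g. via the loss or a matrix decomposition).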
Related papers
- Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series [14.400596021890863]
Many real-world datasets, such as healthcare, climate, and economics, are often collected as irregular time series.
We propose the Amortized Control of continuous State Space Model (ACSSM) for continuous dynamical modeling of time series.
arXiv Detail & Related papers (2024-10-08T01:27:46Z)
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z)
- Latent mixed-effect models for high-dimensional longitudinal data [6.103940626659986]
We propose LMM-VAE, a scalable, interpretable and identifiable model for longitudinal data.
We highlight theoretical connections between it and GP-based techniques, providing a unified framework for this class of methods.
arXiv Detail & Related papers (2024-09-17T09:16:38Z)
- Connectivity Shapes Implicit Regularization in Matrix Factorization Models for Matrix Completion [2.8948274245812335]
We investigate the implicit regularization of matrix factorization for solving matrix completion problems.
We empirically discover that the connectivity of observed data plays a crucial role in the implicit bias.
Our work reveals the intricate interplay between data connectivity, training dynamics, and implicit regularization in matrix factorization models.
arXiv Detail & Related papers (2024-05-22T15:12:14Z)
- Learning Car-Following Behaviors Using Bayesian Matrix Normal Mixture Regression [17.828808886958736]
Car-following (CF) behaviors are crucial for microscopic traffic simulation.
Many data-driven methods, despite their robustness, operate as "black boxes" with limited interpretability.
This work introduces a Bayesian Matrix Normal Mixture Regression (MNMR) model that simultaneously captures feature correlations and temporal dynamics inherent in CF behaviors.
arXiv Detail & Related papers (2024-04-24T17:55:47Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Learning Bijective Feature Maps for Linear ICA [73.85904548374575]
We show that existing probabilistic deep generative models (DGMs) which are tailor-made for image data, underperform on non-linear ICA tasks.
To address this, we propose a DGM which combines bijective feature maps with a linear ICA model to learn interpretable latent structures for high-dimensional data.
We create models that converge quickly, are easy to train, and achieve better unsupervised latent factor discovery than flow-based models, linear ICA, and Variational Autoencoders on images.
arXiv Detail & Related papers (2020-02-18T17:58:07Z)
- Predicting Multidimensional Data via Tensor Learning [0.0]
We develop a model that retains the intrinsic multidimensional structure of the dataset.
To estimate the model parameters, an Alternating Least Squares algorithm is developed.
The proposed model is able to outperform benchmark models present in the forecasting literature.
arXiv Detail & Related papers (2020-02-11T11:57:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.