CNN-based Realized Covariance Matrix Forecasting
- URL: http://arxiv.org/abs/2107.10602v1
- Date: Thu, 22 Jul 2021 12:02:24 GMT
- Title: CNN-based Realized Covariance Matrix Forecasting
- Authors: Yanwen Fang, Philip L. H. Yu, Yaohua Tang
- Abstract summary: We propose an end-to-end trainable model built on the CNN and Convolutional LSTM (ConvLSTM).
It focuses on local structures and correlations and learns a nonlinear mapping that connects the historical realized covariance matrices to the future one.
Our empirical studies on synthetic and real-world datasets demonstrate its excellent forecasting ability compared with several advanced volatility models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is well known that modeling and forecasting realized covariance matrices
of asset returns play a crucial role in the field of finance. The availability
of high frequency intraday data enables the modeling of the realized covariance
matrices directly. However, most of the models available in the literature
depend on strong structural assumptions and they often suffer from the curse of
dimensionality. We propose an end-to-end trainable model built on the CNN and
Convolutional LSTM (ConvLSTM) that does not require any distributional or
structural assumptions but can handle high-dimensional realized covariance
matrices consistently. The proposed model focuses on local structures and
spatiotemporal correlations. It learns a nonlinear mapping that connects the
historical realized covariance matrices to the future one. Our empirical
studies on synthetic and real-world datasets demonstrate its excellent
forecasting ability compared with several advanced volatility models.
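As a concrete illustration of the quantity being forecast, a daily realized covariance matrix is computed by summing outer products of intraday return vectors; stacking several days of such matrices yields the sequence of N x N "images" that a CNN/ConvLSTM-style model consumes. The sketch below uses synthetic returns and placeholder dimensions (all data here are assumptions for illustration, not taken from the paper):

```python
import numpy as np

# Synthetic intraday returns: T intraday intervals, N assets.
# A real application would use high-frequency data, e.g. 5-minute log returns.
rng = np.random.default_rng(0)
T, N = 78, 5  # e.g. 78 five-minute intervals in a 6.5-hour session
returns = rng.normal(scale=0.001, size=(T, N))

# Realized covariance: sum of outer products of intraday return vectors.
rc = returns.T @ returns  # shape (N, N); same as sum of np.outer(r, r)

# A realized covariance matrix is symmetric positive semi-definite,
# which is what lets it be treated as an "image" by convolutional layers.
assert np.allclose(rc, rc.T)
assert np.all(np.linalg.eigvalsh(rc) >= -1e-12)

# Stacking the last L daily matrices gives an (L, N, N) tensor that a
# ConvLSTM-style forecaster would consume as a sequence of N x N images.
history = np.stack([rc] * 10)  # placeholder: 10 identical days
print(history.shape)
```

The (L, N, N) stacking is what allows image-sequence architectures to exploit local structure in the covariance matrix without imposing a parametric model.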
Related papers
- Exploring Patterns Behind Sports [3.2838877620203935]
This paper presents a comprehensive framework for time series prediction using a hybrid model that combines ARIMA and LSTM.
The model incorporates feature engineering techniques, including embedding and PCA, to transform raw data into a lower-dimensional representation.
arXiv Detail & Related papers (2025-02-11T11:51:07Z) - A theoretical framework for overfitting in energy-based modeling [5.1337384597700995]
We investigate the impact of limited data on training pairwise energy-based models for inverse problems aimed at identifying interaction networks.
We dissect training trajectories across the eigenbasis of the coupling matrix, exploiting the independent evolution of eigenmodes.
We show that finite data corrections can be accurately modeled through random matrix theory calculations.
arXiv Detail & Related papers (2025-01-31T14:21:02Z) - Integrating Random Effects in Variational Autoencoders for Dimensionality Reduction of Correlated Data [9.990687944474738]
LMMVAE is a novel model which separates the classic VAE latent model into fixed and random parts.
It is shown to improve squared reconstruction error and negative likelihood loss significantly on unseen data.
arXiv Detail & Related papers (2024-12-22T07:20:17Z) - Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z) - Connectivity Shapes Implicit Regularization in Matrix Factorization Models for Matrix Completion [2.8948274245812335]
We investigate the implicit regularization of matrix factorization for solving matrix completion problems.
We empirically discover that the connectivity of observed data plays a crucial role in the implicit bias.
Our work reveals the intricate interplay between data connectivity, training dynamics, and implicit regularization in matrix factorization models.
arXiv Detail & Related papers (2024-05-22T15:12:14Z) - Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z) - TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Learning Bijective Feature Maps for Linear ICA [73.85904548374575]
We show that existing probabilistic deep generative models (DGMs) which are tailor-made for image data, underperform on non-linear ICA tasks.
To address this, we propose a DGM which combines bijective feature maps with a linear ICA model to learn interpretable latent structures for high-dimensional data.
We create models that converge quickly, are easy to train, and achieve better unsupervised latent factor discovery than flow-based models, linear ICA, and Variational Autoencoders on images.
arXiv Detail & Related papers (2020-02-18T17:58:07Z) - Predicting Multidimensional Data via Tensor Learning [0.0]
We develop a model that retains the intrinsic multidimensional structure of the dataset.
To estimate the model parameters, an Alternating Least Squares algorithm is developed.
The proposed model is able to outperform benchmark models present in the forecasting literature.
arXiv Detail & Related papers (2020-02-11T11:57:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.