Graphical Models for Financial Time Series and Portfolio Selection
- URL: http://arxiv.org/abs/2101.09214v1
- Date: Fri, 22 Jan 2021 16:56:54 GMT
- Title: Graphical Models for Financial Time Series and Portfolio Selection
- Authors: Ni Zhan, Yijia Sun, Aman Jakhar, He Liu
- Abstract summary: We use PCA-KMeans, autoencoders, dynamic clustering, and structural learning to construct optimal portfolios.
This work suggests that graphical models can effectively learn the temporal dependencies in time series data and prove useful in asset management.
- Score: 3.444844635251667
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We examine a variety of graphical models to construct optimal portfolios.
Graphical models such as PCA-KMeans, autoencoders, dynamic clustering, and
structural learning can capture the time-varying patterns in the covariance
matrix and allow the creation of an optimal and robust portfolio. We compared
the resulting portfolios from the different models with baseline methods. In
many cases our graphical strategies generated steadily increasing returns with
low risk and outperformed the S&P 500 index. This work suggests that graphical
models can effectively learn the temporal dependencies in time series data and
prove useful in asset management.
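As a rough illustration of the PCA-KMeans step described in the abstract (the paper does not give implementation details here), the sketch below clusters assets by their loadings on the leading principal components of historical returns and spreads weight evenly across clusters. The `returns` matrix and all parameter values are placeholder assumptions, not the authors' setup.

```python
# Minimal PCA-KMeans portfolio sketch (illustrative assumption, not the authors' code).
# Assumes `returns` is a T x N matrix of daily asset returns.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(750, 50))  # placeholder return data

# Describe each asset by its loadings on the leading principal components.
n_components, n_clusters = 5, 8
loadings = PCA(n_components=n_components).fit(returns).components_.T  # N x k

# Group assets with similar factor exposures.
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(loadings)

# A simple robust allocation: equal budget per cluster, equal weight within a cluster.
weights = np.zeros(returns.shape[1])
for c in range(n_clusters):
    members = np.flatnonzero(labels == c)
    if len(members):
        weights[members] = 1.0 / (n_clusters * len(members))

print(weights.sum())  # approximately 1.0
```

Refitting the loadings and clusters on a rolling window would be one way to obtain the time-varying behaviour the abstract refers to.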
Related papers
- RADIOv2.5: Improved Baselines for Agglomerative Vision Foundation Models [60.596005921295806]
Agglomerative models have emerged as a powerful approach to training vision foundation models.
We identify critical challenges including resolution mode shifts, teacher imbalance, idiosyncratic teacher artifacts, and an excessive number of output tokens.
We propose several novel solutions: multi-resolution training, mosaic augmentation, and improved balancing of teacher loss functions.
arXiv Detail & Related papers (2024-12-10T17:06:41Z) - Conformal Predictive Portfolio Selection [10.470114319701576]
We propose a framework for predictive portfolio selection via conformal prediction.
Our approach forecasts future portfolio returns, computes the corresponding prediction intervals, and selects the portfolio of interest based on these intervals.
We demonstrate the effectiveness of the CPPS (Conformal Predictive Portfolio Selection) framework by applying it to an AR model and validate its performance through empirical studies.
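The summary gives the CPPS recipe (forecast, interval, select) but not its implementation; the sketch below pairs a one-step AR(1) forecast with a split-conformal interval and picks the candidate portfolio with the highest interval lower bound. The candidate return series and the selection rule are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative split-conformal interval around a one-step AR(1) forecast
# (assumed setup, not the paper's CPPS implementation).
import numpy as np

def ar1_forecast_interval(r, alpha=0.1):
    """Point forecast and split-conformal interval for the next return."""
    half = len(r) // 2
    train, calib = r[:half], r[half:]
    # Fit AR(1): r_t ~ a + b * r_{t-1} on the first half.
    b, a = np.polyfit(train[:-1], train[1:], 1)
    # Conformity scores: absolute one-step residuals on the second half.
    scores = np.abs(calib[1:] - (a + b * calib[:-1]))
    # Finite-sample conformal quantile.
    level = min(1.0, np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores))
    q = np.quantile(scores, level)
    point = a + b * r[-1]
    return point, (point - q, point + q)

rng = np.random.default_rng(1)
candidates = {f"portfolio_{i}": rng.normal(0.0004, 0.01, size=500) for i in range(3)}

# A conservative selection rule: pick the candidate with the highest interval lower bound.
best = max(candidates, key=lambda k: ar1_forecast_interval(candidates[k])[1][0])
print(best)
```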
arXiv Detail & Related papers (2024-10-19T15:42:49Z) - Graph Neural Alchemist: An innovative fully modular architecture for time series-to-graph classification [0.0]
This paper introduces a novel Graph Neural Network (GNN) architecture for time series classification.
By representing time series as visibility graphs, it is possible to encode the temporal dependencies inherent to time series data.
Our architecture is fully modular, enabling flexible experimentation with different models.
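As a rough illustration of the time-series-to-graph step, the sketch below builds the adjacency matrix of a natural visibility graph using the straightforward O(n^2) visibility test; the paper's exact construction and the GNN classifier on top of it are not shown.

```python
# Minimal natural visibility graph construction (assumed O(n^2) version;
# the paper's pipeline and GNN model are not reproduced here).
import numpy as np

def visibility_graph(y):
    """Return the adjacency matrix of the natural visibility graph of series y."""
    n = len(y)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            # (i, j) are connected if every intermediate point lies strictly
            # below the straight line joining (i, y_i) and (j, y_j).
            ks = np.arange(i + 1, j)
            line = y[j] + (y[i] - y[j]) * (j - ks) / (j - i)
            if np.all(y[ks] < line):
                adj[i, j] = adj[j, i] = 1
    return adj

series = np.array([1.0, 3.0, 2.0, 4.0, 1.5])
print(visibility_graph(series))
```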
arXiv Detail & Related papers (2024-10-12T00:03:40Z) - Hedge Fund Portfolio Construction Using PolyModel Theory and iTransformer [1.4061979259370274]
We implement the PolyModel theory for constructing a hedge fund portfolio.
We create quantitative measures such as Long-term Alpha, Long-term Ratio, and SVaR.
We also employ the latest deep learning techniques (iTransformer) to capture the upward trend.
arXiv Detail & Related papers (2024-08-06T17:55:58Z) - Large-scale Time-Varying Portfolio Optimisation using Graph Attention Networks [4.2056926734482065]
This study uses 30 years of data on mid-cap firms, creating graphs of firms using distance correlation and the Triangulated Maximally Filtered Graph approach.
We show that the portfolio produced by the GAT-based model outperforms all benchmarks and is consistently superior to other strategies over a long period.
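A rough sketch of the graph-construction idea follows: pairwise distance correlations between asset return series are computed from scratch, and a simple threshold filter stands in for the Triangulated Maximally Filtered Graph step (the study itself uses TMFG and a GAT model, neither of which is reproduced here; the data are placeholders).

```python
# Pairwise distance-correlation graph between assets, with a crude threshold filter
# standing in for TMFG (assumed data, not the study's pipeline).
import numpy as np

def distance_correlation(x, y):
    """Biased sample distance correlation between two 1-D series."""
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(axis=0) - a.mean(axis=1, keepdims=True) + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1, keepdims=True) + b.mean()
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(dcov2 / denom) if denom > 0 else 0.0

rng = np.random.default_rng(4)
returns = rng.normal(0.0, 0.01, size=(500, 10))  # T x N placeholder returns

n = returns.shape[1]
dcor = np.eye(n)
for i in range(n):
    for j in range(i + 1, n):
        dcor[i, j] = dcor[j, i] = distance_correlation(returns[:, i], returns[:, j])

# Keep only the strongest relationships as edges.
adjacency = (dcor > np.quantile(dcor[np.triu_indices(n, k=1)], 0.8)).astype(int)
np.fill_diagonal(adjacency, 0)
print(adjacency.sum() // 2, "edges")
```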
arXiv Detail & Related papers (2024-07-22T10:50:47Z) - Sequential Modeling Enables Scalable Learning for Large Vision Models [120.91839619284431]
We introduce a novel sequential modeling approach which enables learning a Large Vision Model (LVM) without making use of any linguistic data.
We define a common format, "visual sentences", in which we can represent raw images and videos as well as annotated data sources.
arXiv Detail & Related papers (2023-12-01T18:59:57Z) - Deep incremental learning models for financial temporal tabular datasets with distribution shifts [0.9790236766474201]
The framework uses a simple building block (decision trees) to build self-similar models of any required complexity.
We demonstrate our scheme using XGBoost models trained on the Numerai dataset and show that a two-layer deep ensemble of XGBoost models over different model snapshots delivers high-quality predictions.
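A hypothetical sketch of such a two-layer ensemble is shown below: first-layer XGBoost models are fitted on expanding-window snapshots of the data, and a small second-layer XGBoost model is stacked on their held-out predictions. The data, snapshot scheme, and hyperparameters are placeholders rather than the paper's Numerai setup.

```python
# Hypothetical two-layer ensemble of XGBoost models over time-based snapshots
# (illustrative only; the paper's pipeline and snapshot scheme differ).
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(3000, 20))
y = 0.3 * X[:, 0] + rng.normal(scale=0.1, size=3000)

# Layer 1: one model per expanding-window "snapshot" of the training data.
snapshot_ends = [1000, 1500, 2000]
layer1 = [XGBRegressor(n_estimators=200, max_depth=4).fit(X[:t], y[:t]) for t in snapshot_ends]

# Layer 2: a small model stacked on the snapshot predictions (trained on held-out rows).
hold = slice(2000, 2500)
meta_X = np.column_stack([m.predict(X[hold]) for m in layer1])
layer2 = XGBRegressor(n_estimators=100, max_depth=2).fit(meta_X, y[hold])

# Inference on the remaining rows.
test = slice(2500, 3000)
pred = layer2.predict(np.column_stack([m.predict(X[test]) for m in layer1]))
print(pred[:5])
```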
arXiv Detail & Related papers (2023-03-14T14:10:37Z) - Learning Gaussian Graphical Models with Latent Confounders [74.72998362041088]
We compare and contrast two strategies for inference in graphical models with latent confounders.
While these two approaches have similar goals, they are motivated by different assumptions about confounding.
We propose a new method, which combines the strengths of these two approaches.
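For context, the sketch below fits a plain sparse Gaussian graphical model with scikit-learn's GraphicalLasso, i.e. the no-confounder baseline; the latent-confounder estimators compared in the paper are not reproduced here, and the simulated data are an assumption.

```python
# Baseline sparse Gaussian graphical model via the graphical lasso (scikit-learn's
# GraphicalLasso); the paper's latent-confounder methods are not sketched here.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)
cov = np.eye(6) + 0.4 * (np.eye(6, k=1) + np.eye(6, k=-1))  # chain-structured truth
X = rng.multivariate_normal(np.zeros(6), cov, size=400)

model = GraphicalLasso(alpha=0.05).fit(X)

# Nonzero off-diagonal entries of the estimated precision matrix correspond to
# conditional-dependence edges among the observed variables.
edges = np.argwhere(np.triu(np.abs(model.precision_) > 1e-4, k=1))
print(edges)
```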
arXiv Detail & Related papers (2021-05-14T00:53:03Z) - Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability.
arXiv Detail & Related papers (2020-05-16T19:18:10Z)