TPLVM: Portfolio Construction by Student's $t$-process Latent Variable Model
- URL: http://arxiv.org/abs/2002.06243v1
- Date: Wed, 29 Jan 2020 02:02:02 GMT
- Title: TPLVM: Portfolio Construction by Student's $t$-process Latent Variable Model
- Authors: Yusuke Uchiyama, Kei Nakagawa
- Abstract summary: We propose the Student's $t$-process latent variable model (TPLVM) to describe non-Gaussian fluctuations of financial time series by lower-dimensional latent variables.
By comparing minimum-variance portfolios built on each model, we confirm that the proposed portfolio outperforms the one based on the existing Gaussian process latent variable model.
- Score: 3.5408022972081694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimal asset allocation is a key topic in modern finance theory. To realize
optimal asset allocation reflecting an investor's risk aversion, various portfolio
construction methods have been proposed. Recently, applications of machine
learning have been growing rapidly in finance. In this article, we
propose the Student's $t$-process latent variable model (TPLVM) to describe
non-Gaussian fluctuations of financial time series by lower-dimensional latent
variables. Subsequently, we apply the TPLVM to the minimum-variance portfolio as an
alternative to existing nonlinear factor models. To test the performance of the
proposed portfolio, we construct minimum-variance portfolios of global stock
market indices based on either the TPLVM or the Gaussian process latent variable model. By
comparing these portfolios, we confirm that the proposed portfolio outperforms that
of the existing Gaussian process latent variable model.
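Whatever model supplies the covariance estimate (the TPLVM above is one such source), the fully-invested minimum-variance portfolio has the closed form $w = \Sigma^{-1}\mathbf{1} / (\mathbf{1}^\top \Sigma^{-1}\mathbf{1})$. A minimal NumPy sketch, with a hand-made covariance matrix standing in for a model-based estimate:

```python
import numpy as np

def min_variance_weights(cov: np.ndarray) -> np.ndarray:
    """Closed-form minimum-variance weights: w = S^{-1} 1 / (1' S^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)  # S^{-1} 1 without forming the inverse
    return w / w.sum()

# Toy covariance for three indices (illustrative numbers, not from the paper).
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
```

By construction the weights sum to one, and the resulting portfolio variance $w^\top \Sigma w$ is no larger than that of any other fully-invested portfolio, e.g. equal weighting.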
Related papers
- Conformal Predictive Portfolio Selection [10.470114319701576]
We propose a framework for predictive portfolio selection using conformal inference, called Conformal Predictive Portfolio Selection (CPPS).
Our approach predicts future portfolio returns, computes corresponding prediction intervals, and selects the desirable portfolio based on these intervals.
We demonstrate the effectiveness of our CPPS framework using an AR model and validate its performance through empirical studies.
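The prediction-interval step of such a framework can be illustrated with split conformal inference. The function below is an illustrative sketch, not the authors' implementation: it assumes held-out absolute residuals from any point forecaster (e.g. an AR model) and returns a finite-sample-valid interval around a new point prediction.

```python
import numpy as np

def split_conformal_interval(residuals: np.ndarray, point_pred: float,
                             alpha: float = 0.1) -> tuple[float, float]:
    """Split-conformal interval from calibration-set absolute residuals.

    The interval is point_pred +/- the (1 - alpha) empirical quantile of
    |y - yhat|, with the standard finite-sample correction (n + 1 in the
    numerator), capped at the maximum residual.
    """
    n = len(residuals)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = float(np.quantile(np.abs(residuals), level))
    return point_pred - q, point_pred + q

# Illustrative calibration residuals and a zero point forecast.
calib_residuals = np.array([1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
lo, hi = split_conformal_interval(calib_residuals, point_pred=0.0, alpha=0.1)
```

A portfolio-selection layer would then compare such intervals across candidate portfolios rather than comparing point forecasts alone.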
arXiv Detail & Related papers (2024-10-19T15:42:49Z) - VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment [66.80143024475635]
We propose VinePPO, a straightforward approach to compute unbiased Monte Carlo-based estimates.
We show that VinePPO consistently outperforms PPO and other RL-free baselines across MATH and GSM8K datasets.
arXiv Detail & Related papers (2024-10-02T15:49:30Z) - Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models [79.41139393080736]
Large language models (LLMs) have rapidly advanced and demonstrated impressive capabilities.
We propose Reference Trustable Decoding (RTD), a paradigm that allows models to quickly adapt to new tasks without fine-tuning.
arXiv Detail & Related papers (2024-09-30T10:48:20Z) - AAPM: Large Language Model Agent-based Asset Pricing Models [4.326886488307076]
We propose a novel asset pricing approach, which fuses qualitative discretionary investment analysis from LLM agents and quantitative manual financial economic factors.
The experimental results show that our approach outperforms machine learning-based asset pricing baselines in portfolio optimization and asset pricing errors.
arXiv Detail & Related papers (2024-09-25T18:27:35Z) - Hedge Fund Portfolio Construction Using PolyModel Theory and iTransformer [1.4061979259370274]
We implement the PolyModel theory for constructing a hedge fund portfolio.
We create quantitative measures such as Long-term Alpha, Long-term Ratio, and SVaR.
We also employ the latest deep learning techniques (iTransformer) to capture the upward trend.
arXiv Detail & Related papers (2024-08-06T17:55:58Z) - Graph-Structured Speculative Decoding [52.94367724136063]
Speculative decoding has emerged as a promising technique to accelerate the inference of Large Language Models.
We introduce an innovative approach utilizing a directed acyclic graph (DAG) to manage the drafted hypotheses.
We observe a remarkable speedup of 1.73$\times$ to 1.96$\times$, significantly surpassing standard speculative decoding.
arXiv Detail & Related papers (2024-07-23T06:21:24Z) - Deep Reinforcement Learning and Mean-Variance Strategies for Responsible Portfolio Optimization [49.396692286192206]
We study the use of deep reinforcement learning for responsible portfolio optimization by incorporating ESG states and objectives.
Our results show that deep reinforcement learning policies can provide competitive performance against mean-variance approaches for responsible portfolio allocation.
arXiv Detail & Related papers (2024-03-25T12:04:03Z) - Diffusion Variational Autoencoder for Tackling Stochasticity in Multi-Step Regression Stock Price Prediction [54.21695754082441]
Multi-step stock price prediction over a long-term horizon is crucial for forecasting its volatility.
Current solutions to multi-step stock price prediction are mostly designed for single-step, classification-based predictions.
We combine a deep hierarchical variational-autoencoder (VAE) and diffusion probabilistic techniques to do seq2seq stock prediction.
Our model is shown to outperform state-of-the-art solutions in terms of its prediction accuracy and variance.
arXiv Detail & Related papers (2023-08-18T16:21:15Z) - Sparse Index Tracking: Simultaneous Asset Selection and Capital Allocation via $\ell_0$-Constrained Portfolio [7.5684339230894135]
A sparse portfolio is preferable to a full portfolio in terms of reducing transaction costs and avoiding illiquid assets.
We propose a new problem formulation of sparse index tracking using an $\ell_p$-norm constraint.
Our approach offers a choice between constraints on portfolio and turnover sparsity, further reducing transaction costs by limiting asset updates at each rebalancing interval.
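Exactly enforcing a cardinality ($\ell_0$) constraint is combinatorial, so practical schemes typically alternate optimization with a sparsifying projection. The top-$k$ projection below is a hypothetical illustration of the constraint itself, not the paper's algorithm:

```python
import numpy as np

def project_topk_simplex(w: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest weights, zero the rest, renormalize to sum to 1.

    A simple heuristic for a cardinality (l0) constraint; exact l0-constrained
    index tracking is NP-hard, so solvers embed projections like this in an
    iterative scheme rather than using them once.
    """
    idx = np.argsort(w)[-k:]      # indices of the k largest weights
    sparse = np.zeros_like(w)
    sparse[idx] = w[idx]
    return sparse / sparse.sum()

w = np.array([0.05, 0.40, 0.10, 0.30, 0.15])
w_sparse = project_topk_simplex(w, k=2)  # keeps the 0.40 and 0.30 entries
```

Turnover sparsity, the other constraint mentioned above, would instead limit how many weights may *change* between rebalancing dates.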
arXiv Detail & Related papers (2023-07-22T04:47:30Z) - Model-Augmented Q-learning [112.86795579978802]
We propose a MFRL framework that is augmented with the components of model-based RL.
Specifically, we propose to estimate not only the $Q$-values but also both the transition and the reward with a shared network.
We show that the proposed scheme, called Model-augmented $Q$-learning (MQL), obtains a policy-invariant solution which is identical to the solution obtained by learning with true reward.
arXiv Detail & Related papers (2021-02-07T17:56:50Z) - Deep Learning for Portfolio Optimization [5.833272638548154]
Instead of selecting individual assets, we trade Exchange-Traded Funds (ETFs) of market indices to form a portfolio.
We compare our method with a wide range of algorithms with results showing that our model obtains the best performance over the testing period.
arXiv Detail & Related papers (2020-05-27T21:28:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.