Robo-Advising: Enhancing Investment with Inverse Optimization and Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2105.09264v1
- Date: Wed, 19 May 2021 17:20:03 GMT
- Title: Robo-Advising: Enhancing Investment with Inverse Optimization and Deep
Reinforcement Learning
- Authors: Haoran Wang, Shi Yu
- Abstract summary: We propose a full-cycle data-driven investment robo-advising framework, consisting of two ML agents.
The proposed investment pipeline is applied to real market data from April 1, 2016 to February 1, 2021 and is shown to consistently outperform the S&P 500 benchmark portfolio.
- Score: 13.23731449431572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning (ML) has been embraced as a powerful tool by the financial
industry, with notable applications spreading in various domains including
investment management. In this work, we propose a full-cycle data-driven
investment robo-advising framework, consisting of two ML agents. The first
agent, an inverse portfolio optimization agent, infers an investor's risk
preference and expected return directly from historical allocation data using
online inverse optimization. The second agent, a deep reinforcement learning
(RL) agent, aggregates the inferred sequence of expected returns to formulate a
new multi-period mean-variance portfolio optimization problem that can be
solved using deep RL approaches. The proposed investment pipeline is applied to
real market data from April 1, 2016 to February 1, 2021 and is shown to
consistently outperform the S&P 500 benchmark portfolio, which represents the
aggregate market optimal allocation. The outperformance may be attributed to
the multi-period planning (versus single-period planning) and the data-driven
RL approach (versus the classical estimation approach).
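The first agent's idea can be illustrated in its simplest special case. For the unconstrained single-period mean-variance problem, the optimal allocation is w* = (1/δ) Σ⁻¹μ, so an observed allocation, together with estimates of the return vector μ and covariance Σ, pins down the risk-aversion parameter δ. The sketch below is a minimal, hypothetical illustration of that inversion (the paper itself uses online inverse optimization over a sequence of historical allocations, which this does not reproduce); all function names and the toy numbers are assumptions for the example.

```python
import numpy as np

def optimal_weights(mu, sigma, delta):
    # Unconstrained single-period mean-variance optimum: w* = (1/delta) Sigma^{-1} mu
    return np.linalg.solve(sigma, mu) / delta

def infer_risk_aversion(w_obs, mu, sigma):
    # Inverse step: least-squares fit of delta in  delta * (Sigma w_obs) = mu
    sw = sigma @ w_obs
    return float(sw @ mu) / float(sw @ sw)

# Toy inputs: estimated expected returns and covariance for three assets
mu = np.array([0.08, 0.05, 0.03])
sigma = np.array([[0.100, 0.020, 0.010],
                  [0.020, 0.080, 0.015],
                  [0.010, 0.015, 0.050]])

# Forward problem with a known risk aversion, then invert it back
true_delta = 3.0
w = optimal_weights(mu, sigma, true_delta)
estimated_delta = infer_risk_aversion(w, mu, sigma)
```

Because the toy allocation is generated by the same unconstrained model, the inversion recovers δ exactly; with real allocation data (constraints, estimation noise), the fit is only approximate, which is why the paper treats the inference as an online learning problem.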
Related papers
- Optimizing Portfolio with Two-Sided Transactions and Lending: A Reinforcement Learning Framework [0.0]
This study presents a Reinforcement Learning-based portfolio management model tailored for high-risk environments.
We implement the model using a Soft Actor-Critic (SAC) agent built on a Convolutional Neural Network with Multi-Head Attention.
Tested over two 16-month periods of varying market volatility, the model significantly outperformed benchmarks.
arXiv Detail & Related papers (2024-08-09T23:36:58Z) - Deep Reinforcement Learning and Mean-Variance Strategies for Responsible Portfolio Optimization [49.396692286192206]
We study the use of deep reinforcement learning for responsible portfolio optimization by incorporating ESG states and objectives.
Our results show that deep reinforcement learning policies can provide competitive performance against mean-variance approaches for responsible portfolio allocation.
arXiv Detail & Related papers (2024-03-25T12:04:03Z) - Developing A Multi-Agent and Self-Adaptive Framework with Deep Reinforcement Learning for Dynamic Portfolio Risk Management [1.2016264781280588]
A multi-agent reinforcement learning (RL) approach is proposed to balance the trade-off between the overall portfolio returns and their potential risks.
The obtained empirical results clearly reveal the potential strengths of our proposed MASA framework.
arXiv Detail & Related papers (2024-02-01T11:31:26Z) - HireVAE: An Online and Adaptive Factor Model Based on Hierarchical and
Regime-Switch VAE [113.47287249524008]
It is still an open question to build a factor model that can conduct stock prediction in an online and adaptive setting.
We propose the first deep learning based online and adaptive factor model, HireVAE, at the core of which is a hierarchical latent space that embeds the relationship between the market situation and stock-wise latent factors.
Across four commonly used real stock market benchmarks, the proposed HireVAE demonstrates superior performance in terms of active returns over previous methods.
arXiv Detail & Related papers (2023-06-05T12:58:13Z) - Dynamic Resource Allocation for Metaverse Applications with Deep
Reinforcement Learning [64.75603723249837]
This work proposes a novel framework to dynamically manage and allocate different types of resources for Metaverse applications.
We first propose an effective solution to divide applications into groups, namely MetaInstances, where common functions can be shared among applications.
Then, to capture the real-time, dynamic, and uncertain characteristics of request arrival and application departure processes, we develop a semi-Markov decision process-based framework.
arXiv Detail & Related papers (2023-02-27T00:30:01Z) - A Comparative Study of Hierarchical Risk Parity Portfolio and Eigen
Portfolio on the NIFTY 50 Stocks [1.5773159234875098]
This paper presents a systematic approach to portfolio optimization using two approaches, the hierarchical risk parity algorithm and the Eigen portfolio on seven sectors of the Indian stock market.
The backtesting results of the portfolios indicate that the performance of the HRP portfolio is superior to that of its counterpart on both training and test data for the majority of the sectors studied.
arXiv Detail & Related papers (2022-10-03T14:51:24Z) - Portfolio Optimization on NIFTY Thematic Sector Stocks Using an LSTM
Model [0.0]
This paper presents an algorithmic approach for designing optimum risk and eigen portfolios for five thematic sectors of the NSE of India.
The prices of the stocks are extracted from the web from Jan 1, 2016, to Dec 31, 2020.
An LSTM model is designed for predicting future stock prices.
Seven months after the portfolios were formed, on Aug 3, 2021, the actual returns of the portfolios are compared with the LSTM-predicted returns.
arXiv Detail & Related papers (2022-02-06T07:41:20Z) - FinRL-Meta: A Universe of Near-Real Market Environments for Data-Driven
Deep Reinforcement Learning in Quantitative Finance [58.77314662664463]
FinRL-Meta builds a universe of market environments for data-driven financial reinforcement learning.
First, FinRL-Meta separates financial data processing from the design pipeline of DRL-based strategy.
Second, FinRL-Meta provides hundreds of market environments for various trading tasks.
arXiv Detail & Related papers (2021-12-13T16:03:37Z) - Deep Stock Predictions [58.720142291102135]
We consider the design of a trading strategy that performs portfolio optimization using Long Short Term Memory (LSTM) neural networks.
We then customize the loss function used to train the LSTM to increase the profit earned.
We find the LSTM model with the customized loss function to have an improved performance in the training bot over a regressive baseline such as ARIMA.
arXiv Detail & Related papers (2020-06-08T23:37:47Z) - Deep Learning for Portfolio Optimization [5.833272638548154]
Instead of selecting individual assets, we trade Exchange-Traded Funds (ETFs) of market indices to form a portfolio.
We compare our method with a wide range of algorithms with results showing that our model obtains the best performance over the testing period.
arXiv Detail & Related papers (2020-05-27T21:28:43Z) - Reinforcement-Learning based Portfolio Management with Augmented Asset
Movement Prediction States [71.54651874063865]
Portfolio management (PM) aims to achieve investment goals such as maximal profits or minimal risks.
In this paper, we propose SARL, a novel State-Augmented RL framework for PM.
Our framework aims to address two unique challenges in financial PM: (1) heterogeneous data -- the collected information for each asset is usually diverse, noisy and imbalanced (e.g., news articles); and (2) environment uncertainty -- the financial market is versatile and non-stationary.
arXiv Detail & Related papers (2020-02-09T08:10:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.