MAPS: Multi-agent Reinforcement Learning-based Portfolio Management System
- URL: http://arxiv.org/abs/2007.05402v1
- Date: Fri, 10 Jul 2020 14:08:12 GMT
- Title: MAPS: Multi-agent Reinforcement Learning-based Portfolio Management System
- Authors: Jinho Lee, Raehyun Kim, Seok-Won Yi, Jaewoo Kang
- Abstract summary: We propose the Multi-Agent reinforcement learning-based Portfolio management System (MAPS)
MAPS is a cooperative system in which each agent is an independent "investor" creating its own portfolio.
Experiment results with 12 years of US market data show that MAPS outperforms most of the baselines in terms of Sharpe ratio.
- Score: 23.657021288146158
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating an investment strategy using advanced deep learning methods in
stock markets has recently been a topic of interest. Most existing deep
learning methods focus on proposing an optimal model or network architecture by
maximizing return. However, these models often fail to consider and adapt to
the continuously changing market conditions. In this paper, we propose the
Multi-Agent reinforcement learning-based Portfolio management System (MAPS).
MAPS is a cooperative system in which each agent is an independent "investor"
creating its own portfolio. In the training procedure, each agent is guided to
act as diversely as possible while maximizing its own return with a carefully
designed loss function. As a result, MAPS as a system ends up with a
diversified portfolio. Experiment results with 12 years of US market data show
that MAPS outperforms most of the baselines in terms of Sharpe ratio.
Furthermore, our results show that adding more agents to our system would allow
us to get a higher Sharpe ratio by lowering risk with a more diversified
portfolio.
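The abstract describes two quantities worth making concrete: the Sharpe ratio used for evaluation, and a loss that rewards each agent's return while penalizing agents for acting alike. The sketch below is illustrative only; the function names and the cosine-similarity form of the diversity penalty are assumptions, not the authors' implementation.

```python
import statistics

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualized Sharpe ratio of a series of per-period portfolio returns."""
    excess = [r - risk_free_rate / periods_per_year for r in returns]
    return (statistics.mean(excess) / statistics.stdev(excess)) * periods_per_year ** 0.5

def diversity_penalty(agent_weights):
    """Mean pairwise cosine similarity between agents' portfolio weight vectors.

    Adding this term to each agent's loss pushes the agents toward
    dissimilar (more diverse) allocations, in the spirit of MAPS.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb)

    n = len(agent_weights)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cos(agent_weights[i], agent_weights[j]) for i, j in pairs) / len(pairs)
```

Under this sketch, each agent would minimize something like `-portfolio_return + lam * diversity_penalty(all_agents)`, trading off its own return against similarity to the other agents.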
Related papers
- Markowitz Meets Bellman: Knowledge-distilled Reinforcement Learning for Portfolio Management [7.199922073535674]
This paper introduces a hybrid approach combining Markowitz's portfolio theory with reinforcement learning.
In particular, our proposed method, called KDD (Knowledge Distillation DDPG), consists of two training stages: a supervised learning stage and a reinforcement learning stage.
A comparative analysis against standard financial models and AI frameworks, using metrics like returns, the Sharpe ratio, and nine evaluation indices, reveals our model's superiority.
arXiv Detail & Related papers (2024-05-08T22:54:04Z) - Combining Transformer based Deep Reinforcement Learning with Black-Litterman Model for Portfolio Optimization [0.0]
As a model-free algorithm, deep reinforcement learning (DRL) agent learns and makes decisions by interacting with the environment in an unsupervised way.
We propose a hybrid portfolio optimization model combining the DRL agent and the Black-Litterman (BL) model.
Our DRL agent significantly outperforms various comparison portfolio choice strategies and alternative DRL frameworks by at least 42% in terms of accumulated return.
arXiv Detail & Related papers (2024-02-23T16:01:37Z) - Reinforcement Learning with Maskable Stock Representation for Portfolio Management in Customizable Stock Pools [34.97636568457075]
Portfolio management (PM) is a fundamental financial trading task, which explores the optimal periodical reallocation of capitals into different stocks to pursue long-term profits.
Existing reinforcement learning (RL) methods require retraining RL agents even after a tiny change to the stock pool, which leads to high computational cost and unstable performance.
We propose EarnMore to handle PM with customizable stock pools (CSPs) through one-shot training in a global stock pool.
arXiv Detail & Related papers (2023-11-17T09:16:59Z) - Cryptocurrency Portfolio Optimization by Neural Networks [81.20955733184398]
This paper proposes an effective algorithm based on neural networks to take advantage of these investment products.
A deep neural network, which outputs the allocation weight of each asset at a time interval, is trained to maximize the Sharpe ratio.
A novel loss term is proposed to regulate the network's bias towards a specific asset, thus enforcing the network to learn an allocation strategy that is close to a minimum variance strategy.
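A simple way such a bias-regulating loss term could look is a Herfindahl-style concentration measure on the allocation weights; this is an illustrative assumption, not the paper's actual formulation.

```python
def concentration_penalty(weights):
    """Herfindahl-style concentration of portfolio weights.

    Equals 1/N for equal weights and 1.0 when all capital sits in a
    single asset, so adding it to the training loss discourages the
    network from biasing toward one asset and nudges allocations
    toward diversified, minimum variance-like weights.
    """
    return sum(w * w for w in weights)
```

In training, this term would be weighted against the Sharpe-ratio objective, e.g. `loss = -sharpe + beta * concentration_penalty(weights)`.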
arXiv Detail & Related papers (2023-10-02T12:33:28Z) - Learning From Good Trajectories in Offline Multi-Agent Reinforcement Learning [98.07495732562654]
Offline multi-agent reinforcement learning (MARL) aims to learn effective multi-agent policies from pre-collected datasets.
However, an agent learned by offline MARL often inherits random behavior present in the dataset, jeopardizing the performance of the entire team.
We propose a novel framework called Shared Individual Trajectories (SIT) to address this problem.
arXiv Detail & Related papers (2022-11-28T18:11:26Z) - Factor Investing with a Deep Multi-Factor Model [123.52358449455231]
We develop a novel deep multi-factor model that adopts industry neutralization and market neutralization modules with clear financial insights.
Tests on real-world stock market data demonstrate the effectiveness of our deep multi-factor model.
arXiv Detail & Related papers (2022-10-22T14:47:11Z) - Softmax with Regularization: Better Value Estimation in Multi-Agent Reinforcement Learning [72.28520951105207]
Overestimation in $Q$-learning is an important problem that has been extensively studied in single-agent reinforcement learning.
We propose a novel regularization-based update scheme that penalizes large joint action-values deviating from a baseline.
We show that our method provides a consistent performance improvement on a set of challenging StarCraft II micromanagement tasks.
arXiv Detail & Related papers (2021-03-22T14:18:39Z) - A Modularized and Scalable Multi-Agent Reinforcement Learning-based System for Financial Portfolio Management [7.6146285961466]
Financial Portfolio Management is one of the most applicable problems in Reinforcement Learning (RL).
MSPM is a novel Multi-agent Reinforcement learning-based system with a modularized and scalable architecture for portfolio management.
Experiments on 8-year U.S. stock markets data prove the effectiveness of MSPM in profits accumulation by its outperformance over existing benchmarks.
arXiv Detail & Related papers (2021-02-06T04:04:57Z) - Deep reinforcement learning for portfolio management based on the empirical study of Chinese stock market [3.5952664589125916]
This paper aims to verify that current cutting-edge technology, deep reinforcement learning, can be applied to portfolio management.
In experiments, we use our model in several randomly selected portfolios which include CSI300 that represents the market's rate of return and the randomly selected constituents of CSI500.
arXiv Detail & Related papers (2020-12-26T16:25:20Z) - MAGNet: Multi-agent Graph Network for Deep Multi-agent Reinforcement Learning [70.540936204654]
We propose a novel approach, called MAGnet, to multi-agent reinforcement learning.
We show that it significantly outperforms state-of-the-art MARL solutions.
arXiv Detail & Related papers (2020-12-17T17:19:36Z) - Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge? [100.48692829396778]
Independent PPO (IPPO) is a form of independent learning in which each agent simply estimates its local value function.
IPPO's strong performance may be due to its robustness to some forms of environment non-stationarity.
arXiv Detail & Related papers (2020-11-18T20:29:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.