A Novel Experts Advice Aggregation Framework Using Deep Reinforcement
Learning for Portfolio Management
- URL: http://arxiv.org/abs/2212.14477v1
- Date: Thu, 29 Dec 2022 22:48:26 GMT
- Title: A Novel Experts Advice Aggregation Framework Using Deep Reinforcement
Learning for Portfolio Management
- Authors: MohammadAmin Fazli, Mahdi Lashkari, Hamed Taherkhani, Jafar Habibi
- Abstract summary: We propose a new method that feeds experts' signals and historical price data into our reinforcement learning framework.
Our framework could gain 90 percent of the profit earned by the best expert.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Solving portfolio management problems using deep reinforcement learning has
been getting much attention in finance for a few years. We have proposed a new
method that feeds experts' signals and historical price data into our
reinforcement learning framework. Although experts' signals have been used in
previous works in finance, to the best of our knowledge this is the first time
they are combined with deep RL to solve the financial portfolio management
problem. Our proposed framework consists of a convolutional network for
aggregating signals, another convolutional network for historical price data,
and a vanilla network. We used the Proximal Policy Optimization algorithm as
the agent to process the rewards and take actions in the environment. The
results suggest that, on average, our framework could gain 90 percent of the
profit earned by the best expert.
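Below is a minimal sketch, in PyTorch, of the kind of two-stream architecture the abstract describes: one convolutional stream over expert signals, a second over historical prices, and a plain fully connected ("vanilla") head producing portfolio weights that a PPO agent could use as its policy backbone. All tensor shapes, layer sizes, and the softmax weight output are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: a signal-CNN stream, a price-CNN stream, and a
# fully connected head, in the spirit of the abstract. Shapes and layer
# sizes are assumptions, not the authors' configuration.
import torch
import torch.nn as nn


class TwoStreamPolicy(nn.Module):
    def __init__(self, n_assets: int, n_experts: int, window: int):
        super().__init__()
        # Stream 1: aggregate expert signals (input: [batch, n_experts, n_assets]).
        self.signal_net = nn.Sequential(
            nn.Conv1d(n_experts, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        # Stream 2: historical prices (input: [batch, 3 price features, n_assets, window]).
        self.price_net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=(1, 3), padding=(0, 1)),
            nn.ReLU(),
            nn.Flatten(),
        )
        # "Vanilla" fully connected head producing portfolio weights.
        self.head = nn.Sequential(
            nn.Linear(16 * n_assets + 16 * n_assets * window, 64),
            nn.ReLU(),
            nn.Linear(64, n_assets),
            nn.Softmax(dim=-1),
        )

    def forward(self, signals, prices):
        x = torch.cat([self.signal_net(signals), self.price_net(prices)], dim=-1)
        return self.head(x)


if __name__ == "__main__":
    policy = TwoStreamPolicy(n_assets=8, n_experts=5, window=30)
    signals = torch.randn(4, 5, 8)       # batch of expert signals
    prices = torch.randn(4, 3, 8, 30)    # price tensor with 3 assumed features
    weights = policy(signals, prices)
    print(weights.shape, weights.sum(dim=-1))  # (4, 8), each row sums to 1
```

Such a module would serve as the actor network inside a standard PPO training loop; the reward shaping, observation preprocessing, and trading environment are left out here because the abstract does not specify them.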
Related papers
- A Deep Reinforcement Learning Framework For Financial Portfolio Management [3.186092314772714]
The portfolio management problem is solved using deep learning techniques.
Three different instances are used to realize this framework, namely a Convolutional Neural Network (CNN), a basic Recurrent Neural Network (RNN), and a Long Short-Term Memory (LSTM) network.
We have successfully replicated the original paper, which achieves superior returns, but the approach does not perform as well when applied to the stock market.
arXiv Detail & Related papers (2024-09-03T20:11:04Z) - Markowitz Meets Bellman: Knowledge-distilled Reinforcement Learning for Portfolio Management [7.199922073535674]
This paper introduces a hybrid approach combining Markowitz's portfolio theory with reinforcement learning.
In particular, our proposed method, called KDD (Knowledge Distillation DDPG), consists of two training stages: a supervised learning stage and a reinforcement learning stage.
A comparative analysis against standard financial models and AI frameworks, using metrics like returns, the Sharpe ratio, and nine evaluation indices, reveals our model's superiority.
arXiv Detail & Related papers (2024-05-08T22:54:04Z) - Neural Active Learning Beyond Bandits [69.99592173038903]
We study both stream-based and pool-based active learning with neural network approximations.
We propose two algorithms based on the newly designed exploitation and exploration neural networks for stream-based and pool-based active learning.
arXiv Detail & Related papers (2024-04-18T21:52:14Z) - Cryptocurrency Portfolio Optimization by Neural Networks [81.20955733184398]
This paper proposes an effective algorithm based on neural networks to take advantage of these investment products.
A deep neural network, which outputs the allocation weight of each asset at a time interval, is trained to maximize the Sharpe ratio.
A novel loss term is proposed to regulate the network's bias towards a specific asset, pushing it to learn an allocation strategy that is close to a minimum-variance strategy (a schematic objective of this kind is sketched after this list).
arXiv Detail & Related papers (2023-10-02T12:33:28Z) - Learning to Learn Financial Networks for Optimising Momentum Strategies [14.049479722250835]
Network momentum provides a novel type of risk premium, which exploits the interconnections among assets in a financial network to predict future returns.
We propose L2GMOM, an end-to-end machine learning framework that simultaneously learns financial networks and optimises trading signals for network momentum strategies.
Backtesting on 64 continuous futures contracts demonstrates a significant improvement in portfolio profitability and risk control, with a Sharpe ratio of 1.74 across a 20-year period.
arXiv Detail & Related papers (2023-08-23T15:51:29Z) - Optimizing Credit Limit Adjustments Under Adversarial Goals Using
Reinforcement Learning [42.303733194571905]
We seek to find and automate an optimal credit card limit adjustment policy by employing reinforcement learning techniques.
Our research establishes a conceptual structure for applying a reinforcement learning framework to credit limit adjustment.
arXiv Detail & Related papers (2023-06-27T16:10:36Z) - Anti-Concentrated Confidence Bonuses for Scalable Exploration [57.91943847134011]
Intrinsic rewards play a central role in handling the exploration-exploitation trade-off.
We introduce anti-concentrated confidence bounds for efficiently approximating the elliptical bonus.
We develop a practical variant for deep reinforcement learning that is competitive with contemporary intrinsic rewards on Atari benchmarks.
arXiv Detail & Related papers (2021-10-21T15:25:15Z) - Online Apprenticeship Learning [58.45089581278177]
In Apprenticeship Learning (AL), we are given a Markov Decision Process (MDP) without access to the cost function.
The goal is to find a policy that matches the expert's performance on some predefined set of cost functions.
We show that the OAL problem can be effectively solved by combining two mirror descent based no-regret algorithms.
arXiv Detail & Related papers (2021-02-13T12:57:51Z) - Model-Augmented Q-learning [112.86795579978802]
We propose a model-free RL (MFRL) framework that is augmented with the components of model-based RL.
Specifically, we propose to estimate not only the $Q$-values but also both the transition and the reward with a shared network.
We show that the proposed scheme, called Model-augmented $Q$-learning (MQL), obtains a policy-invariant solution which is identical to the solution obtained by learning with true reward.
arXiv Detail & Related papers (2021-02-07T17:56:50Z) - Cost-Sensitive Portfolio Selection via Deep Reinforcement Learning [100.73223416589596]
We propose a cost-sensitive portfolio selection method with deep reinforcement learning.
Specifically, a novel two-stream portfolio policy network is devised to extract both price series patterns and asset correlations.
A new cost-sensitive reward function is developed to maximize the accumulated return and constrain both costs via reinforcement learning.
arXiv Detail & Related papers (2020-03-06T06:28:17Z)
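As a concrete illustration of the Sharpe-ratio objective with a variance-style regularizer mentioned in the cryptocurrency portfolio entry above, here is a minimal PyTorch sketch of such a training loss. It is not the loss term from that paper: the tensor shapes, the lambda_var weight, and the use of portfolio-return variance as the penalty are assumptions made for illustration.

```python
# Schematic Sharpe-ratio loss with a variance penalty; illustrative only,
# not the loss proposed in the cited paper.
import torch


def sharpe_loss(weights, asset_returns, lambda_var: float = 0.1, eps: float = 1e-8):
    """weights: [T, n_assets] allocations, rows summing to 1.
    asset_returns: [T, n_assets] per-period simple returns."""
    port_returns = (weights * asset_returns).sum(dim=-1)       # [T] portfolio returns
    sharpe = port_returns.mean() / (port_returns.std() + eps)  # per-period Sharpe ratio
    variance_penalty = port_returns.var()                      # nudges toward low variance
    return -sharpe + lambda_var * variance_penalty             # minimise this quantity


if __name__ == "__main__":
    T, n_assets = 250, 8
    raw = torch.randn(T, n_assets, requires_grad=True)
    weights = torch.softmax(raw, dim=-1)             # stand-in for a network's output
    asset_returns = 0.01 * torch.randn(T, n_assets)  # simulated daily returns
    loss = sharpe_loss(weights, asset_returns)
    loss.backward()                                  # gradients flow back to `raw`
    print(float(loss))
```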