A Deep Reinforcement Learning Framework For Financial Portfolio Management
- URL: http://arxiv.org/abs/2409.08426v1
- Date: Tue, 3 Sep 2024 20:11:04 GMT
- Title: A Deep Reinforcement Learning Framework For Financial Portfolio Management
- Authors: Jinyang Li
- Abstract summary: The portfolio management problem is solved with deep learning techniques.
Three different instances are used to realize this framework, namely a Convolutional Neural Network (CNN), a basic Recurrent Neural Network (RNN), and a Long Short-Term Memory (LSTM) network.
We successfully replicate the original paper, which achieves superior returns in the cryptocurrency market, but the framework does not perform as well when applied to the stock market.
- Score: 3.186092314772714
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this research paper, we investigate the paper "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem" [arXiv:1706.10059], in which the portfolio management problem is solved with deep learning techniques. The original paper proposes a financial-model-free reinforcement learning framework, which consists of the Ensemble of Identical Independent Evaluators (EIIE) topology, a Portfolio-Vector Memory (PVM), an Online Stochastic Batch Learning (OSBL) scheme, and a fully exploiting and explicit reward function. Three different instances are used to realize this framework, namely a Convolutional Neural Network (CNN), a basic Recurrent Neural Network (RNN), and a Long Short-Term Memory (LSTM) network. The performance is then examined by comparing it to a number of recently reviewed or published portfolio-selection strategies. We successfully replicate the original implementations and evaluations. In addition, we apply this framework to the stock market, instead of the cryptocurrency market used in the original paper. Our experiment in the cryptocurrency market is consistent with the original paper and achieves superior returns, but the framework does not perform as well when applied to the stock market.
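For reference, here is a minimal PyTorch sketch of what the CNN instance of the EIIE topology and the explicit log-return reward could look like. The layer sizes, the 50-period window, the 11-asset universe, and the simplified transaction-cost term are illustrative assumptions rather than the original implementation.

```python
import torch
import torch.nn as nn

class EIIECnn(nn.Module):
    """EIIE-style CNN evaluator: identical convolutions score every asset from
    its own price history, and a softmax with a cash bias turns the scores
    into portfolio weights (hyperparameters here are illustrative assumptions)."""

    def __init__(self, n_features=3, n_assets=11, window=50):
        super().__init__()
        # Kernels span the time axis only, so the same evaluator is applied
        # independently to each asset ("identical independent evaluators").
        self.conv1 = nn.Conv2d(n_features, 2, kernel_size=(1, 3))
        self.conv2 = nn.Conv2d(2, 20, kernel_size=(1, window - 2))
        # A 1x1 convolution mixes the feature maps with the previous portfolio
        # weights read from the Portfolio-Vector Memory.
        self.conv3 = nn.Conv2d(20 + 1, 1, kernel_size=(1, 1))
        self.cash_bias = nn.Parameter(torch.zeros(1))

    def forward(self, prices, prev_weights):
        # prices: (batch, n_features, n_assets, window)
        # prev_weights: (batch, n_assets), non-cash weights from the PVM
        x = torch.relu(self.conv1(prices))
        x = torch.relu(self.conv2(x))              # (batch, 20, n_assets, 1)
        w = prev_weights[:, None, :, None]         # (batch, 1, n_assets, 1)
        x = self.conv3(torch.cat([x, w], dim=1)).squeeze(-1).squeeze(1)
        cash = self.cash_bias.expand(x.size(0), 1)
        return torch.softmax(torch.cat([cash, x], dim=1), dim=1)

def log_return_reward(weights, price_relatives, commission=0.0025):
    # weights, price_relatives: (T, n_assets + 1), cash column first with a
    # constant price relative of 1. The reward is the average log growth of
    # portfolio value, shrunk by a simplified proportional cost approximation.
    growth = (weights * price_relatives).sum(dim=1)
    turnover = (weights[1:] - weights[:-1]).abs().sum(dim=1)
    cost = torch.cat([growth.new_ones(1), 1.0 - commission * turnover])
    return torch.log(growth * cost).mean()
```

The RNN and LSTM instances would replace the convolutional evaluator with a recurrent one per asset while keeping the same weight sharing, PVM input, and reward.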
Related papers
- CITER: Collaborative Inference for Efficient Large Language Model Decoding with Token-Level Routing [56.98081258047281]
CITER enables efficient collaboration between small and large language models (SLMs & LLMs) through a token-level routing strategy.
We formulate router training as a policy optimization problem, where the router receives rewards based on both the quality of predictions and the inference costs of generation.
Our experiments show that CITER reduces the inference costs while preserving high-quality generation, offering a promising solution for real-time and resource-constrained applications.
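As a rough illustration of the reward described above, a token-level routing reward that trades off prediction quality against inference cost might be shaped as follows; the cost values and weighting are assumptions, not CITER's exact formulation.

```python
import torch

def router_reward(routed_to_llm, token_correct, llm_cost=1.0, slm_cost=0.1, alpha=0.5):
    """Hypothetical token-level routing reward: credit for a correct prediction
    minus a charge for the cost of whichever model produced the token
    (cost values and the trade-off weight alpha are illustrative)."""
    routed = routed_to_llm.float()          # 1.0 where the LLM was used
    cost = routed * llm_cost + (1.0 - routed) * slm_cost
    return token_correct.float() - alpha * cost
```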
arXiv Detail & Related papers (2025-02-04T03:36:44Z) - Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey [93.72125112643596]
Next Token Prediction (NTP) is a versatile training objective for machine learning tasks across various modalities.
This survey introduces a comprehensive taxonomy that unifies both understanding and generation within multimodal learning.
The proposed taxonomy covers five key aspects: Multimodal tokenization, MMNTP model architectures, unified task representation, datasets & evaluation, and open challenges.
arXiv Detail & Related papers (2024-12-16T05:02:25Z) - Exact Certification of (Graph) Neural Networks Against Label Poisoning [50.87615167799367]
Machine learning models are vulnerable to label flipping, i.e., the adversarial modification (poisoning) of training labels to compromise performance.
We introduce an exact certification method, deriving both sample-wise and collective certificates.
Our work presents the first exact certificate against a poisoning attack ever derived for neural networks, which could be of independent interest.
arXiv Detail & Related papers (2024-11-30T17:05:12Z) - Trustworthy Machine Learning [57.08542102068706]
This textbook on Trustworthy Machine Learning (TML) covers a theoretical and technical background of four key topics in TML.
We discuss important classical and contemporary research papers of the aforementioned fields and uncover and connect their underlying intuitions.
arXiv Detail & Related papers (2023-10-12T11:04:17Z) - Cryptocurrency Portfolio Optimization by Neural Networks [81.20955733184398]
This paper proposes an effective algorithm based on neural networks to take advantage of these cryptocurrency investment products.
A deep neural network, which outputs the allocation weight of each asset at each time interval, is trained to maximize the Sharpe ratio.
A novel loss term is proposed to regulate the network's bias towards a specific asset, thus forcing the network to learn an allocation strategy that is close to a minimum variance strategy.
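A hedged sketch of such a training objective is shown below: a differentiable negative-Sharpe-ratio loss plus a concentration penalty. The exact form of the paper's novel loss term is not given in the summary, so the equal-weight penalty here is an assumption.

```python
import torch

def sharpe_loss(weights, returns, reg_coef=0.1, eps=1e-8):
    """Negative Sharpe ratio of the realized portfolio returns plus a penalty
    on concentrated allocations (an assumed stand-in for the paper's
    bias-regulating loss term)."""
    # weights, returns: (T, n_assets); the network outputs weights per interval.
    portfolio_returns = (weights * returns).sum(dim=1)
    sharpe = portfolio_returns.mean() / (portfolio_returns.std() + eps)
    # Penalize deviation from the equal-weight portfolio so the network does
    # not load up on a single asset, nudging it toward low-variance allocations.
    concentration = ((weights - 1.0 / weights.size(1)) ** 2).sum(dim=1).mean()
    return -sharpe + reg_coef * concentration
```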
arXiv Detail & Related papers (2023-10-02T12:33:28Z) - Model-Augmented Q-learning [112.86795579978802]
We propose a model-free RL (MFRL) framework that is augmented with the components of model-based RL.
Specifically, we propose to estimate not only the $Q$-values but also both the transition and the reward with a shared network.
We show that the proposed scheme, called Model-augmented $Q$-learning (MQL), obtains a policy-invariant solution which is identical to the solution obtained by learning with the true reward.
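The following is a minimal sketch of the shared-network idea, assuming a fully connected trunk and a discrete action space; the head layout and sizes are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MQLNetwork(nn.Module):
    """Shared trunk with three heads, in the spirit of Model-augmented
    Q-learning: Q-values, a reward estimate, and a next-state (transition)
    estimate are all produced from one shared representation."""

    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.q_head = nn.Linear(hidden, n_actions)                       # Q(s, a)
        self.reward_head = nn.Linear(hidden, n_actions)                  # r_hat(s, a)
        self.transition_head = nn.Linear(hidden, n_actions * state_dim)  # s'_hat(s, a)

    def forward(self, state):
        # state: (batch, state_dim)
        h = self.trunk(state)
        q = self.q_head(h)
        r = self.reward_head(h)
        s_next = self.transition_head(h).view(-1, q.size(1), state.size(1))
        return q, r, s_next
```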
arXiv Detail & Related papers (2021-02-07T17:56:50Z) - Deep reinforcement learning for portfolio management based on the empirical study of Chinese stock market [3.5952664589125916]
This paper aims to verify that deep reinforcement learning, a current cutting-edge technology, can be applied to portfolio management.
In experiments, we apply the model to several randomly selected portfolios, including the CSI300 index, which represents the market's rate of return, and randomly selected constituents of the CSI500.
arXiv Detail & Related papers (2020-12-26T16:25:20Z) - Detecting and adapting to crisis pattern with context based Deep Reinforcement Learning [6.224519494738852]
We present an innovative DRL framework consisting of two sub-networks fed respectively with the past performances and standard deviations of portfolio strategies, and with additional contextual features.
Results on the test set show that this approach substantially outperforms traditional portfolio optimization methods such as Markowitz and is able to detect and anticipate crises such as the Covid crisis.
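A hedged sketch of the two sub-network idea follows, assuming one branch for strategy statistics (past performances and rolling standard deviations) and one for contextual features; the layer sizes and the way the branches are merged are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TwoBranchAllocator(nn.Module):
    """Illustrative two-branch network: one branch consumes past performances
    and standard deviations of candidate strategies, the other consumes
    contextual features; their embeddings are merged into allocation weights."""

    def __init__(self, n_strategies, lookback, n_context, hidden=64):
        super().__init__()
        # Branch 1: strategy statistics, stacked as two channels (returns, stds).
        self.strategy_branch = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * n_strategies * lookback, hidden),
            nn.ReLU())
        # Branch 2: contextual market features.
        self.context_branch = nn.Sequential(nn.Linear(n_context, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_strategies)

    def forward(self, strategy_stats, context):
        # strategy_stats: (batch, 2, n_strategies, lookback); context: (batch, n_context)
        merged = torch.cat([self.strategy_branch(strategy_stats),
                            self.context_branch(context)], dim=1)
        return torch.softmax(self.head(merged), dim=1)
```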
arXiv Detail & Related papers (2020-09-07T12:11:08Z) - Application of Deep Q-Network in Portfolio Management [7.525667739427784]
This paper introduces a strategy based on the classic deep reinforcement learning algorithm Deep Q-Network (DQN) for portfolio management in the stock market.
The Deep Q-Network is a deep neural network optimized by Q-learning.
The profit of the DQN algorithm is 30% higher than that of the other strategies.
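For concreteness, here is a generic single Deep Q-Network update step in PyTorch; the paper's portfolio-specific state and action design is not shown, so this is a sketch of the underlying Q-learning optimization rather than the paper's exact procedure.

```python
import torch
import torch.nn as nn

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One DQN update on a replay batch: regress Q(s, a) toward the target
    r + gamma * max_a' Q_target(s', a')."""
    states, actions, rewards, next_states, dones = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + gamma * (1.0 - dones) * target_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```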
arXiv Detail & Related papers (2020-03-13T16:20:51Z) - Using Reinforcement Learning in the Algorithmic Trading Problem [18.21650781888097]
Trading on the stock exchange is framed as a game with the Markov property, consisting of states, actions, and rewards.
A system for trading the fixed volume of a financial instrument is proposed and experimentally tested.
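Below is a toy sketch of such a Markov trading environment for a fixed volume of a single instrument, with an assumed price-window state and a three-action space; it is purely illustrative and not the paper's setup.

```python
import numpy as np

class FixedVolumeTradingEnv:
    """Toy Markov environment: the state is a window of recent prices plus the
    current position, actions switch a fixed-volume position, and the reward
    is the one-step mark-to-market profit and loss."""

    def __init__(self, prices, window=10, volume=1.0):
        self.prices = np.asarray(prices, dtype=float)
        self.window = window
        self.volume = volume

    def reset(self):
        self.t = self.window
        self.position = 0.0
        return self._state()

    def _state(self):
        return np.append(self.prices[self.t - self.window:self.t], self.position)

    def step(self, action):
        # action: 0 = go short, 1 = keep the current position, 2 = go long
        self.position = {0: -self.volume, 1: self.position, 2: self.volume}[action]
        reward = self.position * (self.prices[self.t] - self.prices[self.t - 1])
        self.t += 1
        done = self.t >= len(self.prices)
        return (self._state() if not done else None), reward, done
```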
arXiv Detail & Related papers (2020-02-26T14:30:18Z)