Deep Deterministic Portfolio Optimization
- URL: http://arxiv.org/abs/2003.06497v2
- Date: Thu, 9 Apr 2020 10:56:24 GMT
- Title: Deep Deterministic Portfolio Optimization
- Authors: Ayman Chaouki, Stephen Hardiman, Christian Schmidt, Emmanuel Série, and Joachim de Lataillade
- Abstract summary: The aim of this work is to test reinforcement learning algorithms on conceptually simple, but mathematically non-trivial, trading environments.
We study the deep deterministic policy gradient algorithm and show that such a reinforcement learning agent can successfully recover the essential features of the optimal trading strategies.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Can deep reinforcement learning algorithms be exploited as solvers for
optimal trading strategies? The aim of this work is to test reinforcement
learning algorithms on conceptually simple, but mathematically non-trivial,
trading environments. The environments are chosen such that an optimal or
close-to-optimal trading strategy is known. We study the deep deterministic
policy gradient algorithm and show that such a reinforcement learning agent can
successfully recover the essential features of the optimal trading strategies
and achieve close-to-optimal rewards.
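The core idea — a deterministic policy trained by gradient ascent on the expected trading reward, in an environment whose optimum is known in closed form — can be sketched minimally. The toy market model, the linear policy, and every parameter below are illustrative assumptions, not the paper's actual environments; a finite difference stands in for the critic's action-gradient of full DDPG.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_market(n_steps=2000):
    """Toy market: a predictor p_t and the price move dp_t it partially forecasts."""
    p = rng.standard_normal(n_steps)
    dp = 0.5 * p + 0.1 * rng.standard_normal(n_steps)
    return p, dp

def mean_reward(theta, p, dp):
    """Average reward of the linear policy a_t = theta * p_t:
    trading PnL minus a quadratic position cost."""
    a = theta * p
    return np.mean(a * dp - 0.05 * a ** 2)

# Deterministic policy-gradient ascent; a central finite difference on a
# common noise sample replaces the learned critic's gradient.
theta, lr, eps = 0.0, 0.5, 1e-3
for _ in range(300):
    p, dp = sample_market()
    grad = (mean_reward(theta + eps, p, dp)
            - mean_reward(theta - eps, p, dp)) / (2 * eps)
    theta += lr * grad

# Closed-form optimum: E[reward] = 0.5*theta - 0.05*theta**2, maximal at theta = 5,
# so the learned policy can be checked against the known optimal strategy.
print(f"learned theta = {theta:.2f}")
```

This mirrors the paper's experimental logic at toy scale: because the environment's optimal strategy is known analytically, the learned policy parameter can be compared directly against it.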
Related papers
- Deep Reinforcement Learning for Online Optimal Execution Strategies [49.1574468325115]
This paper tackles the challenge of learning non-Markovian optimal execution strategies in dynamic financial markets.
We introduce a novel actor-critic algorithm based on Deep Deterministic Policy Gradient (DDPG).
We show that our algorithm successfully approximates the optimal execution strategy.
arXiv Detail & Related papers (2024-10-17T12:38:08Z)
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving it.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Satisficing Exploration for Deep Reinforcement Learning [26.73584163318647]
In complex environments that approach the vastness and scale of the real world, attaining optimal performance may in fact be an entirely intractable endeavor.
Recent work has leveraged tools from information theory to design agents that deliberately forgo optimal solutions in favor of sufficiently-satisfying or satisficing solutions.
We extend an agent that directly represents uncertainty over the optimal value function, allowing it both to bypass the need for model-based planning and to learn satisficing policies.
arXiv Detail & Related papers (2024-07-16T21:28:03Z)
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Robust Utility Optimization via a GAN Approach [3.74142789780782]
We propose a generative adversarial network (GAN) approach to solve robust utility optimization problems.
In particular, we model both the investor and the market by neural networks (NN) and train them in a mini-max zero-sum game.
arXiv Detail & Related papers (2024-03-22T14:36:39Z)
- From Bandits Model to Deep Deterministic Policy Gradient, Reinforcement Learning with Contextual Information [4.42532447134568]
In this study, we use two methods to overcome the issue with contextual information.
In order to investigate strategic trading in quantitative markets, we merge the earlier financial trading strategy known as constant proportion portfolio insurance (CPPI) into deep deterministic policy gradient (DDPG).
The experimental results show that both methods can accelerate the progress of reinforcement learning to obtain the optimal solution.
arXiv Detail & Related papers (2023-10-01T11:25:20Z)
- Reinforcement Learning for Credit Index Option Hedging [2.568904868787359]
In this paper, we focus on finding the optimal hedging strategy of a credit index option using reinforcement learning.
We take a practical approach with a focus on realism, i.e., discrete time and transaction costs; we even test our policy on real market data.
arXiv Detail & Related papers (2023-07-19T09:03:41Z)
- The Information Geometry of Unsupervised Reinforcement Learning [133.20816939521941]
Unsupervised skill discovery is a class of algorithms that learn a set of policies without access to a reward function.
We show that unsupervised skill discovery algorithms do not learn skills that are optimal for every possible reward function.
arXiv Detail & Related papers (2021-10-06T13:08:36Z)
- Universal Trading for Order Execution with Oracle Policy Distillation [99.57416828489568]
We propose a novel universal trading policy optimization framework to bridge the gap between the noisy yet imperfect market states and the optimal action sequences for order execution.
We show that our framework can better guide the learning of the common policy towards practically optimal execution by an oracle teacher with perfect information.
arXiv Detail & Related papers (2021-01-28T05:52:18Z)
- Mixed Strategies for Robust Optimization of Unknown Objectives [93.8672371143881]
We consider robust optimization problems, where the goal is to optimize an unknown objective function against the worst-case realization of an uncertain parameter.
We design a novel sample-efficient algorithm GP-MRO, which sequentially learns about the unknown objective from noisy point evaluations.
GP-MRO seeks to discover a robust and randomized mixed strategy, that maximizes the worst-case expected objective value.
arXiv Detail & Related papers (2020-02-28T09:28:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.