Simplex Decomposition for Portfolio Allocation Constraints in Reinforcement Learning
- URL: http://arxiv.org/abs/2404.10683v1
- Date: Tue, 16 Apr 2024 16:00:59 GMT
- Title: Simplex Decomposition for Portfolio Allocation Constraints in Reinforcement Learning
- Authors: David Winkel, Niklas Strauß, Matthias Schubert, Thomas Seidl
- Abstract summary: We propose a novel approach to handle allocation constraints based on a decomposition of the constraint action space into a set of unconstrained allocation problems.
We show that the action space of the task is equivalent to the decomposed action space and introduce a new reinforcement learning (RL) approach, CAOSD, built on top of the decomposition.
- Score: 4.1573460459258245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Portfolio optimization tasks describe sequential decision problems in which the investor's wealth is distributed across a set of assets. Allocation constraints are used to enforce minimal or maximal investments into particular subsets of assets to control for objectives such as limiting the portfolio's exposure to a certain sector due to environmental concerns. Although methods for constrained Reinforcement Learning (CRL) can optimize policies while considering allocation constraints, it can be observed that these general methods yield suboptimal results. In this paper, we propose a novel approach to handle allocation constraints based on a decomposition of the constraint action space into a set of unconstrained allocation problems. In particular, we examine this approach for the case of two constraints. For example, an investor may wish to invest at least a certain percentage of the portfolio into green technologies while limiting the investment in the fossil energy sector. We show that the action space of the task is equivalent to the decomposed action space, and introduce a new reinforcement learning (RL) approach CAOSD, which is built on top of the decomposition. The experimental evaluation on real-world Nasdaq-100 data demonstrates that our approach consistently outperforms state-of-the-art CRL benchmarks for portfolio optimization.
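To make the decomposition idea concrete, consider the simplest case of a single minimum-investment constraint: any portfolio that invests at least a fraction b in a group of assets can be written as a fixed blend of two unconstrained simplex allocations. The sketch below illustrates this reduction only; the numbers are hypothetical, and CAOSD itself handles two constraints and learns the sub-allocations with RL.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative setup (hypothetical numbers, not from the paper): 5 assets,
# where assets 0 and 1 are "green" and must jointly receive at least b = 30%.
n_assets, green, b = 5, [0, 1], 0.30
rng = np.random.default_rng(0)

# Two *unconstrained* sub-allocations, e.g. outputs of a policy network:
u = softmax(rng.normal(size=len(green)))  # simplex over the green assets only
v = softmax(rng.normal(size=n_assets))    # simplex over all assets

# Recombine: w = b * (green-only allocation) + (1 - b) * (free allocation)
w = (1 - b) * v
w[green] += b * u

assert np.isclose(w.sum(), 1.0)
assert w[green].sum() >= b - 1e-12  # minimum-investment constraint holds by construction
print(np.round(w, 3), w[green].sum())
```

Because the constraint is satisfied by construction, the agent can act in unconstrained sub-spaces without projections or penalty terms; the paper extends this kind of reduction to the two-constraint case.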
Related papers
- Autoregressive Policy Optimization for Constrained Allocation Tasks [4.316765170255551]
We propose a new method for constrained allocation tasks based on an autoregressive process to sequentially sample allocations for each entity.
In addition, we introduce a novel de-biasing mechanism to counter the initial bias caused by sequential sampling.
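A minimal sketch of the sequential-sampling idea follows; the Beta distribution is an arbitrary stand-in for a learned conditional policy, not the paper's parameterization.

```python
import numpy as np

def sample_allocation(n_entities, rng, a=2.0, b=2.0):
    """Sample a point on the simplex entity by entity.

    Each step draws the fraction of the *remaining* budget assigned to the
    current entity; Beta(a, b) is an arbitrary stand-in for a learned
    conditional policy pi(a_i | a_<i, state). The last entity gets the rest.
    """
    remaining, alloc = 1.0, []
    for _ in range(n_entities - 1):
        frac = rng.beta(a, b)
        alloc.append(frac * remaining)
        remaining -= alloc[-1]
    alloc.append(remaining)
    return np.array(alloc)

rng = np.random.default_rng(1)
w = sample_allocation(4, rng)
print(np.round(w, 3), w.sum())  # non-negative, sums to 1 by construction
```

Early entities draw from the full budget while later ones only see what is left, so the raw scheme favors the sampling order; that order dependence is the initial bias the proposed de-biasing mechanism counters.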
arXiv Detail & Related papers (2024-09-27T13:27:15Z)
- Resilient Constrained Reinforcement Learning [87.4374430686956]
We study a class of constrained reinforcement learning (RL) problems in which the constraint specifications are not fixed before training.
Identifying appropriate constraint specifications is challenging because the trade-off between the reward objective and constraint satisfaction is not defined in advance.
We propose a new constrained RL approach that searches for policy and constraint specifications together.
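A toy sketch of the joint search over policy and constraint specification, using a scalar decision variable and a quadratic relaxation penalty; all functional forms and constants are illustrative assumptions, not the paper's formulation.

```python
# Toy stand-in: maximize R(x) = -(x - 2)^2 subject to C(x) = x <= c0 + s,
# where the slack s relaxes the constraint specification and is itself
# penalized, so policy and specification are searched for together.
eta, c0, beta = 0.05, 0.5, 5.0   # step size, initial threshold, relaxation penalty
x, lam, s = 0.0, 0.0, 0.0        # decision variable, multiplier, slack

for _ in range(2000):
    x += eta * (-2.0 * (x - 2.0) - lam)         # ascent on R(x) - lam * C(x)
    s = max(0.0, s + eta * (lam - beta * s))    # relax the spec when pressure is high
    lam = max(0.0, lam + eta * (x - (c0 + s)))  # tighten when the constraint is violated

print(f"x={x:.3f}, slack={s:.3f}, effective threshold={c0 + s:.3f}")
```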
arXiv Detail & Related papers (2023-12-28T18:28:23Z)
- Sparse Index Tracking: Simultaneous Asset Selection and Capital Allocation via $\ell_0$-Constrained Portfolio [7.5684339230894135]
A sparse portfolio is preferable to a full portfolio in terms of reducing transaction costs and avoiding illiquid assets.
We propose a new problem formulation of sparse index tracking using an $\ell_p$-norm constraint.
Our approach offers a choice between constraints on portfolio and turnover sparsity, further reducing transaction costs by limiting asset updates at each rebalancing interval.
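A crude sketch of the sparse-tracking objective via projected gradient descent with hard thresholding; this $\ell_0$-style heuristic stands in for the paper's smoother $\ell_p$-norm constraint, and the data and hyperparameters are synthetic.

```python
import numpy as np

def track_index(X, y, k, iters=300, lr=5.0):
    """Sparse index tracking by projected gradient + hard thresholding.

    Minimizes ||X @ w - y||^2 while keeping at most k nonzero long-only
    weights on the simplex. X holds asset returns, y the index returns;
    all hyperparameters are illustrative.
    """
    n = X.shape[1]
    w = np.full(n, 1.0 / n)
    for _ in range(iters):
        w = w - lr * 2.0 * X.T @ (X @ w - y)      # gradient step on tracking error
        top = np.argsort(np.abs(w))[-k:]          # keep the k largest weights ...
        sparse = np.zeros(n)
        sparse[top] = np.clip(w[top], 0.0, None)  # ... long-only
        w = sparse / sparse.sum() if sparse.sum() > 0 else np.full(n, 1.0 / n)
    return w

rng = np.random.default_rng(2)
X = rng.normal(0.0, 0.01, size=(250, 20))  # 250 days of returns for 20 assets (synthetic)
y = X @ np.r_[0.5, 0.5, np.zeros(18)]      # "index" = 50/50 mix of assets 0 and 1
print(np.round(track_index(X, y, k=3), 3))
```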
arXiv Detail & Related papers (2023-07-22T04:47:30Z)
- When Demonstrations Meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have applications in safety-sensitive domains such as clinical decision making and autonomous driving.
arXiv Detail & Related papers (2023-02-15T04:14:20Z)
- Quantization for decentralized learning under subspace constraints [61.59416703323886]
We consider decentralized optimization problems where agents have individual cost functions to minimize subject to subspace constraints.
We propose and study an adaptive decentralized strategy where the agents employ differential randomized quantizers to compress their estimates.
The analysis shows that, under some general conditions on the quantization noise, the strategy is stable both in terms of mean-square error and average bit rate.
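A minimal sketch of a differential randomized quantizer of the kind studied here: each agent transmits a stochastically rounded version of the *change* in its estimate, which sender and receivers add to a shared shadow copy. The grid size and local dynamics below are arbitrary assumptions.

```python
import numpy as np

def randomized_quantize(x, step, rng):
    """Unbiased stochastic rounding of x onto a grid with the given step size."""
    low = np.floor(x / step) * step
    p = (x - low) / step                          # probability of rounding up
    return low + step * (rng.random(x.shape) < p)

# "Differential": quantizing only the change means a coarse grid suffices and
# errors do not accumulate. Illustrative only; the paper's quantizers, network
# topology, and subspace constraints are more general.
rng = np.random.default_rng(3)
w, w_hat = np.zeros(4), np.zeros(4)  # local estimate and its quantized shadow copy
for t in range(5):
    w += 0.1 * rng.normal(size=4)                                # stand-in for a local adaptation step
    w_hat += randomized_quantize(w - w_hat, step=0.05, rng=rng)  # transmit compressed difference
    print(t, np.round(np.abs(w - w_hat).max(), 3))               # residual stays within one grid step
```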
arXiv Detail & Related papers (2022-09-16T09:38:38Z)
- Financial Index Tracking via Quantum Computing with Cardinality Constraints [1.3854111346209868]
We demonstrate how to apply non-linear cardinality constraints, important for real-world asset management, to quantum portfolios.
We apply the methodology to create index-tracking portfolios.
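One standard way to impose a cardinality constraint on a QUBO solver such as D-Wave's is the quadratic penalty $(\sum_i x_i - K)^2$, expanded into linear and pairwise terms. The sketch below shows that generic trick, not the paper's specific encoding.

```python
import itertools
import numpy as np

def add_cardinality_penalty(Q, K, strength):
    """Add strength * (sum_i x_i - K)^2 to an upper-triangular QUBO matrix Q.

    For binary x, x_i^2 = x_i, so the square expands into linear terms
    strength * (1 - 2K) * x_i and pairwise terms 2 * strength * x_i * x_j;
    the constant strength * K^2 is dropped since it does not move the argmin.
    """
    n = Q.shape[0]
    for i in range(n):
        Q[i, i] += strength * (1 - 2 * K)
    for i, j in itertools.combinations(range(n), 2):
        Q[i, j] += 2 * strength
    return Q

Q = add_cardinality_penalty(np.zeros((4, 4)), K=2, strength=10.0)
print(Q)  # ready to hand to a QUBO sampler; minima select exactly K assets
```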
arXiv Detail & Related papers (2022-08-24T08:59:19Z)
- COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation [73.17078343706909]
We consider the offline constrained reinforcement learning (RL) problem, in which the agent aims to compute a policy that maximizes expected return while satisfying given cost constraints, learning only from a pre-collected dataset.
We present an offline constrained RL algorithm that optimizes the policy in the space of the stationary distribution.
Our algorithm, COptiDICE, directly estimates the stationary distribution corrections of the optimal policy with respect to returns, while constraining the cost upper bound, with the goal of yielding a cost-conservative policy for actual constraint satisfaction.
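For a small CMDP with *known* dynamics, the optimal occupancy d(s,a) solves a linear program, which the sketch below solves directly; COptiDICE instead estimates the same object, as corrections with respect to the data distribution, from an offline dataset. Toy random MDP with hypothetical numbers.

```python
import numpy as np
from scipy.optimize import linprog

nS, nA, gamma, cost_limit = 3, 2, 0.9, 0.3
rng = np.random.default_rng(4)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a, s'] transition kernel
r = rng.random((nS, nA))                       # reward
c = np.tile([0.0, 1.0], (nS, 1))               # action 1 incurs unit cost
mu0 = np.full(nS, 1.0 / nS)                    # initial state distribution

# Bellman-flow constraints:
# sum_a d(s,a) = (1-gamma) mu0(s) + gamma * sum_{s',a'} P[s',a',s] d(s',a')
A_eq = np.zeros((nS, nS * nA))
for s in range(nS):
    for s2 in range(nS):
        for a in range(nA):
            A_eq[s, s2 * nA + a] = float(s2 == s) - gamma * P[s2, a, s]

res = linprog(-r.ravel(), A_ub=[c.ravel()], b_ub=[cost_limit],
              A_eq=A_eq, b_eq=(1 - gamma) * mu0, bounds=(0, None))
d = res.x.reshape(nS, nA)
pi = d / d.sum(axis=1, keepdims=True)  # recover the policy from the occupancy
print("return:", r.ravel() @ res.x, "cost:", c.ravel() @ res.x)  # cost <= 0.3
```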
arXiv Detail & Related papers (2022-04-19T15:55:47Z)
- Comparing Classical-Quantum Portfolio Optimization with Enhanced Constraints [0.0]
We show how to add fundamental analysis to the portfolio optimization problem, adding in asset-specific and global constraints based on chosen balance sheet metrics.
We analyze the current state-of-the-art algorithms for solving such a problem using D-Wave's Quantum Processor and compare the quality of the solutions obtained to commercially-available optimization software.
arXiv Detail & Related papers (2022-03-09T17:46:32Z)
- Towards Deployment-Efficient Reinforcement Learning: Lower Bound and Optimality [141.89413461337324]
Deployment efficiency is an important criterion for many real-world applications of reinforcement learning (RL).
We propose a theoretical formulation for deployment-efficient RL (DE-RL) from an "optimization with constraints" perspective.
arXiv Detail & Related papers (2022-02-14T01:31:46Z)
- Shortest-Path Constrained Reinforcement Learning for Sparse Reward Tasks [59.419152768018506]
We show that any optimal policy necessarily satisfies the k-SP constraint.
We propose a novel cost function that penalizes a policy for violating the SP constraint, instead of completely excluding it.
Our experiments on MiniGrid, DeepMind Lab, Atari, and Fetch show that the proposed method significantly improves proximal policy optimization (PPO).
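A sketch of the soft-penalty idea as a gym-style environment wrapper that docks reward on state revisits, one simple symptom of leaving the shortest path. The paper's k-SP cost function is more precise; this wrapper is purely illustrative.

```python
import numpy as np

class SoftSPPenalty:
    """Penalize, rather than exclude, behavior that revisits a state and
    therefore cannot lie on a shortest path. Illustrative wrapper only."""

    def __init__(self, env, penalty=0.1):
        self.env, self.penalty = env, penalty
        self.visited = set()

    def reset(self):
        self.visited.clear()
        obs = self.env.reset()
        self.visited.add(self._key(obs))
        return obs

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        key = self._key(obs)
        if key in self.visited:     # a revisit implies a non-shortest path
            reward -= self.penalty  # soft cost instead of a hard constraint
        self.visited.add(key)
        return obs, reward, done, info

    @staticmethod
    def _key(obs):
        return tuple(np.asarray(obs).ravel().tolist())
```

PPO trained on the wrapped environment then trades return against path efficiency instead of discarding violating trajectories outright.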
arXiv Detail & Related papers (2021-07-13T21:39:21Z)
- A General Framework on Enhancing Portfolio Management with Reinforcement Learning [3.6985496077087743]
Portfolio management concerns continuous reallocation of funds and assets across financial instruments to meet the desired returns to risk profile.
Deep reinforcement learning (RL) has gained increasing interest in portfolio management, where RL agents are trained based on financial data to optimize the asset reallocation process.
We propose a general RL framework for asset management that enables continuous asset weights, short selling, and decision making with relevant features.
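One common way to allow short selling with bounded exposure is to normalize unconstrained policy outputs by their L1 norm, so negative entries act as short positions; this scheme is assumed here for illustration and is not necessarily the paper's parameterization.

```python
import numpy as np

def to_weights(raw):
    """Map unconstrained policy outputs to portfolio weights that permit
    short selling: L1 normalization caps gross exposure at 1 and lets
    negative entries act as short positions. One common scheme, assumed
    here for illustration."""
    raw = np.asarray(raw, dtype=float)
    l1 = np.abs(raw).sum()
    return raw / l1 if l1 > 0 else np.zeros_like(raw)

w = to_weights([1.2, -0.4, 0.9, -0.1])
print(np.round(w, 3), "gross exposure:", np.abs(w).sum())
```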
arXiv Detail & Related papers (2019-11-26T23:41:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.