A Learnheuristic Approach to A Constrained Multi-Objective Portfolio
Optimisation Problem
- URL: http://arxiv.org/abs/2304.06675v1
- Date: Thu, 13 Apr 2023 17:05:45 GMT
- Title: A Learnheuristic Approach to A Constrained Multi-Objective Portfolio
Optimisation Problem
- Authors: Sonia Bullah and Terence L. van Zyl
- Abstract summary: This paper studies multi-objective portfolio optimisation.
It aims to maximise the expected return of a portfolio while minimising its risk.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-objective portfolio optimisation is a critical problem researched
across various fields of study as it achieves the objective of maximising the
expected return while minimising the risk of a given portfolio at the same
time. However, many studies fail to include realistic constraints in the model,
which limits practical trading strategies. This study introduces realistic
constraints, such as transaction and holding costs, into an optimisation model.
Due to the non-convex nature of this problem, metaheuristic algorithms, such as
NSGA-II, R-NSGA-II, NSGA-III and U-NSGA-III, play a vital role in solving it.
Furthermore, a learnheuristic approach is taken, in which surrogate models
enhance the metaheuristics employed. These algorithms are then compared
to the baseline metaheuristic algorithms, which solve a constrained,
multi-objective optimisation problem without using learnheuristics. The results
of this study show that, despite taking significantly longer to run to
completion, the learnheuristic algorithms outperform the baseline algorithms in
terms of hypervolume and rate of convergence. Furthermore, the backtesting
results indicate that utilising learnheuristics to generate weights for asset
allocation leads to a lower risk percentage, higher expected return and higher
Sharpe ratio than backtesting without using learnheuristics. This leads us to
conclude that using learnheuristics to solve a constrained, multi-objective
portfolio optimisation problem produces superior, preferable results to
solving the problem without them.
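The quantities the abstract optimises and reports (expected return net of transaction costs, portfolio risk, the Sharpe ratio used in backtesting, and the hypervolume used to compare algorithms) can be sketched in a few lines. This is a minimal illustration only, not the paper's actual model or data: the expected returns, covariance matrix, cost rate and risk-free rate below are made-up assumptions, and the cost term is a simplified proportional stand-in for the paper's transaction and holding constraints.

```python
import math

# Hypothetical toy inputs (NOT the paper's data): expected returns and a
# covariance matrix for three assets, plus the previous portfolio weights.
mu = [0.10, 0.07, 0.04]
cov = [[0.04, 0.01, 0.00],
       [0.01, 0.02, 0.00],
       [0.00, 0.00, 0.01]]

def objectives(w, w_prev, cost_rate=0.001):
    """Bi-objective evaluation: (net expected return, risk).

    Transaction costs are charged proportionally on the traded amount
    |w - w_prev|; holding costs could be added the same way.
    """
    n = len(w)
    gross = sum(w[i] * mu[i] for i in range(n))
    cost = cost_rate * sum(abs(w[i] - w_prev[i]) for i in range(n))
    var = sum(w[i] * cov[i][j] * w[j] for i in range(n) for j in range(n))
    return gross - cost, math.sqrt(var)  # maximise the first, minimise the second

def sharpe(net_return, risk, risk_free=0.02):
    """Sharpe ratio, one of the backtesting comparison metrics."""
    return (net_return - risk_free) / risk

def hypervolume_2d(front, ref):
    """Hypervolume of a mutually non-dominated 2-D minimisation front.

    A larger value relative to the reference point `ref` means a better
    approximation of the Pareto front.
    """
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Evaluate one candidate portfolio against an equal-weight previous one.
ret, risk = objectives([0.5, 0.3, 0.2], [1 / 3] * 3)
hv = hypervolume_2d([(1, 3), (2, 2), (3, 1)], ref=(4, 4))
```

In the paper's setting the weight vectors would come from the NSGA-II-family search, with a surrogate model standing in for expensive objective evaluations; the `objectives` function here simply shows why the problem is a trade-off that such algorithms explore.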
Related papers
- Neural Active Learning Beyond Bandits [69.99592173038903]
We study both stream-based and pool-based active learning with neural network approximations.
We propose two algorithms based on the newly designed exploitation and exploration neural networks for stream-based and pool-based active learning.
arXiv Detail & Related papers (2024-04-18T21:52:14Z)
- Multi-Objective Reinforcement Learning-based Approach for Pressurized Water Reactor Optimization [0.0]
PEARL distinguishes itself from traditional policy-based multi-objective Reinforcement Learning methods by learning a single policy.
Several versions inspired from deep learning and evolutionary techniques have been crafted, catering to both unconstrained and constrained problem domains.
It is tested on two practical PWR core Loading Pattern optimization problems to showcase its real-world applicability.
arXiv Detail & Related papers (2023-12-15T20:41:09Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics by minimizing the population loss that are more suitable in active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- An intelligent algorithmic trading based on a risk-return reinforcement learning algorithm [0.0]
This paper proposes a novel portfolio optimization model using an improved deep reinforcement learning algorithm.
The proposed algorithm is based on an actor-critic architecture, in which the main task of the critic network is to learn the distribution of the portfolio's cumulative return.
A multi-process method called Ape-x is used to accelerate deep reinforcement learning training.
arXiv Detail & Related papers (2022-08-23T03:20:06Z)
- Adversarial Robustness with Semi-Infinite Constrained Learning [177.42714838799924]
The sensitivity of deep learning to input perturbations has raised serious questions about its use in safety-critical domains.
We propose a hybrid Langevin Monte Carlo training approach to mitigate this issue.
We show that our approach can mitigate the trade-off between state-of-the-art performance and robustness.
arXiv Detail & Related papers (2021-10-29T13:30:42Z)
- Runtime Analysis of Single- and Multi-Objective Evolutionary Algorithms for Chance Constrained Optimization Problems with Normally Distributed Random Variables [11.310502327308575]
We study the scenario of components that are independent and normally distributed.
We introduce a multi-objective formulation of the problem which trades off the expected cost and its variance.
We prove that this approach can also be used to compute a set of optimal solutions for the chance constrained minimum spanning tree problem.
arXiv Detail & Related papers (2021-09-13T09:24:23Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Reparameterized Variational Divergence Minimization for Stable Imitation [57.06909373038396]
We study the extent to which variations in the choice of probabilistic divergence may yield more performant ILO algorithms.
We contribute a reparameterization trick for adversarial imitation learning to alleviate the challenges of the promising $f$-divergence minimization framework.
Empirically, we demonstrate that our design choices allow for ILO algorithms that outperform baseline approaches and more closely match expert performance in low-dimensional continuous-control tasks.
arXiv Detail & Related papers (2020-06-18T19:04:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.