Deep Learning Methods for S Shaped Utility Maximisation with a Random Reference Point
- URL: http://arxiv.org/abs/2410.05524v1
- Date: Mon, 7 Oct 2024 22:07:59 GMT
- Title: Deep Learning Methods for S Shaped Utility Maximisation with a Random Reference Point
- Authors: Ashley Davey, Harry Zheng
- Abstract summary: We develop several numerical methods for solving the problem using deep learning and duality methods.
We use deep learning methods to solve the associated Hamilton-Jacobi-Bellman equation for both the primal and dual problems.
We compare the solution of this non-concave problem to that of concavified utility, a random function depending on the benchmark, in both complete and incomplete markets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the portfolio optimisation problem where the terminal function is an S-shaped utility applied at the difference between the wealth and a random benchmark process. We develop several numerical methods for solving the problem using deep learning and duality methods. We use deep learning methods to solve the associated Hamilton-Jacobi-Bellman equation for both the primal and dual problems, and the adjoint equation arising from the stochastic maximum principle. We compare the solution of this non-concave problem to that of concavified utility, a random function depending on the benchmark, in both complete and incomplete markets. We give some numerical results for power and log utilities to show the accuracy of the suggested algorithms.
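The concavification the abstract compares against can be illustrated with a small sketch: below, an S-shaped power utility (Kahneman-Tversky style) is applied at the gap between wealth and a benchmark, and its upper concave envelope is computed on a grid. This is a toy illustration only: the benchmark is held fixed here (in the paper it is random), and the parameters `b`, `p`, `k` are assumed values, not the paper's.

```python
import numpy as np

def s_shaped(x, b=1.0, p=0.5, k=2.25):
    """Illustrative S-shaped power utility at the gap between wealth x
    and a fixed benchmark b (b, p, k are assumed, illustrative values)."""
    d = np.asarray(x, dtype=float) - b
    out = np.empty_like(d)
    gain = d >= 0
    out[gain] = d[gain] ** p             # concave on gains
    out[~gain] = -k * (-d[~gain]) ** p   # convex on losses
    return out

def concave_envelope(xs, ys):
    """Upper concave envelope of sampled points (xs strictly increasing):
    keep the upper hull, i.e. the indices whose chord slopes decrease."""
    hull = [0]
    for i in range(1, len(xs)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            s_prev = (ys[i1] - ys[i0]) / (xs[i1] - xs[i0])
            s_next = (ys[i] - ys[i1]) / (xs[i] - xs[i1])
            if s_prev <= s_next:  # middle point lies below the chord: drop it
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(xs, xs[hull], ys[hull])

xs = np.linspace(0.0, 3.0, 601)
ys = s_shaped(xs)
env = concave_envelope(xs, ys)  # concave majorant; equals ys on the gain branch
```

The envelope coincides with the utility above a tangent point and is linear below it, which is the structural feature exploited when the non-concave problem is replaced by its concavified counterpart.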
Related papers
- Hamilton-Jacobi Based Policy-Iteration via Deep Operator Learning [9.950128864603599]
We incorporate DeepONet with a recently developed policy scheme to numerically solve optimal control problems.
A notable feature of our approach is that once the neural network is trained, the solution to the optimal control problem and HJB equations can be inferred quickly.
arXiv Detail & Related papers (2024-06-16T12:53:17Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Multivariate Systemic Risk Measures and Computation by Deep Learning
Algorithms [63.03966552670014]
We discuss the key related theoretical aspects, with a particular focus on the fairness properties of primal optima and associated risk allocations.
The algorithms we provide allow for learning primals, optima for the dual representation and corresponding fair risk allocations.
arXiv Detail & Related papers (2023-02-02T22:16:49Z) - A deep learning method for solving stochastic optimal control problems driven by fully-coupled FBSDEs [1.0703175070560689]
We first transform the problem into a Stackelberg differential game (leader-follower) problem.
We work through two examples of the investment-consumption problem under different utility models.
The results of both examples demonstrate the effectiveness of our proposed algorithm.
arXiv Detail & Related papers (2022-04-12T13:31:19Z) - Learning the Markov Decision Process in the Sparse Gaussian Elimination [0.0]
We propose a learning-based approach for sparse Gaussian Elimination.
We propose Q-Learning algorithms for the main modules of the sparse solver.
arXiv Detail & Related papers (2021-09-30T08:56:39Z) - Numerical Solution of Stiff Ordinary Differential Equations with Random Projection Neural Networks [0.0]
We propose a numerical scheme based on Random Projection Neural Networks (RPNN) for the solution of Ordinary Differential Equations (ODEs).
We show that our proposed scheme yields good numerical approximation accuracy without being affected by the stiffness, thus outperforming in some cases the ode45 and ode15s functions.
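The random-projection idea can be sketched in a few lines: fix random hidden weights, and train only the linear output layer by least squares on the ODE residual at collocation points. The sketch below solves y' = -y, y(0) = 1 on [0, 1]; it is a generic RPNN/extreme-learning-machine illustration, not the authors' exact construction, and the feature counts and weight ranges are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ansatz y(t) = 1 + t * (phi(t) @ c): satisfies y(0) = 1 by construction,
# with fixed random tanh features phi and only the output weights c trained.
M, N = 100, 40                       # collocation points, random features
t = np.linspace(0.0, 1.0, M)[:, None]
w = rng.uniform(-5.0, 5.0, (1, N))   # frozen random hidden weights
b = rng.uniform(-5.0, 5.0, (1, N))   # frozen random hidden biases
phi = np.tanh(t @ w + b)             # (M, N) feature matrix
dphi = (1.0 - phi**2) * w            # d/dt tanh(w t + b)

# Residual y' + y = (phi + t*dphi + t*phi) @ c + 1: linear in c,
# so enforcing it at the collocation points is a least-squares problem.
A = phi + t * dphi + t * phi
c, *_ = np.linalg.lstsq(A, -np.ones(M), rcond=None)

def y(tt):
    """Evaluate the trained RPNN approximation of the ODE solution."""
    tt = np.atleast_1d(np.asarray(tt, dtype=float))[:, None]
    return 1.0 + tt[:, 0] * (np.tanh(tt @ w + b) @ c)
```

Because the hidden layer is frozen, "training" reduces to one linear solve, which is what makes these schemes cheap compared with fully trained networks.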
arXiv Detail & Related papers (2021-08-03T15:49:17Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth stochastic convex optimization have complexity bounds with an unfavourable dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
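The core device here, gradient clipping, is easy to sketch: scale each stochastic (sub)gradient to a bounded norm before the step, which tames heavy-tailed noise. The sketch below runs clipped subgradient descent on a non-smooth convex toy objective with Student-t noise; the 1/sqrt(t) stepsize and clip level are generic illustrative choices, not the stepsize rules proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(g, lam):
    """Scale the stochastic gradient g to Euclidean norm at most lam."""
    n = np.linalg.norm(g)
    return g if n <= lam else g * (lam / n)

# Minimise f(x) = ||x||_1 (non-smooth, convex) from noisy subgradients
# corrupted by heavy-tailed Student-t noise (df=2: infinite variance).
x = np.full(5, 10.0)
for t in range(1, 5001):
    g = np.sign(x) + rng.standard_t(df=2, size=x.shape)  # noisy subgradient
    x -= (1.0 / np.sqrt(t)) * clip(g, lam=1.0)           # clipped step
```

Without clipping, a single extreme noise draw can throw the iterate far from the optimum; with it, every step is bounded by the stepsize times the clip level.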
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
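The mirror descent machinery underlying such methods can be sketched in its most common instance: entropic mirror descent (exponentiated gradient) over the probability simplex, with the running average of iterates returned as the output. This is a textbook sketch of the general technique, not the parallel MDP variant of the paper; the objective, stepsize, and iteration count are assumed.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, steps, eta):
    """Entropic mirror descent over the probability simplex.
    Returns the running average of the iterates (the usual output
    for which convergence-rate guarantees are stated)."""
    x = x0.copy()
    avg = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        x = x * np.exp(-eta * g)   # mirror step in the entropy geometry
        x /= x.sum()               # Bregman projection back to the simplex
        avg += x
    return avg / steps

# Toy objective: f(x) = <c, x>, minimised at the vertex of the smallest cost.
c = np.array([0.8, 0.3, 0.5, 0.9])
x = mirror_descent_simplex(lambda x: c, np.full(4, 0.25), steps=500, eta=0.5)
```

Choosing the entropy as the mirror map keeps every iterate strictly inside the simplex and yields the dimension-friendly log(n) factor in the standard rate, which is why it is the default geometry for policy-style variables.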
arXiv Detail & Related papers (2021-02-27T19:28:39Z) - Deep Learning for Constrained Utility Maximisation [0.0]
This paper proposes two algorithms for solving control problems with deep learning.
The first algorithm solves Markovian problems via the Hamilton Jacobi Bellman equation.
The second uses the full power of the duality method to solve non-Markovian problems.
arXiv Detail & Related papers (2020-08-26T18:40:57Z) - Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.