Optimal Stopping via Randomized Neural Networks
- URL: http://arxiv.org/abs/2104.13669v4
- Date: Fri, 1 Dec 2023 11:10:59 GMT
- Title: Optimal Stopping via Randomized Neural Networks
- Authors: Calypso Herrera, Florian Krach, Pierre Ruyssen, Josef Teichmann
- Abstract summary: This paper presents the benefits of using randomized neural networks instead of standard basis functions or deep neural networks to approximate the solutions of optimal stopping problems.
Our approaches are applicable to high dimensional problems where the existing approaches become increasingly impractical.
In all cases, our algorithms outperform the state-of-the-art and other relevant machine learning approaches in terms of computation time while achieving comparable results.
- Score: 6.677219861416146
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper presents the benefits of using randomized neural networks instead
of standard basis functions or deep neural networks to approximate the
solutions of optimal stopping problems. The key idea is to use neural networks,
where the parameters of the hidden layers are generated randomly and only the
last layer is trained, in order to approximate the continuation value. Our
approaches are applicable to high dimensional problems where the existing
approaches become increasingly impractical. In addition, since our approaches
can be optimized using simple linear regression, they are easy to implement and
theoretical guarantees can be provided. We test our approaches for American
option pricing on Black--Scholes, Heston and rough Heston models and for
optimally stopping a fractional Brownian motion. In all cases, our algorithms
outperform the state-of-the-art and other relevant machine learning approaches
in terms of computation time while achieving comparable results. Moreover, we
show that they can also be used to efficiently compute Greeks of American
options.
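The key mechanism (fix randomly drawn hidden-layer weights, train only the linear readout by least squares, and use the resulting network as the continuation-value regressor in a Longstaff-Schwartz-style backward induction) can be sketched in a few lines. The sketch below is an illustrative toy for a Bermudan put under Black-Scholes, not the authors' implementation; all model, payoff, and network parameters are assumptions made for the example.

```python
# Minimal sketch (not the authors' code): a Bermudan put under Black-Scholes priced
# by Longstaff-Schwartz backward induction, where the continuation value is regressed
# on randomized neural network features: the hidden layer is drawn once at random and
# never trained, and only the linear readout is fit by least squares.
import numpy as np

rng = np.random.default_rng(0)

# Model and option parameters (illustrative values only)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths, n_hidden = 50, 100_000, 100
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion paths under the risk-neutral measure
Z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])       # prepend t = 0

payoff = lambda s: np.maximum(K - s, 0.0)           # put payoff

# Randomized network: input weights and biases are random and frozen
A = rng.standard_normal((1, n_hidden))
b = rng.standard_normal(n_hidden)
def features(s):
    x = (s / S0 - 1.0).reshape(-1, 1)               # crude input normalization
    return np.tanh(x @ A + b)                       # random hidden layer

# Backward induction: only the linear readout is refit at each exercise date
cashflow = payoff(S[:, -1])
for t in range(n_steps - 1, 0, -1):
    cashflow *= disc                                 # discount one step back to time t
    itm = payoff(S[:, t]) > 0                        # regress on in-the-money paths only
    Phi = features(S[itm, t])
    beta, *_ = np.linalg.lstsq(Phi, cashflow[itm], rcond=None)
    continuation = Phi @ beta                        # approximate continuation value
    exercise = payoff(S[itm, t]) > continuation
    idx = np.where(itm)[0][exercise]
    cashflow[idx] = payoff(S[idx, t])                # exercise where payoff beats continuation

price = disc * cashflow.mean()                       # discount the last step to t = 0
print(f"Bermudan put price estimate: {price:.3f}")
```

In practice one would typically also include a constant (bias) column in the readout regression and average results over a few random draws of the hidden weights.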
Related papers
- The Unreasonable Effectiveness of Solving Inverse Problems with Neural Networks [24.766470360665647]
We show that neural networks trained to learn solutions to inverse problems can find better solutions than classical methods, even on their training set.
Our findings suggest an alternative use for neural networks: rather than generalizing to new data for fast inference, they can also be used to find better solutions on known data.
arXiv Detail & Related papers (2024-08-15T12:38:10Z)
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
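The classic Sinkhorn-Knopp iteration that such a layer builds on can be sketched as follows; the paper's actual contribution, a differentiable layer that jointly encodes multiple sets of marginal distributions, extends this and is not reproduced here.

```python
# Rough sketch of the classic Sinkhorn-Knopp iteration (single doubly stochastic
# projection); LinSATNet's layer extends this idea, and that extension is not shown.
import numpy as np

def sinkhorn(scores, n_iters=50):
    """Push a score matrix towards a doubly stochastic matrix by alternating
    row and column normalizations. Each step is differentiable, which is what
    makes Sinkhorn-style layers usable inside neural networks."""
    P = np.exp(scores)                          # ensure strictly positive entries
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)    # normalize rows
        P = P / P.sum(axis=0, keepdims=True)    # normalize columns
    return P

P = sinkhorn(np.random.default_rng(0).standard_normal((4, 4)))
print(P.sum(axis=0))   # column sums are exactly 1 after the last step
print(P.sum(axis=1))   # row sums converge to 1 as n_iters grows
```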
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- Improving Generalization of Deep Neural Networks by Optimum Shifting [33.092571599896814]
We propose a novel method called optimum shifting, which changes the parameters of a neural network from a sharp minimum to a flatter one.
Our method is based on the observation that when the input and output of a neural network are fixed, the matrix multiplications within the network can be treated as systems of under-determined linear equations.
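The observation above can be made concrete for a single linear layer: with fewer fixed inputs than input dimensions, the equation X W = Y is under-determined, so the weights can be moved within its solution set without changing the layer's outputs. The sketch below shifts towards the minimum-norm solution purely as an example; the paper's criterion for choosing the new parameters (flatness of the minimum) is not reproduced.

```python
# Illustrative sketch only: with input X and output Y of a linear layer held fixed,
# X @ W = Y is an under-determined linear system when n < d_in, so W can be shifted
# within the solution set without changing the outputs. How the paper selects the
# shifted parameters (towards a flatter minimum) is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 32, 128, 10                     # n < d_in: under-determined system
X = rng.standard_normal((n, d_in))               # fixed layer inputs
W = rng.standard_normal((d_in, d_out))           # current layer weights
Y = X @ W                                        # fixed layer outputs

W_min = np.linalg.pinv(X) @ Y                    # minimum-norm solution of X @ W = Y
alpha = 0.5
W_shifted = (1 - alpha) * W + alpha * W_min      # any convex combination still solves X @ W = Y

print(np.linalg.norm(X @ W_shifted - Y))         # ~0: outputs are unchanged
print(np.linalg.norm(W), np.linalg.norm(W_shifted))  # the shifted weights have smaller norm
```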
arXiv Detail & Related papers (2024-05-23T02:31:55Z)
- The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models are used for a wide range of purposes.
In this paper we examine the use of convex neural recovery models.
We show that the stationary points of the non-convex training objective can be characterized as the global optima of subsampled convex programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Sample-Then-Optimize Batch Neural Thompson Sampling [50.800944138278474]
We introduce two algorithms for black-box optimization based on the Thompson sampling (TS) policy.
To choose an input query, we only need to train an NN and then choose the query by maximizing the trained NN.
Our algorithms sidestep the need to invert the large parameter matrix yet still preserve the validity of the TS policy.
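A loose, one-dimensional illustration of the "train an NN, then maximize it" query-selection step is sketched below; the actual STO-BNTS algorithms, their batch versions, and their guarantees are not reproduced, and the toy objective and hyperparameters are assumptions.

```python
# Loose sketch of the query-selection step described above: fit a randomly
# initialized network to the observations seen so far, then pick the next query
# by maximizing the trained network over a candidate set. The toy objective and
# hyperparameters are assumptions, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
objective = lambda x: np.sin(3 * x) + 0.1 * rng.standard_normal(x.shape)   # noisy black box

X_obs = rng.uniform(0, 3, size=(10, 1))           # queries evaluated so far
y_obs = objective(X_obs).ravel()

# The randomly initialized network fit on the data plays the role of a posterior sample.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=1)
net.fit(X_obs, y_obs)

candidates = np.linspace(0, 3, 301).reshape(-1, 1)
next_query = candidates[np.argmax(net.predict(candidates))]   # maximize the trained NN
print("next query:", next_query)
```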
arXiv Detail & Related papers (2022-10-13T09:01:58Z)
- Neural Network Pruning Through Constrained Reinforcement Learning [3.2880869992413246]
We propose a general methodology for pruning neural networks.
Our proposed methodology can prune neural networks to respect pre-defined computational budgets.
We demonstrate the effectiveness of our approach via comparison with state-of-the-art methods on standard image classification datasets.
arXiv Detail & Related papers (2021-10-16T11:57:38Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- Byzantine-Resilient Non-Convex Stochastic Gradient Descent [61.6382287971982]
We study adversary-resilient distributed optimization, in which machines can independently compute stochastic gradients and cooperate to jointly optimize.
Our algorithm is based on a new concentration technique, and we bound its sample complexity.
It is very practical: it improves upon the performance of all prior methods when no Byzantine machines are present.
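To make the setting concrete, the sketch below has honest workers and adversarial workers reporting gradients, aggregated with a simple coordinate-wise median as a stand-in robust rule; the paper's concentration-based technique is different and is not reproduced here.

```python
# Illustration of the setting only: a minority of machines report corrupted gradients,
# and a robust aggregation rule (here, a coordinate-wise median used as a stand-in)
# limits their influence. The paper's concentration technique is not shown.
import numpy as np

rng = np.random.default_rng(0)
n_machines, dim, n_byzantine = 20, 5, 4

true_grad = rng.standard_normal(dim)
grads = true_grad + 0.1 * rng.standard_normal((n_machines, dim))        # honest stochastic gradients
grads[:n_byzantine] = 100.0 * rng.standard_normal((n_byzantine, dim))   # adversarial reports

naive = grads.mean(axis=0)              # averaging is badly corrupted by the adversary
robust = np.median(grads, axis=0)       # the median resists a minority of outliers

print("error of mean:  ", np.linalg.norm(naive - true_grad))
print("error of median:", np.linalg.norm(robust - true_grad))
```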
arXiv Detail & Related papers (2020-12-28T17:19:32Z)
- Fast semidefinite programming with feedforward neural networks [0.0]
We propose to solve feasibility semidefinite programs using artificial neural networks.
We train the network without having to exactly solve the semidefinite program even once.
We demonstrate that the trained neural network gives decent accuracy, while showing orders of magnitude increase in speed compared to a traditional solver.
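As a toy illustration only (and unlike the paper, which avoids solving the semidefinite program during training), the sketch below trains a small neural classifier to predict positive semidefiniteness of random symmetric matrices from their entries, with synthetic labels computed via an eigenvalue check.

```python
# Toy sketch, not the paper's method: learn to predict whether a symmetric matrix is
# positive semidefinite (a minimal stand-in for SDP feasibility) from its entries.
# Unlike the paper, the synthetic labels here are computed with an eigenvalue check.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, d = 2000, 4

def random_symmetric(psd):
    """Random symmetric matrix; eigenvalues are nonnegative when psd=True."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    lam = rng.uniform(0, 1, d) if psd else rng.uniform(-1, 1, d)
    return Q @ np.diag(lam) @ Q.T

mats = [random_symmetric(psd=(i % 2 == 0)) for i in range(n)]
X = np.array([m[np.triu_indices(d)] for m in mats])                         # upper-triangular entries
y = np.array([np.linalg.eigvalsh(m).min() >= 0 for m in mats], dtype=int)   # feasibility label

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=1)
clf.fit(X[:1500], y[:1500])
print("held-out accuracy:", clf.score(X[1500:], y[1500:]))
```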
arXiv Detail & Related papers (2020-11-11T14:01:34Z)