Pricing options on flow forwards by neural networks in Hilbert space
- URL: http://arxiv.org/abs/2202.11606v1
- Date: Thu, 17 Feb 2022 18:03:51 GMT
- Title: Pricing options on flow forwards by neural networks in Hilbert space
- Authors: Fred Espen Benth, Nils Detering, Luca Galimberti
- Abstract summary: We recast the pricing problem as an optimization problem in a Hilbert space of real-valued functions on the positive real line.
This optimization problem is solved using a novel feedforward neural network architecture.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new methodology for pricing options on flow forwards by applying
infinite-dimensional neural networks. We recast the pricing problem as an
optimization problem in a Hilbert space of real-valued functions on the positive
real line, which is the state space for the term structure dynamics. This
optimization problem is solved by means of a novel feedforward neural
network architecture designed for approximating continuous functions on the
state space. The proposed neural net is built upon the basis of the Hilbert
space. We provide an extensive case study that shows excellent numerical
efficiency, with superior performance over that of a classical neural net
trained on sampling the term structure curves.
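The abstract describes a network that acts on elements of a Hilbert space of functions on the positive half-line by building on a basis of that space. A minimal sketch of that idea follows: a term-structure curve is projected onto its first few basis coefficients, and a small feedforward net maps the coefficient vector to a scalar price. The weighted Laguerre basis, the network sizes, and all variable names here are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

# Illustrative sketch (not the paper's exact architecture): represent a
# curve f on the positive real line by its first K coefficients in a
# weighted Laguerre basis, then apply a feedforward net to those
# coefficients.

def laguerre_basis(x, K):
    """Evaluate the first K weighted Laguerre functions at points x."""
    out = np.zeros((len(x), K))
    for k in range(K):
        c = np.zeros(k + 1)
        c[k] = 1.0
        out[:, k] = lagval(x, c) * np.exp(-x / 2.0)
    return out

def project_curve(f_vals, x, K):
    """Least-squares projection of sampled curve values onto the basis."""
    B = laguerre_basis(x, K)
    coeffs, *_ = np.linalg.lstsq(B, f_vals, rcond=None)
    return coeffs

def forward(coeffs, params):
    """Two-layer feedforward net on the coefficient vector."""
    W1, b1, W2, b2 = params
    h = np.maximum(W1 @ coeffs + b1, 0.0)   # ReLU hidden layer
    return W2 @ h + b2                      # scalar output ("price")

rng = np.random.default_rng(0)
K, H = 8, 16
params = (rng.normal(size=(H, K)) * 0.1, np.zeros(H),
          rng.normal(size=(1, H)) * 0.1, np.zeros(1))

x = np.linspace(0.01, 10.0, 200)      # maturities on the positive half-line
f_vals = np.exp(-0.3 * x) + 0.02 * x  # a toy forward curve
coeffs = project_curve(f_vals, x, K)
price = forward(coeffs, params)
```

In this setup the network input has a fixed, finite dimension K regardless of how finely the curve is sampled, which is the practical payoff of working with basis coefficients rather than raw curve samples.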
Related papers
- Posterior Contraction for Sparse Neural Networks in Besov Spaces with Intrinsic Dimensionality [8.411295657303324]
This work establishes that sparse Bayesian neural networks achieve optimal posterior contraction rates over anisotropic Besov spaces and their hierarchical compositions. We show that these priors enable rate adaptation, allowing the posterior to contract at the optimal rate even when the smoothness level of the true function is unknown.
arXiv Detail & Related papers (2025-06-23T21:29:40Z) - Understanding Inverse Reinforcement Learning under Overparameterization: Non-Asymptotic Analysis and Global Optimality [52.906438147288256]
We show that our algorithm can identify the globally optimal reward and policy under certain neural network structures.
This is the first IRL algorithm with a non-asymptotic convergence guarantee that provably achieves global optimality.
arXiv Detail & Related papers (2025-03-22T21:16:08Z) - A Subsampling Based Neural Network for Spatial Data [0.0]
This article proposes a consistent localized two-layer deep neural network-based regression for spatial data.
We empirically observe the rate of convergence of discrepancy measures between the empirical probability distribution of observed and predicted data, which will become faster for a less smooth spatial surface.
This application is an effective showcase of non-linear spatial regression.
arXiv Detail & Related papers (2024-11-06T02:37:43Z) - Improving Generalization of Deep Neural Networks by Optimum Shifting [33.092571599896814]
We propose a novel method called optimum shifting, which changes the parameters of a neural network from a sharp minimum to a flatter one.
Our method is based on the observation that when the input and output of a neural network are fixed, the matrix multiplications within the network can be treated as systems of under-determined linear equations.
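The under-determined-system observation in the summary above can be illustrated with a toy example (all names and sizes here are illustrative, not taken from the paper):

```python
import numpy as np

# Toy illustration: with a layer's input X and output Y fixed, finding
# weights W with W @ X = Y is an under-determined linear system whenever
# the layer has more parameters than constraints, so many different
# weight settings produce identical layer outputs.

rng = np.random.default_rng(1)
n_in, n_out, n_samples = 10, 3, 4   # more unknowns than equations
X = rng.normal(size=(n_in, n_samples))
W_true = rng.normal(size=(n_out, n_in))
Y = W_true @ X                       # fixed input/output pair

# One particular solution: the minimum-norm weights from least squares.
# Solve W @ X = Y by rewriting it as X.T @ W.T = Y.T.
W_min, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)
W_min = W_min.T

assert np.allclose(W_min @ X, Y)     # same layer output as W_true
assert np.linalg.norm(W_min) <= np.linalg.norm(W_true) + 1e-9
```

Because many weight matrices reproduce the same input/output pair, one is free to pick among them by a secondary criterion, which is the kind of freedom the summary's "shifting" exploits.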
arXiv Detail & Related papers (2024-05-23T02:31:55Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - Globally Optimal Training of Neural Networks with Threshold Activation
Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - Non-Gradient Manifold Neural Network [79.44066256794187]
Deep neural networks (DNNs) generally take thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z) - What Kinds of Functions do Deep Neural Networks Learn? Insights from
Variational Spline Theory [19.216784367141972]
We develop a variational framework to understand the properties of functions learned by deep neural networks with ReLU activation functions fit to data.
We derive a representer theorem showing that deep ReLU networks are solutions to regularized data fitting problems in this function space.
arXiv Detail & Related papers (2021-05-07T16:18:22Z) - Quantum Optimization for Training Quantum Neural Networks [16.780058676633914]
We devise a framework for leveraging quantum optimisation algorithms to find optimal parameters of QNNs for certain tasks.
We coherently encode the cost function of QNNs onto relative phases of a superposition state in the Hilbert space of the network parameters.
arXiv Detail & Related papers (2021-03-31T13:06:30Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural
Networks: an Exact Characterization of the Optimal Solutions [51.60996023961886]
We prove that finding all globally optimal two-layer ReLU neural networks can be performed by solving a convex optimization program with cone constraints.
Our analysis is novel, characterizes all optimal solutions, and does not leverage duality-based analysis which was recently used to lift neural network training into convex spaces.
arXiv Detail & Related papers (2020-06-10T15:38:30Z) - Local Propagation in Constraint-based Neural Network [77.37829055999238]
We study a constraint-based representation of neural network architectures.
We investigate a simple optimization procedure that is well suited to fulfil the so-called architectural constraints.
arXiv Detail & Related papers (2020-02-18T16:47:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.