Deep Operator BSDE: a Numerical Scheme to Approximate the Solution Operators
- URL: http://arxiv.org/abs/2412.03405v1
- Date: Wed, 04 Dec 2024 15:36:20 GMT
- Title: Deep Operator BSDE: a Numerical Scheme to Approximate the Solution Operators
- Authors: Giulia Di Nunno, Pere Díaz Lozano
- Abstract summary: We propose a numerical method to approximate the solution operator given by a Backward Stochastic Differential Equation (BSDE).
The main ingredients for this are the Wiener chaos decomposition and the classical Euler scheme for BSDEs.
We show convergence of this scheme under very mild assumptions, and provide a rate of convergence in more restrictive cases.
- Abstract: Motivated by dynamic risk measures and conditional $g$-expectations, in this work we propose a numerical method to approximate the solution operator given by a Backward Stochastic Differential Equation (BSDE). The main ingredients for this are the Wiener chaos decomposition and the classical Euler scheme for BSDEs. We show convergence of this scheme under very mild assumptions, and provide a rate of convergence in more restrictive cases. We then implement it using neural networks, and we present several numerical examples where we can check the accuracy of the method.
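To make the Euler-scheme ingredient concrete, here is a minimal deep-BSDE-style sketch in PyTorch: Y_0 and each Z_{t_k} are parametrized by small networks, the discretized BSDE is rolled forward along simulated Brownian paths, and the terminal condition is matched in L^2. The driver f, terminal condition g, and all sizes are illustrative assumptions, and the sketch omits the Wiener chaos decomposition the paper uses to approximate the full solution operator.
```python
# A minimal deep-BSDE-style Euler scheme (hypothetical setup, not the
# paper's exact operator construction): parametrize Y_0 and each Z_{t_k}
# with small networks and match the terminal condition in L^2.
import torch

torch.manual_seed(0)
d, N, dt, batch = 1, 20, 1.0 / 20, 512

def f(t, y, z):            # assumed driver of the BSDE (illustrative)
    return -0.05 * y       # e.g. linear discounting

def g(x):                  # assumed terminal condition g(W_T)
    return torch.sin(x).sum(dim=1, keepdim=True)

y0 = torch.nn.Parameter(torch.zeros(1))
z_nets = torch.nn.ModuleList(
    torch.nn.Sequential(torch.nn.Linear(d, 16), torch.nn.Tanh(),
                        torch.nn.Linear(16, d))
    for _ in range(N))
opt = torch.optim.Adam([y0, *z_nets.parameters()], lr=1e-2)

for step in range(500):
    x = torch.zeros(batch, d)                 # forward process: Brownian motion
    y = y0.expand(batch, 1)
    for k in range(N):
        t = k * dt
        z = z_nets[k](x)                      # Z_{t_k} as a function of the state
        dw = torch.randn(batch, d) * dt ** 0.5
        y = y - f(t, y, z) * dt + (z * dw).sum(dim=1, keepdim=True)
        x = x + dw
    loss = ((y - g(x)) ** 2).mean()           # enforce Y_T ≈ g(W_T)
    opt.zero_grad(); loss.backward(); opt.step()

print(f"estimated Y_0 ≈ {y0.item():.4f}")
```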
Related papers
- Deep Operator Networks for Bayesian Parameter Estimation in PDEs [0.0]
We present a novel framework combining Deep Operator Networks (DeepONets) with Physics-Informed Neural Networks (PINNs) to solve partial differential equations (PDEs).
By integrating data-driven learning with physical constraints, our method achieves robust and accurate solutions across diverse scenarios.
arXiv Detail & Related papers (2025-01-18T07:41:05Z)
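As a rough illustration of the DeepONet component in the entry above, the sketch below builds the standard branch/trunk architecture whose output is an inner product; the sensor count, layer widths, and the autograd hook for a PINN-style PDE residual are assumptions, not the authors' exact setup.
```python
# Minimal DeepONet-style operator net (illustrative sizes): the branch net
# encodes the input function sampled at m sensors, the trunk net encodes the
# query location, and the output is their inner product.
import torch

m, p = 50, 32  # number of sensors, latent width (assumed)

branch = torch.nn.Sequential(torch.nn.Linear(m, 64), torch.nn.Tanh(),
                             torch.nn.Linear(64, p))
trunk = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, p))

def deeponet(u_sensors, y_query):
    # u_sensors: (batch, m) input function values, y_query: (batch, 1) locations
    return (branch(u_sensors) * trunk(y_query)).sum(dim=1, keepdim=True)

# In a PINN-style variant, autograd derivatives of `deeponet` w.r.t.
# y_query would supply the PDE residual added to the data-misfit loss.
u = torch.rand(8, m)
y = torch.rand(8, 1, requires_grad=True)
out = deeponet(u, y)
(dout_dy,) = torch.autograd.grad(out.sum(), y, create_graph=True)
print(out.shape, dout_dy.shape)
```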
- Closure Discovery for Coarse-Grained Partial Differential Equations Using Grid-based Reinforcement Learning [2.9611509639584304]
We propose a systematic approach for identifying closures in under-resolved PDEs using grid-based Reinforcement Learning.
We demonstrate the capabilities and limitations of our framework through numerical solutions of the advection equation and the Burgers' equation.
arXiv Detail & Related papers (2024-02-01T19:41:04Z)
- Implementation and (Inverse Modified) Error Analysis for implicitly-templated ODE-nets [0.0]
We focus on learning unknown dynamics from data using ODE-nets templated on implicit numerical initial value problem solvers.
We perform Inverse Modified error analysis of the ODE-nets using unrolled implicit schemes for ease of interpretation.
We formulate an adaptive algorithm which monitors the level of error and adapts the number of (unrolled) implicit solution iterations.
arXiv Detail & Related papers (2023-03-31T06:47:02Z)
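A minimal sketch of the idea in the entry above, under assumed sizes and tolerances: an ODE-net templated on implicit Euler, where the implicit equation is solved by unrolled fixed-point iterations and the residual is monitored to adapt the number of iterations.
```python
# ODE-net templated on implicit Euler, unrolled (illustrative): solve
# x_{n+1} = x_n + h * f_theta(x_{n+1}) by fixed-point iteration, and adapt
# the number of unrolled iterations by monitoring the residual.
import torch

f_theta = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                              torch.nn.Linear(32, 2))

def implicit_euler_step(x, h=0.1, max_iters=10, tol=1e-5):
    x_next = x                                    # initial guess
    for k in range(max_iters):
        x_new = x + h * f_theta(x_next)           # fixed-point map
        residual = (x_new - x_next).norm()
        x_next = x_new
        if residual < tol:                        # adapt: stop unrolling early
            break
    return x_next, k + 1

x = torch.randn(16, 2)
x1, iters = implicit_euler_step(x)
print(f"used {iters} unrolled iterations")
```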
- Out-of-distributional risk bounds for neural operators with applications to the Helmholtz equation [6.296104145657063]
Existing neural operators (NOs) do not necessarily perform well for all physics problems.
We propose a subfamily of NOs enabling an enhanced empirical approximation of the nonlinear operator mapping wave speed to solution.
Our experiments reveal certain surprises in the generalization and the relevance of introducing depth.
We conclude by proposing a hypernetwork version of the subfamily of NOs as a surrogate model for the mentioned forward operator.
arXiv Detail & Related papers (2023-01-27T03:02:12Z)
- Online Multi-Agent Decentralized Byzantine-robust Gradient Estimation [62.997667081978825]
Our algorithm is based on simultaneous perturbation, secure state estimation and two-timescale approximations.
We also show the performance of our algorithm through numerical experiments.
arXiv Detail & Related papers (2022-09-30T07:29:49Z)
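One plausible reading of the ingredients named above, sketched under strong simplifications: each agent forms a simultaneous-perturbation (SPSA) gradient estimate from two function evaluations, and a coordinate-wise median makes the aggregate robust to Byzantine agents. The quadratic objective, perturbation size, and median aggregation are assumptions; the paper's secure state estimation and two-timescale structure are not reproduced here.
```python
# Sketch: SPSA gradient estimates per agent, aggregated with a
# coordinate-wise median for Byzantine robustness (illustrative).
import numpy as np

rng = np.random.default_rng(0)
d, n_agents, n_byzantine = 5, 10, 2

def loss(theta):                      # assumed common objective
    return 0.5 * np.sum(theta ** 2)

def spsa_gradient(theta, c=0.1):
    delta = rng.choice([-1.0, 1.0], size=d)     # Rademacher perturbation
    return (loss(theta + c * delta) - loss(theta - c * delta)) / (2 * c) * delta

theta = rng.normal(size=d)
for t in range(200):
    grads = [spsa_gradient(theta) for _ in range(n_agents)]
    for i in range(n_byzantine):                # corrupt some estimates
        grads[i] = rng.normal(scale=100.0, size=d)
    robust_grad = np.median(np.stack(grads), axis=0)
    theta -= 0.1 * robust_grad

print("final ||theta|| =", np.linalg.norm(theta))
```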
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
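A sketch of the reduced-order pattern described above, with placeholder data: a POD basis is extracted from solution snapshots by SVD, and a small network maps the PDE parameter to the reduced coefficients. The snapshot generator, rank, and network sizes are all illustrative assumptions.
```python
# Sketch: POD basis from snapshots via SVD, with a small network mapping
# PDE parameters to reduced coefficients (all sizes illustrative).
import numpy as np
import torch

rng = np.random.default_rng(0)
n_x, n_snap, r = 200, 50, 8          # grid size, snapshots, POD rank (assumed)
x = np.linspace(0, 1, n_x)
params = rng.uniform(1.0, 5.0, size=n_snap)
snapshots = np.stack([np.sin(mu * np.pi * x) for mu in params], axis=1)

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
pod_basis = torch.tensor(U[:, :r], dtype=torch.float32)   # (n_x, r)

coeff_net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                                torch.nn.Linear(32, r))
opt = torch.optim.Adam(coeff_net.parameters(), lr=1e-2)
mu_t = torch.tensor(params, dtype=torch.float32).unsqueeze(1)
snaps_t = torch.tensor(snapshots.T, dtype=torch.float32)  # (n_snap, n_x)

for step in range(1000):
    recon = coeff_net(mu_t) @ pod_basis.T                 # reduced-order solution
    loss = ((recon - snaps_t) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("final reconstruction MSE:", loss.item())
```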
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance-reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
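A sketch of the combination above: a Frank-Wolfe (conditional gradient) iteration whose direction comes from a SAG-style estimator that refreshes one sample's gradient per iteration. The least-squares objective and l1-ball constraint are assumptions; the paper's non-smooth composite term and step-size analysis are not reproduced.
```python
# Sketch: Frank-Wolfe (conditional gradient) over an l1-ball with a
# SAG-style estimator that refreshes one sample's gradient per iteration.
import numpy as np

rng = np.random.default_rng(0)
n, d, radius = 100, 20, 1.0
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

x = np.zeros(d)
grad_table = np.zeros((n, d))            # stored per-sample gradients
grad_avg = np.zeros(d)

for t in range(1, 2001):
    i = rng.integers(n)                  # one sample per iteration
    g_i = A[i] * (A[i] @ x - b[i])       # gradient of 0.5*(a_i x - b_i)^2
    grad_avg += (g_i - grad_table[i]) / n
    grad_table[i] = g_i
    # linear minimization oracle over the l1-ball of given radius
    j = np.argmax(np.abs(grad_avg))
    v = np.zeros(d); v[j] = -radius * np.sign(grad_avg[j])
    gamma = 2.0 / (t + 2)                # classical FW step size
    x = (1 - gamma) * x + gamma * v

print("objective:", 0.5 * np.mean((A @ x - b) ** 2))
```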
- An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximating the solution of the filtering equations is to use a PDE-inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z)
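A grid-based sketch of the splitting-up idea above for a 1D toy model: a prediction step (an explicit finite-difference step of the Fokker-Planck equation) alternates with a correction step (likelihood reweighting at observation times). The paper replaces the grid density with a neural network representation; all model coefficients and the discretization below are assumptions.
```python
# Grid-based sketch of the splitting-up method for 1D filtering: alternate a
# Fokker-Planck prediction step with a Bayes correction step.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 201); dx = x[1] - x[0]
dt, sigma, obs_noise = 1e-3, 1.0, 0.5      # assumed model parameters
p = np.exp(-x ** 2); p /= p.sum() * dx     # initial density on the grid

signal = 0.0
for step in range(1000):
    signal += sigma * rng.normal() * np.sqrt(dt)       # latent signal: BM
    # prediction: explicit FD step of p_t = 0.5*sigma^2 * p_xx
    lap = (np.roll(p, 1) - 2 * p + np.roll(p, -1)) / dx ** 2
    p = p + 0.5 * sigma ** 2 * dt * lap
    if step % 100 == 0:                                # correction at obs times
        y = signal + obs_noise * rng.normal()
        p *= np.exp(-0.5 * ((x - y) / obs_noise) ** 2) # likelihood reweighting
        p /= p.sum() * dx                              # renormalise

print("posterior mean:", (p * x).sum() * dx, "signal:", signal)
```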
- Risk and optimal policies in bandit experiments [0.0]
This paper provides a decision-theoretic analysis of bandit experiments.
The bandit setting corresponds to a dynamic programming problem, but solving this directly is typically infeasible.
For normally distributed rewards, the minimal Bayes risk can be characterized as the solution to a nonlinear second-order partial differential equation.
arXiv Detail & Related papers (2021-12-13T00:41:19Z)
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z)
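A sketch of one common iterative alternating scheme (IAS) of the kind described two entries above, under assumed hyperparameters: a weighted least-squares update for the unknown alternates with a closed-form update for the per-component variances implied by the gamma hyperprior. The forward map, noise level, and hyperparameters are placeholders.
```python
# Sketch of an iterative alternating scheme (IAS) for a hierarchical
# inverse problem with gamma hyperpriors (hyperparameters illustrative):
# alternate a weighted least-squares x-update with a closed-form theta-update.
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 40, 80, 0.01
A = rng.normal(size=(n, d))
x_true = np.zeros(d); x_true[[5, 30, 60]] = [1.0, -2.0, 1.5]   # sparse truth
b = A @ x_true + sigma * rng.normal(size=n)

beta, theta0 = 1.6, 1e-4          # gamma hyperprior parameters (assumed)
eta = beta - 1.5
theta = np.full(d, theta0)

for it in range(30):
    # x-update: minimize ||Ax-b||^2/(2 sigma^2) + sum_i x_i^2/(2 theta_i)
    H = A.T @ A / sigma ** 2 + np.diag(1.0 / theta)
    x = np.linalg.solve(H, A.T @ b / sigma ** 2)
    # theta-update: closed-form minimizer under the gamma hyperprior
    theta = 0.5 * theta0 * (eta + np.sqrt(eta ** 2 + 2 * x ** 2 / theta0))

print("recovered support:", np.where(np.abs(x) > 0.1)[0])
```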
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.