A deep solver for backward stochastic Volterra integral equations
- URL: http://arxiv.org/abs/2505.18297v2
- Date: Wed, 02 Jul 2025 07:12:03 GMT
- Title: A deep solver for backward stochastic Volterra integral equations
- Authors: Kristoffer Andersson, Alessandro Gnoatto, Camilo Andrés García Trillos,
- Abstract summary: We present the first deep-learning solver for backward stochastic Volterra integral equations (BSVIEs). The method trains a neural network to approximate the two solution fields in a single stage. These results open practical access to a family of high-dimensional, path-dependent problems in stochastic control and quantitative finance.
- Score: 44.99833362998488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present the first deep-learning solver for backward stochastic Volterra integral equations (BSVIEs) and their fully-coupled forward-backward variants. The method trains a neural network to approximate the two solution fields in a single stage, avoiding the use of nested time-stepping cycles that limit classical algorithms. For the decoupled case we prove a non-asymptotic error bound composed of an a posteriori residual plus the familiar square root dependence on the time step. Numerical experiments confirm this rate and reveal two key properties: \emph{scalability}, in the sense that accuracy remains stable from low dimension up to 500 spatial variables while GPU batching keeps wall-clock time nearly constant; and \emph{generality}, since the same method handles coupled systems whose forward dynamics depend on the backward solution. These results open practical access to a family of high-dimensional, path-dependent problems in stochastic control and quantitative finance.
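The single-stage idea described in the abstract can be illustrated as follows: discretize the BSVIE on an Euler time grid, parametrize the two solution fields Y and Z, and minimize the mean squared residual of the discretized equation over all outer times at once, with no nested time-stepping cycles. The sketch below is a minimal NumPy illustration under toy assumptions, not the authors' implementation: the forward dynamics, the driver `f`, and the untrained random-feature "networks" are all hypothetical stand-ins, and the dependence of Z(t, s) on the outer time t is dropped for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy decoupled BSVIE on [0, T], Euler grid:
# Y(t) = g(X_T) + int_t^T f(t, s, X_s, Y(s)) ds - int_t^T Z(t, s) dW_s
T, N, M, d = 1.0, 20, 256, 2          # horizon, time steps, paths, state dim
dt = T / N
t = np.linspace(0.0, T, N + 1)

# Forward diffusion dX = sigma dW (hypothetical dynamics for illustration).
sigma = 0.4
dW = rng.normal(0.0, np.sqrt(dt), size=(M, N, d))
X = np.zeros((M, N + 1, d))
X[:, 1:] = sigma * np.cumsum(dW, axis=1)

def g(x):                              # terminal condition g(X_T)
    return np.sum(x**2, axis=-1)

def f(ti, s, x, y):                    # two-time driver f(t, s, X_s, Y_s)
    return -0.5 * y + 0.1 * np.sum(x, axis=-1) * (s - ti)

# Stand-in "networks": fixed random-feature maps for the two solution
# fields; a real solver would train these parameters jointly by SGD.
W_feat = rng.normal(size=(d, 16))

def feats(x):
    return np.tanh(x @ W_feat)

theta_y = rng.normal(scale=0.1, size=16)       # parameters of the Y field
theta_z = rng.normal(scale=0.1, size=(16, d))  # parameters of the Z field

def bsvie_residual_loss(theta_y, theta_z):
    """One-stage loss: mean squared residual of the discretized BSVIE,
    averaged over all outer times t_i -- no nested solver loop."""
    Y = feats(X.reshape(-1, d)) @ theta_y
    Y = Y.reshape(M, N + 1)
    loss = 0.0
    for i in range(N):
        rhs = g(X[:, -1])                      # right-hand side seen from t_i
        for j in range(i, N):
            Zij = feats(X[:, j]) @ theta_z     # Z at (t_i, t_j), shape (M, d)
            rhs = rhs + f(t[i], t[j], X[:, j], Y[:, j]) * dt
            rhs = rhs - np.sum(Zij * dW[:, j], axis=-1)
        loss += np.mean((Y[:, i] - rhs) ** 2)
    return loss / N

loss = bsvie_residual_loss(theta_y, theta_z)
```

Minimizing this loss over the network parameters (here left untrained) is what "approximating the two solution fields in a single stage" amounts to in this toy discretization; the paper's a posteriori residual bound is a statement about exactly this kind of residual.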
Related papers
- Physics-Informed Time-Integrated DeepONet: Temporal Tangent Space Operator Learning for High-Accuracy Inference [0.0]
We introduce a dual-dimensional architecture trained via fully physics-informed or hybrid physics- and data-driven objectives. Instead of forecasting future states, the network learns the time-derivative operator from the current state, integrating it using classical time-stepping schemes. Applied to benchmark problems, PITI-DeepONet shows improved accuracy over extended time horizons compared to traditional methods.
arXiv Detail & Related papers (2025-08-07T09:25:52Z)
- Harmonic Path Integral Diffusion [0.4527270266697462]
We present a novel approach for sampling from a continuous multivariate probability distribution, which may either be explicitly known (up to a normalization factor) or represented via empirical samples.
Our method constructs a time-dependent bridge from a delta function centered at the origin of the state space at $t=0$, transforming it into the target distribution at $t=1$.
We contrast these algorithms with other sampling methods, particularly simulated and path integral sampling, highlighting their advantages in terms of analytical control, accuracy, and computational efficiency.
arXiv Detail & Related papers (2024-09-23T16:20:21Z)
- A High Order Solver for Signature Kernels [5.899263357689845]
Signature kernels are at the core of several machine learning algorithms for analysing time series.
We introduce new algorithms for the numerical approximation of signature kernels.
arXiv Detail & Related papers (2024-04-01T23:09:52Z)
- Drift Identification for Lévy alpha-Stable Stochastic Systems [2.28438857884398]
Given time series observations of a stochastic differential equation, the goal is to estimate the SDE's drift field.
For $\alpha$ in the interval $[1,2)$, the noise is heavy-tailed.
We propose an approach that centers on computing time-dependent characteristic functions.
arXiv Detail & Related papers (2022-12-06T20:40:27Z)
- Semi-supervised Learning of Partial Differential Operators and Dynamical Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves learning accuracy at the supervision time point and is able to interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z)
- On optimization of coherent and incoherent controls for two-level quantum systems [77.34726150561087]
This article considers some control problems for closed and open two-level quantum systems.
The closed system's dynamics is governed by the Schrödinger equation with coherent control.
The open system's dynamics is governed by the Gorini-Kossakowski-Sudarshan-Lindblad master equation.
arXiv Detail & Related papers (2022-05-05T09:08:03Z)
- Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic gradient Langevin dynamics is one of the most fundamental algorithms for solving nonconvex optimization problems.
In this paper, we show two variants of this kind, namely the Variance Reduced Langevin Dynamics and the Recursive Gradient Langevin Dynamics.
arXiv Detail & Related papers (2022-03-30T11:39:00Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- A Two-Time-Scale Stochastic Optimization Framework with Applications in Control and Reinforcement Learning [13.908826484332282]
We study a new two-time-scale gradient method for solving optimization problems.
Our first contribution is to characterize the finite-time complexity of the proposed two-time-scale gradient algorithm.
We apply our framework to gradient-based policy evaluation algorithms in reinforcement learning.
arXiv Detail & Related papers (2021-09-29T23:15:23Z)
- Deep learning algorithms for solving high dimensional nonlinear backward stochastic differential equations [1.8655840060559168]
We propose a new deep learning-based scheme for solving high dimensional nonlinear backward stochastic differential equations (BSDEs).
We approximate the unknown solution of a BSDE using a deep neural network and its gradient with automatic differentiation.
In order to demonstrate the performance of our algorithm, several nonlinear BSDEs, including pricing problems in finance, are provided.
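The approach this entry summarizes, and which the BSVIE solver generalizes, can be sketched in the classical deep-BSDE style: simulate the forward process, roll Y forward with Euler steps using a network for Z at each step, and penalize the mismatch with the terminal condition. The fragment below is a hypothetical NumPy toy with untrained random-feature stand-ins for the networks; it only assembles the terminal loss and is not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy BSDE: dY = -f(t, X, Y, Z) dt + Z dW,  Y_T = g(X_T),
# with forward dynamics dX = mu dt + sigma dW (all choices illustrative).
T, N, M, d = 1.0, 25, 512, 5
dt = T / N
mu, sigma = 0.0, 1.0

def g(x):                      # terminal condition
    return np.maximum(np.sum(x, axis=-1), 0.0)

def f(t, x, y, z):             # driver (a simple discounting term here)
    return -0.05 * y

# Stand-in networks: fixed random-feature maps for Z(t_i, x); a real
# solver would optimize these weights (and y0) by SGD on the loss below.
Wf = rng.normal(size=(d, 32))
Zw = rng.normal(scale=0.1, size=(N, 32, d))
y0 = 1.0                       # trainable initial value in the real scheme

def terminal_loss(y0, Zw):
    """Roll Y forward along simulated paths with Euler steps and
    penalize the mismatch with the terminal condition g(X_T)."""
    X = np.zeros((M, d))
    Y = np.full(M, y0)
    for i in range(N):
        dW = rng.normal(0.0, np.sqrt(dt), size=(M, d))
        Z = np.tanh(X @ Wf) @ Zw[i]            # Z(t_i, X_i), shape (M, d)
        Y = Y - f(i * dt, X, Y, Z) * dt + np.sum(Z * dW, axis=-1)
        X = X + mu * dt + sigma * dW
    return np.mean((Y - g(X)) ** 2)

loss = terminal_loss(y0, Zw)
```

The contrast with the BSVIE case is that here a single field Y(t) is rolled forward and only the terminal mismatch is penalized, whereas the Volterra structure requires two-time fields and a residual over every outer time.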
arXiv Detail & Related papers (2020-10-03T10:18:58Z)
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
- A high-order integral equation-based solver for the time-dependent Schrödinger equation [0.0]
We introduce a numerical method for the solution of the time-dependent Schrödinger equation with a smooth potential.
A spatially uniform electric field may be included, making the solver applicable to simulations of light-matter interaction.
arXiv Detail & Related papers (2020-01-16T23:50:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.