A semigroup method for high dimensional committor functions based on
neural network
- URL: http://arxiv.org/abs/2012.06727v3
- Date: Wed, 5 May 2021 04:35:37 GMT
- Title: A semigroup method for high dimensional committor functions based on
neural network
- Authors: Haoya Li, Yuehaw Khoo, Yinuo Ren, Lexing Ying
- Abstract summary: Instead of working with partial differential equations, the new method works with an integral formulation based on the semigroup of the differential operator.
Stochastic gradient descent type algorithms can be applied in the training of the committor function without the need of computing any mixed second-order derivatives.
Unlike the previous methods that enforce the boundary conditions through penalty terms, the new method takes into account the boundary conditions automatically.
- Score: 1.7205106391379026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a new method based on neural networks for computing the
high-dimensional committor functions that satisfy Fokker-Planck equations.
Instead of working with partial differential equations, the new method works
with an integral formulation based on the semigroup of the differential
operator. The variational form of the new formulation is then solved by
parameterizing the committor function as a neural network. There are two major
benefits of this new approach. First, stochastic gradient descent type
algorithms can be applied in the training of the committor function without the
need of computing any mixed second-order derivatives. Moreover, unlike the
previous methods that enforce the boundary conditions through penalty terms,
the new method takes into account the boundary conditions automatically.
Numerical results are provided to demonstrate the performance of the proposed
method.
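To make the two claimed benefits concrete, the following is a minimal Python/PyTorch sketch of semigroup-style training of a neural-network committor, not the paper's implementation: it assumes overdamped Langevin dynamics, replaces the exact semigroup by a single short Euler-Maruyama step, and pins the known boundary values q = 0 on A and q = 1 on B by overwriting the network output; all names (CommittorNet, langevin_step, pin_boundary, semigroup_loss) and hyperparameters are illustrative assumptions.

```python
# Hedged sketch (assumptions, not the authors' code): a neural-network
# committor trained with a semigroup-style loss built from short trajectory
# pairs (x0, x1); only pointwise evaluations of q_theta are needed, so SGD
# applies without mixed second-order derivatives.
import torch
import torch.nn as nn

class CommittorNet(nn.Module):
    """Feed-forward parameterization of the committor q_theta: R^d -> [0, 1]."""
    def __init__(self, dim, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def langevin_step(x, grad_V, beta, dt):
    """One Euler-Maruyama step of dX = -grad V(X) dt + sqrt(2/beta) dW."""
    return x - grad_V(x) * dt + (2.0 * dt / beta) ** 0.5 * torch.randn_like(x)

def pin_boundary(q, x, in_A, in_B):
    """Overwrite q with its known values 0 on A and 1 on B (a simple stand-in
    for the automatic boundary handling described above); in_A and in_B
    return boolean masks over the batch."""
    q = torch.where(in_A(x), torch.zeros_like(q), q)
    q = torch.where(in_B(x), torch.ones_like(q), q)
    return q

def semigroup_loss(model, x0, grad_V, beta, dt, in_A, in_B):
    """Monte Carlo estimate of a semigroup Dirichlet form over pairs (x0, x1)."""
    x1 = langevin_step(x0, grad_V, beta, dt)
    q0 = pin_boundary(model(x0), x0, in_A, in_B)
    q1 = pin_boundary(model(x1), x1, in_A, in_B)
    return 0.5 / dt * ((q1 - q0) ** 2).mean()
```

With x0 drawn (approximately) from the equilibrium measure proportional to exp(-beta V), this loss can be minimized with a standard optimizer such as Adam; gradients flow only through pointwise evaluations of q_theta, which is the first benefit claimed above.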
Related papers
- A Natural Primal-Dual Hybrid Gradient Method for Adversarial Neural Network Training on Solving Partial Differential Equations [9.588717577573684]
We propose a scalable preconditioned primal-dual hybrid gradient algorithm for solving partial differential equations (PDEs).
We compare the performance of the proposed method with several commonly used deep learning algorithms.
The numerical results suggest that the proposed method performs efficiently and robustly and converges more stably.
arXiv Detail & Related papers (2024-11-09T20:39:10Z)
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparametrized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- Mapping-to-Parameter Nonlinear Functional Regression with Novel B-spline Free Knot Placement Algorithm [12.491024918270824]
We propose a novel approach to nonlinear functional regression.
The model is based on the mapping of function data from an infinite-dimensional function space to a finite-dimensional parameter space.
The performance of our knot placement algorithms is shown to be robust in both single-function approximation and multiple-function approximation contexts.
arXiv Detail & Related papers (2024-01-26T16:35:48Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z)
- Galerkin Neural Networks: A Framework for Approximating Variational Equations with Error Control [0.0]
We present a new approach to using neural networks to approximate the solutions of variational equations.
We use a sequence of finite-dimensional subspaces whose basis functions are realizations of a sequence of neural networks.
arXiv Detail & Related papers (2021-05-28T20:25:40Z)
- A semigroup method for high dimensional elliptic PDEs and eigenvalue problems based on neural networks [1.52292571922932]
We propose a semigroup computation method for solving high-dimensional elliptic partial differential equations (PDEs) and the associated eigenvalue problems based on neural networks.
For the PDE problems, we reformulate the original equations as variational problems with the help of semigroup operators and then solve the variational problems with neural network (NN) parameterization.
For eigenvalue problems, a primal-dual method is proposed, resolving the constraint with a scalar dual variable.
arXiv Detail & Related papers (2021-05-07T19:49:06Z)
- Fourier Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
We formulate a new neural operator by parameterizing the integral kernel directly in Fourier space.
We perform experiments on Burgers' equation, Darcy flow, and Navier-Stokes equation.
It is up to three orders of magnitude faster than traditional PDE solvers (a minimal sketch of the Fourier-space kernel parameterization appears after this list).
arXiv Detail & Related papers (2020-10-18T00:34:21Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Solving high-dimensional eigenvalue problems using deep neural networks: A diffusion Monte Carlo like approach [14.558626910178127]
The eigenvalue problem is reformulated as a fixed point problem of the semigroup flow induced by the operator.
The method shares a similar spirit with diffusion Monte Carlo but augments a direct approximation to the eigenfunction through a neural-network ansatz.
Our approach is able to provide accurate eigenvalue and eigenfunction approximations in several numerical examples.
arXiv Detail & Related papers (2020-02-07T03:08:31Z)
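As noted in the Fourier Neural Operator entry above, its core building block parameterizes the integral kernel directly in Fourier space. The sketch below is an illustrative 1D spectral convolution in that spirit, not the authors' reference code: transform with an FFT, keep the lowest `modes` frequencies, multiply by a learned complex weight tensor, and transform back; the class name, shapes, and initialization are assumptions.

```python
# Illustrative 1D spectral convolution in the spirit of the Fourier Neural
# Operator (assumed names and sizes, not the reference implementation).
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, in_channels, out_channels, modes):
        super().__init__()
        self.modes = modes  # number of low-frequency Fourier modes to keep
        scale = 1.0 / (in_channels * out_channels)
        # Learned kernel in Fourier space: one complex weight per retained mode.
        self.weight = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):
        # x: (batch, in_channels, n_grid), real-valued samples on a uniform grid
        x_ft = torch.fft.rfft(x, dim=-1)
        out_ft = torch.zeros(
            x.shape[0], self.weight.shape[1], x_ft.shape[-1],
            dtype=torch.cfloat, device=x.device,
        )
        # Multiply the retained low-frequency modes by the learned kernel.
        out_ft[..., : self.modes] = torch.einsum(
            "bik,iok->bok", x_ft[..., : self.modes], self.weight
        )
        # Back to physical space on the original grid.
        return torch.fft.irfft(out_ft, n=x.shape[-1], dim=-1)
```

In a full Fourier Neural Operator, such a spectral convolution is combined with a pointwise linear path and a nonlinearity, and several of these layers are stacked.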