A Derivative-Free Method for Solving Elliptic Partial Differential
Equations with Deep Neural Networks
- URL: http://arxiv.org/abs/2001.06145v1
- Date: Fri, 17 Jan 2020 03:29:24 GMT
- Title: A Derivative-Free Method for Solving Elliptic Partial Differential
Equations with Deep Neural Networks
- Authors: Jihun Han, Mihai Nica, Adam R. Stinchcombe
- Abstract summary: We introduce a deep neural network based method for solving a class of elliptic partial differential equations.
We approximate the solution of the PDE with a deep neural network which is trained under the guidance of a probabilistic representation of the PDE.
As Brownian walkers explore the domain, the deep neural network is iteratively trained using a form of reinforcement learning.
- Score: 2.578242050187029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a deep neural network based method for solving a class of
elliptic partial differential equations. We approximate the solution of the PDE
with a deep neural network which is trained under the guidance of a
probabilistic representation of the PDE in the spirit of the Feynman-Kac
formula. The solution is given by an expectation of a martingale process driven
by a Brownian motion. As Brownian walkers explore the domain, the deep neural
network is iteratively trained using a form of reinforcement learning. Our
method is a 'Derivative-Free Loss Method' since it does not require the
explicit calculation of the derivatives of the neural network with respect to
the input neurons in order to compute the training loss. The advantages of our
method are showcased in a series of test problems: a corner singularity
problem, an interface problem, and an application to a chemotaxis population
model.
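To make the training idea concrete, here is a minimal sketch, assuming Laplace's equation on the unit disk with Dirichlet data, of the kind of derivative-free, bootstrapped loss described above: the network's value at a point is regressed onto the average of its own values at the endpoints of short Brownian steps, with the boundary data used whenever a walker exits the domain. The domain, architecture, boundary handling (projection onto the circle), and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): a derivative-free, Feynman-Kac-style
# loss for Laplace's equation on the unit disk, Delta u = 0 with Dirichlet
# data g(x) = x1^2 - x2^2.  No derivatives of u with respect to its inputs
# are taken; the loss uses only forward evaluations of the network.
import torch

torch.manual_seed(0)

u = torch.nn.Sequential(                      # u_theta: R^2 -> R
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(u.parameters(), lr=1e-3)

def g(x):
    # Dirichlet data; x1^2 - x2^2 is harmonic, so it is also the exact solution.
    return (x[:, 0] ** 2 - x[:, 1] ** 2).unsqueeze(1)

dt, n_pts, n_walkers = 1e-2, 256, 32          # step size, anchor points, walkers per point

for step in range(5000):
    # Sample anchor points uniformly in the unit disk.
    r = torch.sqrt(torch.rand(n_pts, 1))
    phi = 2 * torch.pi * torch.rand(n_pts, 1)
    x = torch.cat([r * torch.cos(phi), r * torch.sin(phi)], dim=1)

    # One short Brownian increment per walker: y = x + sqrt(dt) * N(0, I).
    y = (x.unsqueeze(1) + dt ** 0.5 * torch.randn(n_pts, n_walkers, 2)).reshape(-1, 2)

    # Bootstrapped target: network value at interior endpoints, boundary data
    # at exited endpoints (crudely projected back onto the unit circle).
    with torch.no_grad():
        vals = u(y)
    exited = y.norm(dim=1, keepdim=True) >= 1.0
    y_bdry = y / y.norm(dim=1, keepdim=True).clamp(min=1e-12)
    target = torch.where(exited, g(y_bdry), vals).reshape(n_pts, n_walkers, 1).mean(dim=1)

    # Derivative-free loss: u(x) should match the expected value at the walkers' next positions.
    loss = ((u(x) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()                           # back-propagation through the parameters only
    opt.step()
```

Because the target is built only from forward evaluations of the network, the loss never differentiates the network with respect to its input coordinates, which is what distinguishes this family of losses from PINN-style residuals.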
Related papers
- Chebyshev Spectral Neural Networks for Solving Partial Differential Equations [0.0]
The study uses a feedforward neural network model and error backpropagation principles, utilizing automatic differentiation (AD) to compute the loss function.
The numerical efficiency and accuracy of the CSNN model are investigated through testing on elliptic partial differential equations, and it is compared with the well-known Physics-Informed Neural Network (PINN) method (a minimal sketch of a PINN-style autodiff residual loss appears after this list).
arXiv Detail & Related papers (2024-06-06T05:31:45Z) - Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
arXiv Detail & Related papers (2024-06-05T17:59:22Z) - Newton Informed Neural Operator for Computing Multiple Solutions of Nonlinear Partial Differential Equations [3.8916312075738273]
We propose a novel approach called the Newton Informed Neural Operator to tackle nonlinearities.
Our method combines classical Newton methods, which address well-posed problems, with neural operator learning, and it efficiently learns multiple solutions in a single learning process.
arXiv Detail & Related papers (2024-05-23T01:52:54Z) - Efficient physics-informed neural networks using hash encoding [0.0]
Physics-informed neural networks (PINNs) have attracted a lot of attention in scientific computing.
We propose to incorporate multi-resolution hash encoding into PINNs to improve the training efficiency.
We test the proposed method on three problems, including Burgers equation, Helmholtz equation, and Navier-Stokes equation.
arXiv Detail & Related papers (2023-02-26T20:00:23Z) - Physics-informed Neural Networks approach to solve the Blasius function [0.0]
This paper presents a physics-informed neural network (PINN) approach to solve the Blasius function.
It is seen that this method produces results that are on par with numerical and conventional methods.
arXiv Detail & Related papers (2022-12-31T03:14:42Z) - Neural Basis Functions for Accelerating Solutions to High Mach Euler
Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z) - Learning Physics-Informed Neural Networks without Stacked
Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by a Gaussian-smoothed model and show, using Stein's identity, that the second-order derivatives can be calculated efficiently without back-propagation (a minimal sketch of this identity appears after this list).
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
arXiv Detail & Related papers (2022-02-18T18:07:54Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z) - Unsupervised Learning of Solutions to Differential Equations with
Generative Adversarial Networks [1.1470070927586016]
We develop a novel method for solving differential equations with unsupervised neural networks.
We show that our method, which we call Differential Equation GAN (DEQGAN), can obtain multiple orders of magnitude lower mean squared errors.
arXiv Detail & Related papers (2020-07-21T23:36:36Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
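For contrast with the derivative-free loss sketched under the abstract above, the following sketch shows the automatic-differentiation residual loss that PINN-style methods (such as the Chebyshev spectral network entry) rely on: second derivatives of the network with respect to its inputs are taken with autograd, which is precisely the step the derivative-free method avoids. The Poisson problem on the unit square and all parameters are illustrative assumptions, not code from any of the papers listed.

```python
# Hedged contrast sketch: a PINN-style residual loss for -Delta u = f on the
# unit square, with f = 2*pi^2*sin(pi*x1)*sin(pi*x2) as a manufactured source.
# The Laplacian of the network is computed by differentiating with respect to
# the *inputs* via autograd, the step the derivative-free method does not need.
import torch

u = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def laplacian(net, x):
    """Trace of the input Hessian of net at each row of x, via autograd."""
    x = x.requires_grad_(True)
    ux = net(x)
    grad = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    lap = 0.0
    for i in range(x.shape[1]):
        lap = lap + torch.autograd.grad(grad[:, i].sum(), x, create_graph=True)[0][:, i]
    return lap.unsqueeze(1)

x = torch.rand(128, 2)                                    # interior collocation points
f = 2 * torch.pi ** 2 * (torch.pi * x[:, :1]).sin() * (torch.pi * x[:, 1:]).sin()
residual_loss = ((-laplacian(u, x) - f) ** 2).mean()      # interior PDE residual
residual_loss.backward()                                  # boundary term omitted for brevity
```

The following sketch illustrates the Stein's-identity trick mentioned in the "Learning Physics-Informed Neural Networks without Stacked Back-propagation" entry, under standard Gaussian-smoothing assumptions: for u_sigma(x) = E[u(x + delta)] with delta ~ N(0, sigma^2 I) in d dimensions, Delta u_sigma(x) = E[(||delta||^2 - d*sigma^2) u(x + delta)] / sigma^4, so second derivatives are estimated from forward passes alone. The smoothing width and sample counts are illustrative, and this is not the paper's implementation.

```python
# Hedged sketch of a Gaussian-smoothing / Stein's-identity Laplacian estimate:
# second derivatives of the smoothed model are obtained from forward passes
# only, with no nested back-propagation through the network inputs.
import torch

u = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

def smoothed_laplacian(net, x, sigma=0.05, n_samples=1024):
    """Monte Carlo estimate of Delta u_sigma at each row of x (shape [n, d])."""
    n, d = x.shape
    delta = sigma * torch.randn(n, n_samples, d)            # Gaussian perturbations
    vals = net((x.unsqueeze(1) + delta).reshape(-1, d)).reshape(n, n_samples)
    weight = (delta.pow(2).sum(dim=2) - d * sigma ** 2) / sigma ** 4
    return (weight * vals).mean(dim=1, keepdim=True)         # shape [n, 1]

x = torch.rand(8, 2)
print(smoothed_laplacian(u, x))   # still differentiable in the network parameters
```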
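Both sketches use only standard PyTorch calls; they are meant to clarify the difference between derivative-based and derivative-free training losses discussed in the entries above, not to reproduce any particular paper's results.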
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.