dNNsolve: an efficient NN-based PDE solver
- URL: http://arxiv.org/abs/2103.08662v1
- Date: Mon, 15 Mar 2021 19:14:41 GMT
- Title: dNNsolve: an efficient NN-based PDE solver
- Authors: Veronica Guidetti, Francesco Muia, Yvette Welling and Alexander
Westphal
- Abstract summary: We introduce dNNsolve, which makes use of dual Neural Networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
- Score: 62.997667081978825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Networks (NNs) can be used to solve Ordinary and Partial Differential
Equations (ODEs and PDEs) by redefining the question as an optimization
problem. The objective function to be optimized is the sum of the squares of
the PDE to be solved and of the initial/boundary conditions. A feed forward NN
is trained to minimise this loss function evaluated on a set of collocation
points sampled from the domain where the problem is defined. A compact and
smooth solution, that only depends on the weights of the trained NN, is then
obtained. This approach is often referred to as PINN, from Physics-Informed
Neural Network (Raissi et al., 2017). Despite the
success of the PINN approach in solving various classes of PDEs, an
implementation of this idea that is capable of solving a large class of ODEs
and PDEs with good accuracy and without the need to finely tune the
hyperparameters of the network, is not available yet. In this paper, we
introduce a new implementation of this concept, called dNNsolve, that makes
use of dual Neural Networks to solve ODEs/PDEs. Its novel ingredients include:
i) sine and sigmoidal activation functions, which provide a more efficient
basis to capture both secular and periodic patterns in the solutions; ii) a
newly designed architecture that makes it easy for the NN to approximate the
solution using the basis functions mentioned above. We show that dNNsolve is
capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime
dimensions, without the need for hyperparameter fine-tuning.
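The loss construction described in the abstract, together with dNNsolve's dual sine/sigmoid basis, can be sketched in NumPy. This is a minimal, untrained illustration of the idea only: the branch structure, weight names, and toy ODE below are simplified stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ODE: u'(x) = cos(x) on [0, 2*pi], with u(0) = 0 (exact solution: sin x).
# A dNNsolve-style ansatz combines a periodic (sine) branch and a secular
# (sigmoid) branch; the weights below are random, untrained placeholders.
N = 16                                       # neurons per branch
w_sin, b_sin, c_sin = rng.normal(size=(3, N))
w_sig, b_sig, c_sig = rng.normal(size=(3, N))

def u(x):
    x = np.atleast_1d(x)[:, None]
    periodic = np.sin(w_sin * x + b_sin) @ c_sin          # oscillatory part
    sigmoid = 1.0 / (1.0 + np.exp(-(w_sig * x + b_sig)))
    secular = sigmoid @ c_sig                             # slow, drifting part
    return periodic + secular

def du(x):                                                # analytic derivative
    x = np.atleast_1d(x)[:, None]
    periodic = (w_sin * np.cos(w_sin * x + b_sin)) @ c_sin
    s = 1.0 / (1.0 + np.exp(-(w_sig * x + b_sig)))
    secular = (w_sig * s * (1.0 - s)) @ c_sig
    return periodic + secular

# PINN loss: mean squared PDE residual on collocation points plus the squared
# violation of the initial condition.
x_col = rng.uniform(0.0, 2.0 * np.pi, size=100)
residual = du(x_col) - np.cos(x_col)
loss = np.mean(residual**2) + (u(0.0)[0] - 0.0) ** 2
print(f"PINN loss at random initialization: {loss:.3f}")
```

Training would then minimise `loss` over the weights; once trained, `u` is the compact, smooth solution that depends only on the network parameters.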
Related papers
- Correctness Verification of Neural Networks Approximating Differential
Equations [0.0]
Neural Networks (NNs) approximate the solution of Partial Differential Equations (PDEs)
NNs can become integral parts of simulation software tools which can accelerate the simulation of complex dynamic systems more than 100 times.
This work addresses the verification of these functions by defining the NN derivative as a finite difference approximation.
For the first time, we tackle the problem of bounding an NN function without a priori knowledge of the output domain.
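The idea of replacing the NN derivative with a finite-difference approximation can be illustrated on a tiny network. The weights and the specific check below are hypothetical, chosen only to show the mechanism, not the paper's verification procedure.

```python
import numpy as np

# Tiny one-hidden-layer tanh network and its exact derivative.
W1 = np.array([1.5, -0.7, 0.3])
b1 = np.array([0.1, 0.0, -0.2])
W2 = np.array([0.8, -1.1, 0.5])

def nn(x):
    return W2 @ np.tanh(W1 * x + b1)

def nn_grad(x):                          # analytic d(nn)/dx for comparison
    return W2 @ ((1.0 - np.tanh(W1 * x + b1) ** 2) * W1)

# Central finite-difference approximation of the derivative, accurate to O(h^2).
h = 1e-5
x0 = 0.4
fd = (nn(x0 + h) - nn(x0 - h)) / (2.0 * h)
print(abs(fd - nn_grad(x0)))             # small finite-difference discrepancy
```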
arXiv Detail & Related papers (2024-02-12T12:55:35Z)
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural
Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver which prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Characteristics-Informed Neural Networks for Forward and Inverse
Hyperbolic Problems [0.0]
We propose characteristic-informed neural networks (CINN) for solving forward and inverse problems involving hyperbolic PDEs.
CINN encodes the characteristics of the PDE in a general-purpose deep neural network trained with the usual MSE data-fitting regression loss.
Preliminary results indicate that CINN is able to improve on the accuracy of the baseline PINN, while being nearly twice as fast to train and avoiding non-physical solutions.
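For a linear hyperbolic PDE the characteristics are known in closed form, which is the structure CINN exploits. A minimal sketch for the advection equation u_t + c u_x = 0 (illustrative only; the profile, speed, and finite-difference check are chosen for this example, not taken from the paper):

```python
import numpy as np

c = 2.0                                   # wave speed
u0 = lambda x: np.exp(-x**2)              # initial profile u(x, 0)

# Along the characteristic line x - c*t = const the solution is constant,
# so u(x, t) = u0(x - c*t) solves u_t + c*u_x = 0 exactly.
def u(x, t):
    return u0(x - c * t)

# Verify the PDE with central finite differences at a sample point.
x0, t0, h = 0.3, 0.5, 1e-5
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_x = (u(x0 + h, t0) - u(x0 - h, t0)) / (2 * h)
print(abs(u_t + c * u_x))                 # ~0 up to finite-difference error
```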
arXiv Detail & Related papers (2022-12-28T18:38:53Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler
Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Improved Training of Physics-Informed Neural Networks with Model
Ensembles [81.38804205212425]
We propose to expand the solution interval gradually to make the PINN converge to the correct solution.
All ensemble members converge to the same solution in the vicinity of observed data.
We show experimentally that the proposed method can improve the accuracy of the found solution.
arXiv Detail & Related papers (2022-04-11T14:05:34Z)
- Learning Physics-Informed Neural Networks without Stacked
Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
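Stein's identity lets one estimate the second derivative of the Gaussian-smoothed model from forward evaluations alone, with no back-propagation. A Monte-Carlo sketch for a scalar toy function (illustrative only; the function, smoothing scale, and sample count are assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

f = lambda x: x**2                        # toy model; its smoothed version has
sigma, n = 0.5, 200_000                   # constant second derivative 2
x0 = 1.0

# Stein's identity for the Gaussian-smoothed function
# f_s(x) = E[f(x + sigma*eps)] with eps ~ N(0, 1):
#   f_s''(x) = E[ f(x + sigma*eps) * (eps**2 - 1) ] / sigma**2
eps = rng.standard_normal(n)
d2 = np.mean(f(x0 + sigma * eps) * (eps**2 - 1.0)) / sigma**2
print(f"estimated second derivative: {d2:.2f}")   # close to 2
```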
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Bayesian neural networks for weak solution of PDEs with uncertainty
quantification [3.4773470589069473]
A new physics-constrained neural network (NN) approach is proposed to solve PDEs without labels.
We write the loss function of NNs based on the discretized residual of PDEs through an efficient, convolutional operator-based, and vectorized implementation.
We demonstrate the capability and performance of the proposed framework by applying it to steady-state diffusion, linear elasticity, and nonlinear elasticity.
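The discretized-residual loss can indeed be written as a convolution. A 1D sketch for the steady-state diffusion equation u'' = f, using the standard three-point stencil (an assumed discretization for illustration, not the paper's code):

```python
import numpy as np

# Grid and a candidate solution u(x) = x**2, which satisfies u'' = 2 exactly.
n, h = 101, 0.01
x = np.arange(n) * h
u = x**2
f = np.full(n, 2.0)

# Second-derivative stencil [1, -2, 1] / h**2 applied as a convolution gives
# the discrete residual u'' - f at all interior points in one vectorized pass.
stencil = np.array([1.0, -2.0, 1.0]) / h**2
u_xx = np.convolve(u, stencil, mode="valid")      # interior points only
residual = u_xx - f[1:-1]
loss = np.mean(residual**2)
print(f"residual loss: {loss:.3e}")               # ~0 for the exact solution
```

For the exact solution the loss vanishes up to floating-point round-off; for a trial NN solution evaluated on the grid, the same convolution yields the label-free training loss.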
arXiv Detail & Related papers (2021-01-13T04:57:51Z)
- Two-Layer Neural Networks for Partial Differential Equations:
Optimization and Generalization Theory [4.243322291023028]
We show that the gradient descent method can identify a global minimizer of the least-squares optimization for solving second-order linear PDEs.
We also analyze the generalization error of the least-squares optimization for second-order linear PDEs and two-layer neural networks.
arXiv Detail & Related papers (2020-06-28T22:24:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.