Neural tangent kernel analysis of PINN for advection-diffusion equation
- URL: http://arxiv.org/abs/2211.11716v1
- Date: Mon, 21 Nov 2022 18:35:14 GMT
- Title: Neural tangent kernel analysis of PINN for advection-diffusion equation
- Authors: M. H. Saadat, B. Gjorgiev, L. Das and G. Sansavini
- Abstract summary: Physics-informed neural networks (PINNs) numerically approximate the solution of a partial differential equation (PDE).
PINNs are known to struggle even in simple cases where the closed-form analytical solution is available.
This work focuses on a systematic analysis of PINNs for the linear advection-diffusion equation (LAD) using the Neural Tangent Kernel (NTK) theory.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-informed neural networks (PINNs) numerically approximate the solution
of a partial differential equation (PDE) by incorporating the residual of the
PDE along with its initial/boundary conditions into the loss function. In spite
of their partial success, PINNs are known to struggle even in simple cases
where the closed-form analytical solution is available. In order to better
understand the learning mechanism of PINNs, this work focuses on a systematic
analysis of PINNs for the linear advection-diffusion equation (LAD) using the
Neural Tangent Kernel (NTK) theory. Thanks to the NTK analysis, the effects of
the advection speed/diffusion parameter on the training dynamics of PINNs are
studied and clarified. We show that the training difficulty of PINNs is a
result of 1) the so-called spectral bias, which leads to difficulty in learning
high-frequency behaviours; and 2) convergence rate disparity between different
loss components that results in training failure. The latter occurs even in the
cases where the solution of the underlying PDE does not exhibit high-frequency
behaviour. Furthermore, we observe that this training difficulty manifests
itself, to some extent, differently in advection-dominated and
diffusion-dominated regimes. Different strategies to address these issues are
also discussed. In particular, it is demonstrated that periodic activation
functions can be used to partly resolve the spectral bias issue.
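As an illustration of the kind of NTK computation this analysis rests on, the empirical NTK of a toy one-hidden-layer tanh network can be assembled from its parameter Jacobian; the steep decay of the resulting eigenvalues is the mechanism behind spectral bias. The network size, inputs, and parameterization below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-hidden-layer tanh network f(x) = a . tanh(w*x + b) / sqrt(m),
# in the standard NTK parameterization; m is the hidden width.
m = 512
w = rng.normal(size=m)
b = rng.normal(size=m)
a = rng.normal(size=m)

def jacobian(x):
    """Jacobian of f at inputs x w.r.t. all parameters (a, w, b)."""
    z = np.tanh(np.outer(x, w) + b)       # hidden activations, shape (n, m)
    s = 1.0 - z**2                        # tanh'(.) = sech^2, shape (n, m)
    da = z / np.sqrt(m)
    dw = a * s * x[:, None] / np.sqrt(m)
    db = a * s / np.sqrt(m)
    return np.concatenate([da, dw, db], axis=1)   # shape (n, 3m)

x = np.linspace(-1.0, 1.0, 64)
J = jacobian(x)
ntk = J @ J.T                             # empirical NTK Gram matrix, (64, 64)

eigvals = np.linalg.eigvalsh(ntk)[::-1]   # descending order
# Under gradient descent, the error along eigendirection i decays like
# exp(-eta * eigvals[i] * t): a steep spectrum means the leading (smooth,
# low-frequency) modes are fitted far faster than the rest -- spectral bias.
print(eigvals[:5] / eigvals[0])
```

The same Gram-matrix view underlies the convergence-rate-disparity argument: if the PDE-residual and boundary loss terms see very different portions of this spectrum, their effective learning rates differ and training can stall.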
Related papers
- Domain decomposition-based coupling of physics-informed neural networks
via the Schwarz alternating method [0.0]
Physics-informed neural networks (PINNs) are appealing data-driven tools for solving and inferring solutions to nonlinear partial differential equations (PDEs).
This paper explores the use of the Schwarz alternating method as a means to couple PINNs with each other and with conventional numerical models.
arXiv Detail & Related papers (2023-11-01T01:59:28Z) - Error Analysis of Physics-Informed Neural Networks for Approximating
Dynamic PDEs of Second Order in Time [1.123111111659464]
We consider the approximation of a class of dynamic partial differential equations (PDE) of second order in time by the physics-informed neural network (PINN) approach.
Our analyses show that, with feed-forward neural networks having two hidden layers and the $\tanh$ activation function, the PINN approximation errors for the solution field can be effectively bounded by the training loss and the number of training data points.
We present ample numerical experiments with the new PINN algorithm for the wave equation, the Sine-Gordon equation and the linear elastodynamic equation, which show that the method can capture
arXiv Detail & Related papers (2023-03-22T00:51:11Z) - Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, in order to improve the stability of the training process.
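The stability advantage of an implicit update can be seen on a toy quadratic loss, where the implicit step has a closed form. This numpy sketch (the stiff matrix and step size are invented for illustration, not taken from the paper) contrasts the two updates:

```python
import numpy as np

# Stiff quadratic loss L(theta) = 0.5 * theta^T A theta, with a large
# eigenvalue spread mimicking an ill-conditioned PINN loss.
A = np.diag([1.0, 1000.0])
eta = 0.01                      # too large for explicit GD on the stiff mode

theta_exp = np.array([1.0, 1.0])
theta_imp = np.array([1.0, 1.0])
I = np.eye(2)

for _ in range(50):
    # Explicit GD: theta <- theta - eta * grad L(theta) = theta - eta * A theta
    theta_exp = theta_exp - eta * (A @ theta_exp)
    # Implicit GD evaluates the gradient at the NEW iterate:
    #   theta_next = theta - eta * A theta_next
    #   => theta_next = (I + eta*A)^(-1) theta, stable for any eta when A >= 0
    theta_imp = np.linalg.solve(I + eta * A, theta_imp)

print(np.linalg.norm(theta_exp))   # diverges
print(np.linalg.norm(theta_imp))   # stays bounded and shrinks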
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
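The probabilistic representation in question is, in its simplest form, the Feynman-Kac formula: the heat equation's solution at a point is an expectation of the initial condition over random particles diffusing from that point. A minimal sketch, with equation, parameters, and particle count chosen purely for illustration:

```python
import numpy as np

# Heat equation u_t = nu * u_xx with u(x, 0) = sin(x).
# Feynman-Kac: u(x, t) = E[ u0(x + sqrt(2*nu*t) * Z) ],  Z ~ N(0, 1).
rng = np.random.default_rng(0)
nu, t = 0.5, 1.0
n_particles = 200_000

def u_mc(x):
    z = rng.normal(size=n_particles)
    return np.mean(np.sin(x + np.sqrt(2.0 * nu * t) * z))

x0 = 0.7
exact = np.exp(-nu * t) * np.sin(x0)   # closed-form solution for this u0
approx = u_mc(x0)
print(abs(approx - exact))              # Monte Carlo error ~ 1/sqrt(N)
```

In the paper's setting these particle expectations supply training targets for a neural solver, rather than being evaluated pointwise as above.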
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Learning Discretized Neural Networks under Ricci Flow [51.36292559262042]
We study Discretized Neural Networks (DNNs) composed of low-precision weights and activations.
During training, DNNs suffer from either infinite or zero gradients caused by the non-differentiable discrete functions.
arXiv Detail & Related papers (2023-02-07T10:51:53Z) - Investigations on convergence behaviour of Physics Informed Neural
Networks across spectral ranges and derivative orders [0.0]
An important inference from Neural Tangent Kernel (NTK) theory is the existence of spectral bias (SB).
SB refers to the low-frequency components of the target function of a fully connected Artificial Neural Network (ANN) being learnt significantly faster than the higher frequencies during training.
This is established for Mean Square Error (MSE) loss functions with very low learning rate parameters.
It is firmly established that under normalized conditions, PINNs do exhibit strong spectral bias, and this increases with the order of the differential equation.
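The convergence-rate mechanism behind SB can be reproduced with plain kernel gradient descent, where each Fourier mode of the residual decays at a rate set by its kernel eigenvalue. The sketch below uses a periodic RBF kernel as a stand-in for the NTK; the kernel choice, grid, target, and step count are illustrative assumptions:

```python
import numpy as np

# Kernel gradient descent on y(x) = sin(x) + sin(8x). The residual evolves as
# r <- (I - eta*K) r, so the sin(kx) mode decays at rate (1 - eta*lambda_k)^t;
# low-frequency modes carry large kernel eigenvalues and are learnt first.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
y = np.sin(x) + np.sin(8.0 * x)

d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 2.0 * np.pi - d)          # periodic distance on the circle
K = np.exp(-0.5 * d**2)                      # circulant RBF Gram matrix

eta = 0.5 / np.linalg.eigvalsh(K).max()      # stable step size
r = y.copy()
for _ in range(200):
    r = r - eta * (K @ r)                    # kernel gradient-descent step

def mode_amp(res, k):
    """Amplitude of the sin(kx) Fourier mode left in the residual."""
    return abs(2.0 / n * np.sum(res * np.sin(k * x)))

print(mode_amp(r, 1), mode_amp(r, 8))        # sin(x) is gone; sin(8x) remains
```

Higher-order differential operators in the PINN loss sharpen this eigenvalue disparity further, which is consistent with the paper's observation that SB grows with the order of the equation.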
arXiv Detail & Related papers (2023-01-07T06:31:28Z) - Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural
Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly-complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z) - Semi-analytic PINN methods for singularly perturbed boundary value
problems [0.8594140167290099]
We propose a new semi-analytic physics informed neural network (PINN) to solve singularly perturbed boundary value problems.
The PINN is a scientific machine learning framework that offers a promising perspective for finding numerical solutions to partial differential equations.
arXiv Detail & Related papers (2022-08-19T04:26:40Z) - Physics-Aware Neural Networks for Boundary Layer Linear Problems [0.0]
Physics-Informed Neural Networks (PINNs) approximate the solution of general partial differential equations (PDEs) by incorporating them, in some form, as terms of the loss/cost function of a Neural Network.
This paper explores PINNs for linear PDEs whose solutions may present one or more boundary layers.
arXiv Detail & Related papers (2022-07-15T21:15:06Z) - Momentum Diminishes the Effect of Spectral Bias in Physics-Informed
Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
arXiv Detail & Related papers (2022-06-29T19:03:10Z) - Learning Physics-Informed Neural Networks without Stacked
Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
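The Stein's-identity trick described above can be checked numerically in one dimension: for a Gaussian-smoothed function, the second derivative equals an expectation of function values weighted by the Hermite polynomial eps^2 - 1, so no differentiation of f is needed. The target function, smoothing scale, and sample count below are illustrative choices, not the paper's:

```python
import numpy as np

# Stein's identity for the Gaussian-smoothed function f_s(x) = E[f(x + s*eps)]:
#   f_s''(x) = E[ f(x + s*eps) * (eps**2 - 1) ] / s**2,   eps ~ N(0, 1),
# i.e. a second derivative from function VALUES only -- no back-propagation.
rng = np.random.default_rng(0)
f = np.sin
s = 0.3            # smoothing scale
x = 0.8
n = 2_000_000

eps = rng.normal(size=n)
d2_stein = np.mean(f(x + s * eps) * (eps**2 - 1.0)) / s**2

# For f = sin the smoothed function is exp(-s^2/2) * sin(x), so the exact
# smoothed second derivative is available in closed form for comparison:
d2_exact = -np.exp(-s**2 / 2.0) * np.sin(x)
print(abs(d2_stein - d2_exact))
```

Only forward evaluations of f appear, which is what removes the stacked back-propagation through second-order PDE terms.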
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.