Beyond Derivative Pathology of PINNs: Variable Splitting Strategy with Convergence Analysis
- URL: http://arxiv.org/abs/2409.20383v1
- Date: Mon, 30 Sep 2024 15:20:10 GMT
- Title: Beyond Derivative Pathology of PINNs: Variable Splitting Strategy with Convergence Analysis
- Authors: Yesom Park, Changhoon Song, Myungjoo Kang
- Abstract summary: Physics-informed neural networks (PINNs) have emerged as effective methods for solving partial differential equations (PDEs) in various problems.
In this study, we prove that PINNs encounter a fundamental issue: this premise is invalid.
We propose a variable splitting strategy that addresses this issue by parameterizing the gradient of the solution as an auxiliary variable.
- Score: 6.468495781611434
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physics-informed neural networks (PINNs) have recently emerged as effective methods for solving partial differential equations (PDEs) in various problems. Substantial research focuses on the failure modes of PINNs due to their frequent inaccuracies in predictions. However, most are based on the premise that minimizing the loss function to zero causes the network to converge to a solution of the governing PDE. In this study, we prove that PINNs encounter a fundamental issue: this premise is invalid. We also reveal that this issue stems from the inability to regulate the behavior of the derivatives of the predicted solution. Inspired by this derivative pathology of PINNs, we propose a variable splitting strategy that addresses the issue by parameterizing the gradient of the solution as an auxiliary variable. We demonstrate that using the auxiliary variable eludes derivative pathology by enabling direct monitoring and regulation of the gradient of the predicted solution. Moreover, we prove that the proposed method guarantees convergence to a generalized solution for second-order linear PDEs, indicating its applicability to various problems.
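The splitting idea in the abstract can be illustrated on a toy problem. The sketch below is our simplification, not the paper's code: instead of two neural networks for the solution u and its gradient p, it uses grid values and least squares for the 1D Poisson problem u'' = f, rewritten as the first-order system u' = p, p' = f, so the gradient p is an explicit variable that can be monitored directly.

```python
import numpy as np

# Toy problem: u'' = f on [0, 1], u(0) = u(1) = 0,
# f(x) = -pi^2 sin(pi x), exact solution u(x) = sin(pi x).
n = 100
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = -np.pi**2 * np.sin(np.pi * x)

# Unknown vector z = [u_0..u_n, p_0..p_n]; p plays the role of u'.
rows, rhs = [], []

def eq(coeffs, b):
    row = np.zeros(2 * (n + 1))
    for idx, c in coeffs:
        row[idx] = c
    rows.append(row)
    rhs.append(b)

for i in range(n):
    # consistency equation: (u_{i+1} - u_i)/h = (p_i + p_{i+1})/2   (u' = p)
    eq([(i + 1, 1 / h), (i, -1 / h), (n + 1 + i, -0.5), (n + 2 + i, -0.5)], 0.0)
    # PDE stated in terms of p: (p_{i+1} - p_i)/h = (f_i + f_{i+1})/2   (p' = f)
    eq([(n + 2 + i, 1 / h), (n + 1 + i, -1 / h)], 0.5 * (f[i] + f[i + 1]))

eq([(0, 1.0)], 0.0)   # boundary condition u(0) = 0
eq([(n, 1.0)], 0.0)   # boundary condition u(1) = 0

z, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
u, p = z[: n + 1], z[n + 1 :]
print(np.max(np.abs(u - np.sin(np.pi * x))))   # small discretization error
```

Because the PDE is imposed on p rather than on a second derivative of u, the gradient never appears implicitly through differentiation alone, which is the mechanism the paper uses to sidestep derivative pathology.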
Related papers
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z) - Convergence analysis of unsupervised Legendre-Galerkin neural networks for linear second-order elliptic PDEs [0.8594140167290099]
We perform a convergence analysis of unsupervised Legendre-Galerkin neural networks (ULGNet).
ULGNet is a deep-learning-based numerical method for solving partial differential equations (PDEs).
arXiv Detail & Related papers (2022-11-16T13:31:03Z) - Investigating and Mitigating Failure Modes in Physics-informed Neural Networks (PINNs) [0.0]
This paper explores the difficulties in solving partial differential equations (PDEs) using physics-informed neural networks (PINNs).
PINNs use physics as a regularization term in the objective function. However, this approach is impractical in the absence of data or prior knowledge of the solution.
Our findings demonstrate that high-order PDEs contaminate backpropagated gradients and hinder convergence.
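The "physics as a regularization term" idea above can be made concrete with a minimal sketch (our toy illustration, not this paper's code). For the BVP u'' = f on [0, 1] with u(0) = u(1) = 0 and f = -2, the exact solution is u(x) = x(1 - x). We replace the neural network with a one-parameter trial family u_theta(x) = theta * x * (1 - x), which satisfies the boundary conditions exactly, so the training objective reduces to the PDE-residual penalty alone:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 64)           # collocation points

def pinn_loss(theta):
    # analytic second derivative of the trial function theta * x * (1 - x)
    u_xx = -2.0 * theta
    # PDE residual u'' - f, evaluated at every collocation point (f = -2)
    residual = (u_xx - (-2.0)) * np.ones_like(xs)
    return np.mean(residual**2)          # the "physics" penalty term

# Scanning the parameter shows the physics loss alone is minimized at
# theta = 1, i.e. the exact solution is recovered with no interior data.
thetas = np.linspace(-2.0, 2.0, 401)
best = thetas[int(np.argmin([pinn_loss(t) for t in thetas]))]
print(best)
```

With a real network, the residual involves second derivatives of the prediction computed by automatic differentiation, which is exactly where the gradient contamination described above enters.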
arXiv Detail & Related papers (2022-09-20T20:46:07Z) - Learning differentiable solvers for systems with hard constraints [48.54197776363251]
We introduce a practical method to enforce partial differential equation (PDE) constraints for functions defined by neural networks (NNs).
We develop a differentiable PDE-constrained layer that can be incorporated into any NN architecture.
Our results show that incorporating hard constraints directly into the NN architecture achieves much lower test error when compared to training on an unconstrained objective.
arXiv Detail & Related papers (2022-07-18T15:11:43Z) - Physics-Aware Neural Networks for Boundary Layer Linear Problems [0.0]
Physics-Informed Neural Networks (PINNs) approximate the solution of general partial differential equations (PDEs) by adding them in some form as terms of the loss/cost of a Neural Network.
This paper explores PINNs for linear PDEs whose solutions may present one or more boundary layers.
arXiv Detail & Related papers (2022-07-15T21:15:06Z) - Improved Training of Physics-Informed Neural Networks with Model Ensembles [81.38804205212425]
We propose to expand the solution interval gradually to make the PINN converge to the correct solution.
All ensemble members converge to the same solution in the vicinity of observed data.
We show experimentally that the proposed method can improve the accuracy of the found solution.
arXiv Detail & Related papers (2022-04-11T14:05:34Z) - Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by a Gaussian smoothed model and show that, via Stein's identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
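The derivative-free trick described above can be sketched with a Monte Carlo estimate (our illustration, not this paper's code). For the Gaussian smoothed function u_sigma(x) = E[u(x + sigma * eps)] with eps ~ N(0, 1), Stein's identity gives u_sigma''(x) = E[u(x + sigma * eps) * (eps^2 - 1)] / sigma^2, so second derivatives come from function evaluations alone:

```python
import numpy as np

rng = np.random.default_rng(0)

def second_derivative_stein(u, x, sigma=0.5, n_samples=500_000):
    # Monte Carlo estimate of the second derivative of the smoothed u
    # via Stein's identity: no differentiation of u is ever performed.
    eps = rng.standard_normal(n_samples)
    return np.mean(u(x + sigma * eps) * (eps**2 - 1.0)) / sigma**2

# For u(x) = x^2 the smoothed function is x^2 + sigma^2, whose second
# derivative is exactly 2 at every x.
est = second_derivative_stein(lambda x: x**2, x=1.0)
print(est)   # ≈ 2, up to Monte Carlo error
```

The estimator's variance grows as sigma shrinks, which is why a smoothed surrogate model (rather than sigma -> 0) is used in practice.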
arXiv Detail & Related papers (2022-02-18T18:07:54Z) - Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method that can partially alleviate this problem by improving neural PDE solver sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
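The symmetry-based augmentation described above can be sketched on the 1D heat equation u_t = u_xx (our illustration, not this paper's code). Spatial translation x -> x - a is a Lie point symmetry: it maps solutions to solutions, so each training sample yields additional valid samples for free.

```python
import numpy as np

def heat_kernel(x, t):
    # fundamental solution of u_t = u_xx
    return np.exp(-x**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

def residual(u, x, t, dx=1e-3, dt=1e-3):
    # finite-difference check of the heat residual u_t - u_xx at a point
    u_t = (u(x, t + dt) - u(x, t - dt)) / (2.0 * dt)
    u_xx = (u(x + dx, t) - 2.0 * u(x, t) + u(x - dx, t)) / dx**2
    return u_t - u_xx

# augmented sample: the spatial translate of a known solution
a = 0.7
translated = lambda x, t: heat_kernel(x - a, t)

print(abs(residual(heat_kernel, 0.3, 1.0)))   # ~0: original solution
print(abs(residual(translated, 0.3, 1.0)))    # ~0: translate also solves the PDE
```

The full symmetry group of a PDE (translations, scalings, Galilean boosts, etc.) gives an exhaustive catalogue of such label-preserving transformations, which is what the quantitative derivation above refers to.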
arXiv Detail & Related papers (2022-02-15T18:43:17Z) - Spectrally Adapted Physics-Informed Neural Networks for Solving Unbounded Domain Problems [0.0]
In this work, we combine two classes of numerical methods: (i) physics-informed neural networks (PINNs) and (ii) adaptive spectral methods.
The numerical methods that we develop take advantage of the ability of physics-informed neural networks to easily implement high-order numerical schemes to efficiently solve PDEs.
We then show how recently introduced adaptive techniques for spectral methods can be integrated into PINN-based PDE solvers to obtain numerical solutions of unbounded domain problems that cannot be efficiently approximated by standard PINNs.
arXiv Detail & Related papers (2022-02-06T05:25:22Z) - DiffNet: Neural Field Solutions of Parametric Partial Differential Equations [30.80582606420882]
We consider a mesh-based approach for training a neural network to produce field predictions of solutions to PDEs.
We use a weighted Galerkin loss function based on the Finite Element Method (FEM) on a parametric elliptic PDE.
We prove theoretically, and illustrate with experiments, convergence results analogous to mesh convergence analysis deployed in finite element solutions to PDEs.
arXiv Detail & Related papers (2021-10-04T17:59:18Z) - dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual Neural Networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.