Characterizing possible failure modes in physics-informed neural
networks
- URL: http://arxiv.org/abs/2109.01050v1
- Date: Thu, 2 Sep 2021 16:06:45 GMT
- Title: Characterizing possible failure modes in physics-informed neural
networks
- Authors: Aditi S. Krishnapriyan, Amir Gholami, Shandian Zhe, Robert M. Kirby,
Michael W. Mahoney
- Abstract summary: Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
- Score: 55.83255669840384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work in scientific machine learning has developed so-called
physics-informed neural network (PINN) models. The typical approach is to
incorporate physical domain knowledge as soft constraints on an empirical loss
function and use existing machine learning methodologies to train the model. We
demonstrate that, while existing PINN methodologies can learn good models for
relatively trivial problems, they can easily fail to learn relevant physical
phenomena even for simple PDEs. In particular, we analyze several distinct
situations of widespread physical interest, including learning differential
equations with convection, reaction, and diffusion operators. We provide
evidence that the soft regularization in PINNs, which involves differential
operators, can introduce a number of subtle problems, including making the
problem ill-conditioned. Importantly, we show that these possible failure modes
are not due to the lack of expressivity in the NN architecture, but that the
PINN's setup makes the loss landscape very hard to optimize. We then describe
two promising solutions to address these failure modes. The first approach is
to use curriculum regularization, where the PINN's loss term starts from a
simple PDE regularization, and becomes progressively more complex as the NN
gets trained. The second approach is to pose the problem as a
sequence-to-sequence learning task, rather than learning to predict the entire
space-time at once. Extensive testing shows that we can achieve up to 1-2
orders of magnitude lower error with these methods as compared to regular PINN
training.
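To make the soft-constraint setup and the first remedy concrete, here is a minimal sketch (not the authors' code) of a PINN for the 1D convection equation u_t + beta * u_x = 0 with curriculum regularization: training starts from a small, easy beta and ramps it toward a harder target. The network size, beta schedule, optimizer settings, and collocation sampling are illustrative assumptions, and boundary terms are omitted for brevity.

```python
# Minimal PINN sketch for u_t + beta * u_x = 0 on x in [0, 2*pi], t in [0, 1],
# with curriculum regularization on beta (illustrative values throughout).
import torch

torch.manual_seed(0)

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x, t, beta):
    """Soft PDE constraint r = u_t + beta * u_x at collocation points."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = net(torch.stack([x, t], dim=-1)).squeeze(-1)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    return u_t + beta * u_x

for step in range(5000):
    # Curriculum: ramp beta from an easy regularizer (beta = 1) to the
    # hard target (beta = 30) over the first 4000 steps.
    beta = 1.0 + 29.0 * min(step / 4000.0, 1.0)

    x = torch.rand(256) * 2 * torch.pi   # interior collocation points
    t = torch.rand(256)
    loss_pde = pde_residual(x, t, beta).pow(2).mean()

    x0 = torch.rand(256) * 2 * torch.pi  # initial condition u(x, 0) = sin(x)
    u0 = net(torch.stack([x0, torch.zeros_like(x0)], dim=-1)).squeeze(-1)
    loss_ic = (u0 - torch.sin(x0)).pow(2).mean()

    loss = loss_pde + loss_ic            # soft constraints only
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The second remedy, sequence-to-sequence training, would instead split the time domain into short segments and fit them one at a time, using the model's state at the end of each segment as the initial condition for the next, rather than regressing the entire space-time domain at once.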
Related papers
- Physics-Informed Neural Networks with Trust-Region Sequential Quadratic Programming [4.557963624437784]
Recent research has noted that Physics-Informed Neural Networks (PINNs) may fail to learn relatively complex Partial Differential Equations (PDEs).
This paper addresses the failure modes of PINNs by introducing a novel, hard-constrained deep learning method: trust-region Sequential Quadratic Programming (trSQP-PINN).
In contrast to directly training the penalized soft-constrained loss as in PINNs, our method performs a linear-quadratic approximation of the hard-constrained loss, while leveraging the soft-constrained loss to adaptively adjust the trust-region radius.
arXiv Detail & Related papers (2024-09-16T23:22:12Z)
- Improving PINNs By Algebraic Inclusion of Boundary and Initial Conditions [0.1874930567916036]
"AI for Science" aims to solve fundamental scientific problems using AI techniques.
In this work, we explore changing the model being trained from a plain neural network to a non-linear transformation of it.
This reduces the number of terms in the loss function compared to the standard PINN losses.
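The summary leaves the transformation unspecified; one standard instance of the idea, shown here purely as a hedged sketch, bakes the initial condition u(x, 0) = g(x) into the model algebraically so that the corresponding loss term can be dropped.

```python
# Hypothetical sketch: instead of training net(x, t) directly and
# penalizing the initial condition in the loss, train the transformed
# model u(x, t) = g(x) + t * net(x, t), which satisfies u(x, 0) = g(x)
# exactly. Only the PDE residual (and any remaining boundary terms)
# stays in the loss.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def g(x):
    return torch.sin(x)  # assumed initial condition

def u(x, t):
    return g(x) + t * net(torch.stack([x, t], dim=-1)).squeeze(-1)
```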
arXiv Detail & Related papers (2024-07-30T11:19:48Z)
- Exact Enforcement of Temporal Continuity in Sequential Physics-Informed Neural Networks [0.0]
We introduce a method to enforce continuity between successive time segments via a solution ansatz.
The method is tested for a number of benchmark problems involving both linear and non-linear PDEs.
The numerical experiments conducted with the proposed method demonstrated superior convergence and accuracy over both traditional PINNs and the soft-constrained counterparts.
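The ansatz itself is not given in the summary; one plausible form, used here only as an illustration, makes each segment's model start exactly from the previous segment's terminal state.

```python
# Assumed continuity ansatz (not necessarily the paper's exact form):
# on the segment starting at t_k, define
#   u_k(x, t) = u_prev(x, t_k) + (t - t_k) * net_k(x, t),
# so u_k(x, t_k) matches the previous segment's end state exactly and
# no soft continuity penalty is needed.
import torch

def make_segment_model(net_k, u_prev, t_k):
    def u_k(x, t):
        start = u_prev(x, torch.full_like(t, t_k))
        return start + (t - t_k) * net_k(torch.stack([x, t], dim=-1)).squeeze(-1)
    return u_k
```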
arXiv Detail & Related papers (2024-02-15T17:41:02Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- iPINNs: Incremental learning for Physics-informed neural networks [66.4795381419701]
Physics-informed neural networks (PINNs) have recently become a powerful tool for solving partial differential equations (PDEs).
We propose incremental PINNs that can learn multiple tasks sequentially without additional parameters for new tasks and improve performance for every equation in the sequence.
Our approach learns multiple PDEs starting from the simplest one by creating its own subnetwork for each PDE and allowing each subnetwork to overlap with previously learned subnetworks.
arXiv Detail & Related papers (2023-04-10T20:19:20Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly-complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
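To illustrate the mechanism (a sketch under assumed notation, not the paper's implementation): for the Gaussian-smoothed model f_sigma(x) = E[f(x + eps)] with eps ~ N(0, sigma^2), Stein's identity expresses the first and second derivatives as expectations of forward evaluations only, so no back-propagation through the network is needed.

```python
# Monte Carlo sketch of back-propagation-free derivatives via Stein's
# identity: d/dx f_sigma(x) = E[eps * f(x + eps)] / sigma^2 and
# d^2/dx^2 f_sigma(x) = E[(eps^2 - sigma^2) * f(x + eps)] / sigma^4.
# sigma, the sample count, and the baseline trick are assumptions.
import numpy as np

def stein_derivatives(f, x, sigma=0.1, n_samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, size=n_samples)
    df = f(x + eps) - f(x)  # subtracting f(x) is a variance-reducing baseline
    d1 = np.mean(eps * df) / sigma**2
    d2 = np.mean((eps**2 - sigma**2) * df) / sigma**4
    return d1, d2

# Sanity check on f = sin: as sigma -> 0 the estimates approach cos(x), -sin(x).
print(stein_derivatives(np.sin, 0.3))
```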
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Scientific Machine Learning through Physics-Informed Neural Networks: Where we are and What's next [5.956366179544257]
Physics-Informed Neural Networks (PINNs) are neural networks (NNs) that encode model equations.
PINNs are nowadays used to solve PDEs, fractional equations, and integro-differential equations.
arXiv Detail & Related papers (2022-01-14T19:05:44Z)
- Learning in Sinusoidal Spaces with Physics-Informed Neural Networks [22.47355575565345]
A physics-informed neural network (PINN) uses physics-augmented loss functions to ensure its output is consistent with fundamental physics laws.
It turns out to be difficult to train an accurate PINN model for many problems in practice.
arXiv Detail & Related papers (2021-09-20T07:42:41Z)
- Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.