Physical Activation Functions (PAFs): An Approach for More Efficient
Induction of Physics into Physics-Informed Neural Networks (PINNs)
- URL: http://arxiv.org/abs/2205.14630v2
- Date: Sat, 17 Jun 2023 07:42:30 GMT
- Authors: Jassem Abbasi (1), Pål Østebø Andersen (1) ((1) University of Stavanger)
- Abstract summary: Physical Activation Functions (PAFs) are activation functions whose mathematical form is inherited from the physics of the investigated phenomenon, for example from terms in the analytical solution or from the initial or boundary conditions of the PDE system. Using PAFs yields PINNs with less complexity and greater validity over longer prediction ranges.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, the evolution of Physics-Informed Neural Networks (PINNs) has aimed to close the gap between Deep Learning (DL) methods and analytical or numerical approaches in scientific computing. However, the training of PINNs and the optimal interleaving of physical models still present many complications. Here, we introduce the concept of Physical Activation Functions (PAFs). Instead of using general activation functions (AFs) such as ReLU, tanh, and sigmoid for all neurons, this concept proposes AFs whose mathematical expression is inherited from the physical laws of the investigated phenomenon. The formula of a PAF may be inspired by the terms in the analytical solution of the problem. We show that PAFs can be inspired by any mathematical formula related to the investigated phenomenon, such as the initial or boundary conditions of the PDE system. We validated the advantages of PAFs for several PDEs, including the harmonic oscillator, the Burgers equation, the advection-convection equation, and heterogeneous diffusion equations. The main advantage of PAFs is more efficient constraining and interleaving of PINNs with the investigated physical phenomena and their underlying mathematical models. This added constraint significantly improved the predictions of PINNs on test data outside the training distribution. Furthermore, applying PAFs reduced the size of the PINNs by up to 75% in different cases. The value of the loss terms was also reduced by one to two orders of magnitude in some cases, which is noteworthy for improving the training of PINNs, and the number of iterations required to find the optimum values was significantly reduced. We conclude that using PAFs helps to generate PINNs with less complexity and much greater validity over longer prediction ranges.
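To make the construction concrete, below is a minimal sketch of a PAF in PyTorch. The sine activation with a trainable frequency is an illustrative choice for an oscillatory problem such as the harmonic oscillator; the paper derives PAFs from the problem's own analytical terms or initial/boundary conditions, so the exact form is problem-specific and this is not the authors' exact architecture.

```python
# Minimal sketch of a Physical Activation Function (PAF) in PyTorch.
# The sine PAF is an illustrative choice for a harmonic-oscillator PINN;
# the paper's PAFs are derived from the problem's analytical terms or
# initial/boundary conditions, so the form below is an assumption.
import torch
import torch.nn as nn

class SinePAF(nn.Module):
    """Activation inherited from the oscillatory form of the solution."""
    def __init__(self, omega: float = 1.0):
        super().__init__()
        # A trainable frequency lets the network adapt the physical scale.
        self.omega = nn.Parameter(torch.tensor(omega))

    def forward(self, x):
        return torch.sin(self.omega * x)

class PAFPinn(nn.Module):
    def __init__(self, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), SinePAF(),      # PAF in place of tanh
            nn.Linear(width, width), nn.Tanh(),  # generic AFs can be mixed in
            nn.Linear(width, 1),
        )

    def forward(self, t):
        return self.net(t)

model = PAFPinn()
u = model(torch.linspace(0.0, 1.0, 100).unsqueeze(-1))  # trial solution u(t)
```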
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to a 48% performance gain on the PDE datasets, as sketched below.
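As a rough, hypothetical reading of the idea (the paper's actual ProdLayer may be implemented differently), a product-term layer could augment a linear map with learned pairwise products of channels, the kind of terms dimensional analysis produces:

```python
# Hypothetical sketch in the spirit of DimOL's ProdLayer: alongside a
# linear map, form products of learned channel projections. The real
# ProdLayer's implementation may differ; see the paper for details.
import torch
import torch.nn as nn

class ProdLayerSketch(nn.Module):
    def __init__(self, channels: int, n_pairs: int = 4):
        super().__init__()
        self.linear = nn.Linear(channels, channels)
        # Two learned projections whose element-wise product supplies
        # quadratic (product) features.
        self.p = nn.Linear(channels, n_pairs)
        self.q = nn.Linear(channels, n_pairs)
        self.mix = nn.Linear(n_pairs, channels)

    def forward(self, x):
        return self.linear(x) + self.mix(self.p(x) * self.q(x))

layer = ProdLayerSketch(channels=64)
y = layer(torch.randn(8, 64))  # drop-in within an FNO/Transformer block
```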
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Understanding and Mitigating Extrapolation Failures in Physics-Informed Neural Networks
We study the extrapolation behavior of PINNs on a representative set of PDEs of different types.
We find that failure to extrapolate is not caused by high frequencies in the solution function, but rather by shifts in the support of the Fourier spectrum over time.
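The reported failure mechanism suggests a simple diagnostic, sketched here under the assumption that solution snapshots are available on a uniform grid; this is not the paper's code, just one way to observe the spectral-support shift:

```python
# Track how the support of the Fourier spectrum of u(x, t) moves over
# time, the extrapolation-failure mechanism the paper identifies.
import numpy as np

def spectral_support(u_x, frac: float = 0.99):
    """Smallest frequency index containing `frac` of the spectral energy."""
    power = np.abs(np.fft.rfft(u_x)) ** 2
    cumulative = np.cumsum(power) / power.sum()
    return int(np.searchsorted(cumulative, frac))

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
for t in (0.0, 0.5, 1.0):
    u = np.sin((1 + 4 * t) * x)    # toy signal whose frequency drifts in time
    print(t, spectral_support(u))  # the support index grows with t
```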
arXiv Detail & Related papers (2023-06-15T20:08:42Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs are trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
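For orientation, one implicit (proximal) SGD step can be sketched as follows; the implicit update theta' = theta - lr * grad L(theta') is solved here by a few inner gradient iterations, which is only one of several ways to handle the subproblem and not necessarily the paper's:

```python
# Implicit SGD as a proximal-point step: theta' minimises
# L(theta) + ||theta - theta_k||^2 / (2 lr), solved approximately below.
import torch

def implicit_sgd_step(params, loss_fn, lr=0.1, inner_steps=10, inner_lr=0.05):
    anchor = [p.detach().clone() for p in params]
    for _ in range(inner_steps):
        # Proximal objective: L(theta) + ||theta - anchor||^2 / (2 lr)
        prox = loss_fn() + sum(
            ((p - a) ** 2).sum() for p, a in zip(params, anchor)
        ) / (2 * lr)
        grads = torch.autograd.grad(prox, params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= inner_lr * g

theta = torch.tensor([3.0], requires_grad=True)
implicit_sgd_step([theta], lambda: (theta ** 2).sum())
print(theta)  # ~2.5: pulled toward the minimiser 0, damped by the prox term
```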
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- RBF-MGN: Solving spatiotemporal PDEs with Physics-informed Graph Neural Network
We propose a novel framework based on graph neural networks (GNNs) and the radial basis function finite difference (RBF-FD) method.
RBF-FD is used to construct a high-precision difference format of the differential equations to guide model training.
We illustrate the generalizability, accuracy, and efficiency of the proposed algorithms on different PDE parameters.
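The RBF-FD building block this framework relies on is standard: differentiation weights w solve A w = b, where A is the RBF interpolation matrix on a stencil and b applies the operator to each RBF at the evaluation point. A compact Gaussian-RBF sketch for d^2/dx^2 (the textbook construction, not the authors' code):

```python
# RBF-FD weights for the second derivative with Gaussian RBFs
# phi(r) = exp(-(eps*r)^2) on a small 1D stencil.
import numpy as np

def rbf_fd_weights_d2(x_stencil, x_center, eps=2.0):
    r2 = (x_stencil[:, None] - x_stencil[None, :]) ** 2
    A = np.exp(-eps**2 * r2)                        # phi(|x_i - x_j|)
    d2 = (x_center - x_stencil) ** 2
    # d^2/dx^2 of phi(x - x_j), evaluated at the stencil centre
    b = (4 * eps**4 * d2 - 2 * eps**2) * np.exp(-eps**2 * d2)
    return np.linalg.solve(A, b)

x = np.array([-0.1, 0.0, 0.1])
w = rbf_fd_weights_d2(x, 0.0)
print(w @ x**2)  # approximates u''(0) = 2 for u(x) = x^2
```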
arXiv Detail & Related papers (2022-12-06T10:08:02Z)
- Replacing Automatic Differentiation by Sobolev Cubatures fastens Physics Informed Neural Nets and strengthens their Approximation Power
We present a novel class of approximations for variational losses that is applicable to the training of physics-informed neural nets (PINNs).
The loss computation rests on an extension of Gauss-Legendre cubatures, which we term Sobolev cubatures, replacing automatic differentiation (A.D.).
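The underlying quadrature idea can be sketched briefly: evaluate an integral loss with Gauss-Legendre nodes and weights rather than random collocation. The Sobolev extension with spectral differentiation matrices is omitted here, so this is only the plain Gauss-Legendre part:

```python
# Gauss-Legendre quadrature of an integral loss on [-1, 1].
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(16)

def l2_loss(residual):
    """Gauss-Legendre approximation of the integral of residual(x)^2."""
    return np.sum(weights * residual(nodes) ** 2)

# Example: r(x) = sin(pi x); the exact integral of r^2 over [-1, 1] is 1.
print(l2_loss(lambda x: np.sin(np.pi * x)))  # ~1.0
```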
arXiv Detail & Related papers (2022-11-23T11:23:08Z)
- Neural tangent kernel analysis of PINN for advection-diffusion equation
Physics-informed neural networks (PINNs) numerically approximate the solution of a partial differential equation (PDE).
PINNs are known to struggle even in simple cases where the closed-form analytical solution is available.
This work focuses on a systematic analysis of PINNs for the linear advection-diffusion equation (LAD) using the Neural Tangent Kernel (NTK) theory.
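For reference, the empirical NTK used in such analyses is the Gram matrix K_ij = <grad_theta f(x_i), grad_theta f(x_j)>. A small sketch computing it by explicit per-sample gradients (practical only for small networks, and not the paper's code):

```python
# Empirical NTK Gram matrix via per-sample parameter gradients.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
xs = torch.linspace(-1, 1, 8).unsqueeze(-1)

def flat_grad(x):
    grads = torch.autograd.grad(net(x).squeeze(), net.parameters())
    return torch.cat([g.reshape(-1) for g in grads])

J = torch.stack([flat_grad(x.unsqueeze(0)) for x in xs])  # (n, n_params)
K = J @ J.T  # empirical NTK; its spectrum governs training dynamics
print(K.shape)  # torch.Size([8, 8])
```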
arXiv Detail & Related papers (2022-11-21T18:35:14Z)
- Momentum Diminishes the Effect of Spectral Bias in Physics-Informed Neural Networks
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
However, they often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
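The update rule under study is plain heavy-ball momentum; written out as a sketch (the paper analyzes its effect through the NTK rather than proposing new code):

```python
# Heavy-ball momentum update: v <- beta*v - lr*g, theta <- theta + v.
import torch

def sgdm_step(params, grads, velocity, lr=1e-3, beta=0.9):
    for p, g, v in zip(params, grads, velocity):
        v.mul_(beta).add_(g, alpha=-lr)  # v = beta * v - lr * g
        p.add_(v)                        # theta = theta + v

theta = [torch.zeros(3)]
vel = [torch.zeros(3)]
sgdm_step(theta, [torch.ones(3)], vel)  # one step on a constant gradient
```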
arXiv Detail & Related papers (2022-06-29T19:03:10Z)
- Revisiting PINNs: Generative Adversarial Physics-informed Neural Networks and Point-weighting Method
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs).
We propose the generative adversarial physics-informed neural network (GA-PINN), which integrates the generative adversarial (GA) mechanism with the structure of PINNs.
Inspired by the weighting strategy of the AdaBoost method, we then introduce a point-weighting (PW) method to improve the training efficiency of PINNs.
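A hedged sketch of what an AdaBoost-style point-weighting scheme for collocation points can look like; the paper's exact normalisation and schedule may differ:

```python
# Boost the loss weights of collocation points with large PDE residuals,
# then renormalise so the weights sum to one.
import torch

def reweight(residuals, weights, temperature=1.0):
    new_w = weights * torch.exp(temperature * residuals.abs().detach())
    return new_w / new_w.sum()

n = 100
weights = torch.full((n,), 1.0 / n)
residuals = torch.randn(n)               # stand-in PDE residuals
weights = reweight(residuals, weights)
loss = (weights * residuals ** 2).sum()  # weighted PINN residual loss
```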
arXiv Detail & Related papers (2022-05-18T06:50:44Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
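The key identity can be made concrete: for the Gaussian-smoothed model f_sigma(x) = E[f(x + sigma*eps)], Stein's identity gives f_sigma''(x) = E[f(x + sigma*eps) * (eps^2 - 1)] / sigma^2, so second derivatives need no nested back-propagation. A numpy sketch with illustrative (not the paper's) variance-reduction choices:

```python
# Stein-identity estimator of the second derivative of a Gaussian-smoothed
# function. Antithetic averaging and subtracting f(x) leave the expectation
# unchanged (E[eps^2 - 1] = 0) while cutting the variance.
import numpy as np

rng = np.random.default_rng(0)

def second_derivative_stein(f, x, sigma=0.1, n_samples=100_000):
    eps = rng.standard_normal(n_samples)
    avg = 0.5 * (f(x + sigma * eps) + f(x - sigma * eps)) - f(x)
    return np.mean(avg * (eps**2 - 1)) / sigma**2

print(second_derivative_stein(np.sin, 0.5))  # ~ -sin(0.5) = -0.479
```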
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Characterizing possible failure modes in physics-informed neural networks
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Physics-Informed Neural Network Method for Solving One-Dimensional Advection Equation Using PyTorch
The PINN approach allows training neural networks while respecting the PDEs as a strong constraint in the optimization.
In standard small-scale circulation simulations, it is shown that the conventional approach incorporates a pseudo-diffusive effect that is almost as large as the effect of the turbulent diffusion model.
Of all the schemes tested, only the PINN approximation accurately predicted the outcome.
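A minimal sketch of the PINN residual for the 1D advection equation u_t + c u_x = 0 in PyTorch, with derivatives taken by autograd; the architecture, sampling, and treatment of initial/boundary terms here are illustrative rather than the paper's setup:

```python
# PDE-residual term of a PINN loss for u_t + c u_x = 0.
import torch
import torch.nn as nn

c = 1.0
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

def pde_residual(x, t):
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=-1))
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    return u_t + c * u_x  # zero for the exact solution

x = torch.rand(64, 1)
t = torch.rand(64, 1)
loss = (pde_residual(x, t) ** 2).mean()  # the PDE term of the PINN loss
loss.backward()
```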
arXiv Detail & Related papers (2021-03-15T05:39:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.