CAN-PINN: A Fast Physics-Informed Neural Network Based on
Coupled-Automatic-Numerical Differentiation Method
- URL: http://arxiv.org/abs/2110.15832v1
- Date: Fri, 29 Oct 2021 14:52:46 GMT
- Title: CAN-PINN: A Fast Physics-Informed Neural Network Based on
Coupled-Automatic-Numerical Differentiation Method
- Authors: Pao-Hsiung Chiu, Jian Cheng Wong, Chinchun Ooi, My Ha Dao, Yew-Soon
Ong
- Abstract summary: Novel physics-informed neural network (PINN) methods for coupling neighboring support points and automatic differentiation (AD) through Taylor series expansion are proposed.
The proposed coupled-automatic-numerical differentiation framework, labeled as can-PINN, unifies the advantages of AD and ND, providing more robust and efficient training than AD-based PINNs.
- Score: 17.04611875126544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, novel physics-informed neural network (PINN) methods for
coupling neighboring support points and automatic differentiation (AD) through
Taylor series expansion are proposed to allow efficient training with improved
accuracy. The computation of the differential operators required for PINN loss
evaluation at collocation points is conventionally obtained via AD. Although
AD has the advantage of computing exact gradients at any point, such PINNs
can only achieve high accuracy with large numbers of collocation points;
otherwise they are prone to optimizing towards unphysical solutions. To
make PINN training fast, the dual ideas of using numerical differentiation
(ND)-inspired method and coupling it with AD are employed to define the loss
function. The ND-based formulation for training loss can strongly link
neighboring collocation points to enable efficient training in sparse sample
regimes, but its accuracy is restricted by the interpolation scheme. The
proposed coupled-automatic-numerical differentiation framework, labeled as
can-PINN, unifies the advantages of AD and ND, providing more robust and
efficient training than AD-based PINNs, while further improving accuracy by up
to 1-2 orders of magnitude relative to ND-based PINNs. As a proof-of-concept
demonstration of this can-scheme on fluid dynamics problems, two
numerically-inspired instantiations of can-PINN schemes for the convection and
pressure gradient terms were derived to solve the incompressible Navier-Stokes
(N-S) equations. The superior performance of can-PINNs is demonstrated on
several challenging problems, including flow mixing phenomena, lid-driven
flow in a cavity, and channel flow over a backward-facing step. The results
reveal that for challenging problems like these, can-PINNs can consistently
achieve very good accuracy whereas conventional AD-based PINNs fail.
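To make the AD-ND coupling idea concrete, the following sketch (plain Python, not the authors' code) shows how a Taylor-series relation can link function values at neighboring collocation points to pointwise derivatives. A classical fourth-order compact (Padé-type) relation is used here purely as an illustration; the can-schemes derived in the paper differ in form, and the analytic derivative of sin stands in for a network's AD gradient.

```python
import math

def coupled_residual(u, du, x, h):
    """Residual of a Taylor-series-derived compact relation that links
    function values at neighboring points (the ND side) to derivatives
    (the AD side):
        (u(x+h) - u(x-h)) / (2h)  ==  (du(x-h) + 4*du(x) + du(x+h)) / 6
    For smooth u this identity holds to O(h^4); used as a loss term, it
    ties each collocation point to its neighbors through derivatives."""
    nd = (u(x + h) - u(x - h)) / (2.0 * h)            # finite-difference side
    ad = (du(x - h) + 4.0 * du(x) + du(x + h)) / 6.0  # derivative (AD) side
    return nd - ad

# Stand-in for a trained network and its AD derivative: u = sin, u' = cos.
x, h = 0.5, 0.1
r = coupled_residual(math.sin, math.cos, x, h)        # O(h^4) consistency error
# Compare with the O(h^2) error of a bare central difference:
pure_nd_err = (math.sin(x + h) - math.sin(x - h)) / (2.0 * h) - math.cos(x)
```

Minimizing such a coupled residual alongside the usual boundary and PDE terms constrains the solution between sparse collocation points, which is the mechanism a can-PINN-style loss exploits to train efficiently in sparse-sample regimes.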
Related papers
- Dual Cone Gradient Descent for Training Physics-Informed Neural Networks [0.0]
Physics-informed neural networks (PINNs) have emerged as a prominent approach for solving partial differential equations.
We propose a novel framework, Dual Cone Gradient Descent (DCGD), which adjusts the direction of the updated gradient to ensure it falls within a cone region.
arXiv Detail & Related papers (2024-09-27T03:27:46Z)
- RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm as region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z)
- Domain decomposition-based coupling of physics-informed neural networks via the Schwarz alternating method [0.0]
Physics-informed neural networks (PINNs) are appealing data-driven tools for solving and inferring solutions to nonlinear partial differential equations (PDEs).
This paper explores the use of the Schwarz alternating method as a means to couple PINNs with each other and with conventional numerical models.
arXiv Detail & Related papers (2023-11-01T01:59:28Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
However, PINNs are prone to training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs to improve the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly-complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Failure-informed adaptive sampling for PINNs [5.723850818203907]
Physics-informed neural networks (PINNs) have emerged as an effective technique for solving PDEs in a wide range of domains.
Recent research has demonstrated, however, that the performance of PINNs can vary dramatically with different sampling procedures.
We present an adaptive approach termed failure-informed PINNs, which is inspired by the viewpoint of reliability analysis.
arXiv Detail & Related papers (2022-10-01T13:34:41Z)
- Revisiting PINNs: Generative Adversarial Physics-informed Neural Networks and Point-weighting Method [70.19159220248805]
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs).
We propose the generative adversarial neural network (GA-PINN), which integrates the generative adversarial (GA) mechanism with the structure of PINNs.
Inspired from the weighting strategy of the Adaboost method, we then introduce a point-weighting (PW) method to improve the training efficiency of PINNs.
arXiv Detail & Related papers (2022-05-18T06:50:44Z)
- Multi-Objective Loss Balancing for Physics-Informed Deep Learning [0.0]
We study the role of correctly weighting the combination of multiple competing loss terms for training PINNs effectively.
We propose a novel self-adaptive loss balancing of PINNs called ReLoBRaLo.
Our simulation studies show that ReLoBRaLo training is much faster and achieves higher accuracy than training PINNs with other balancing methods.
arXiv Detail & Related papers (2021-10-19T09:00:12Z)
- Efficient training of physics-informed neural networks via importance sampling [2.9005223064604078]
Physics-Informed Neural Networks (PINNs) are a class of deep neural networks that are trained to solve systems governed by partial differential equations (PDEs).
We show that an importance sampling approach will improve the convergence behavior of PINNs training.
arXiv Detail & Related papers (2021-04-26T02:45:10Z)
- Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to solve the AC optimal power flow (AC-OPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which uses dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.