LSA-PINN: Linear Boundary Connectivity Loss for Solving PDEs on Complex
Geometry
- URL: http://arxiv.org/abs/2302.01518v1
- Date: Fri, 3 Feb 2023 03:26:08 GMT
- Title: LSA-PINN: Linear Boundary Connectivity Loss for Solving PDEs on Complex
Geometry
- Authors: Jian Cheng Wong, Pao-Hsiung Chiu, Chinchun Ooi, My Ha Dao, Yew-Soon
Ong
- Abstract summary: We present a novel loss formulation for efficient learning of complex dynamics from governing physics using physics-informed neural networks (PINNs).
In our experiments, existing versions of PINNs are seen to learn poorly in many problems, especially for complex geometries.
We propose a new Boundary Connectivity (BCXN) loss function which provides a linear local structure approximation (LSA) of the gradient behaviors at the boundary for PINNs.
- Score: 15.583172926806148
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel loss formulation for efficient learning of complex
dynamics from governing physics, typically described by partial differential
equations (PDEs), using physics-informed neural networks (PINNs). In our
experiments, existing versions of PINNs are seen to learn poorly in many
problems, especially for complex geometries, as it becomes increasingly
difficult to establish an appropriate sampling strategy in the near-boundary
region. Overly dense sampling can impede training convergence when the local
gradient behaviors are too complex to be adequately modelled by PINNs.
Conversely, if the samples are too sparse, existing PINNs tend to overfit the
near-boundary region, leading to incorrect solutions. To prevent such issues,
we propose a new Boundary Connectivity (BCXN) loss function which provides a
linear local structure approximation (LSA) of the gradient behaviors at the
boundary for PINNs. Our BCXN loss implicitly imposes local structure during
training, thus facilitating fast physics-informed learning across entire
problem domains with an order of magnitude fewer training samples. The
resulting LSA-PINN method achieves errors a few orders of magnitude smaller
than those of existing methods in terms of the standard L2-norm metric, while
using dramatically fewer training samples and iterations. LSA-PINN imposes no
differentiability requirements on the networks, and we demonstrate its
benefits and ease of implementation on both the multi-layer perceptron and
convolutional neural network variants commonly used in the PINN literature.
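The exact form of the BCXN loss is not given in this summary, but the idea of a linear local structure approximation at the boundary can be sketched as a penalty that keeps the prediction at a near-boundary sample on the straight line between the boundary value and a neighboring interior value. The following NumPy sketch is illustrative only; the function name, arguments, and distance parametrization are hypothetical and do not come from the paper:

```python
import numpy as np

def bcxn_loss(u_boundary, u_near, u_interior, d_near, d_interior):
    """Hypothetical linear boundary-connectivity penalty (sketch).

    Enforces that the prediction at a near-boundary point lies on the
    line through the boundary value and an interior value, i.e. a linear
    local structure approximation (LSA) of the boundary gradient.

    d_near:     distance from the boundary to the near-boundary point
    d_interior: distance from the boundary to the interior point
                (d_interior > d_near)
    """
    t = d_near / d_interior
    # Linear interpolation between boundary and interior predictions.
    u_linear = (1.0 - t) * u_boundary + t * u_interior
    # Mean squared deviation from the linear local structure.
    return np.mean((u_near - u_linear) ** 2)
```

In an actual PINN this penalty would be added to the PDE residual and boundary-condition losses; because it only compares network outputs at sampled points, it places no differentiability requirement on the network itself, consistent with the abstract's claim.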
Related papers
- General-Kindred Physics-Informed Neural Network to the Solutions of Singularly Perturbed Differential Equations [11.121415128908566]
We propose the General-Kindred Physics-Informed Neural Network (GKPINN) for solving Singularly Perturbed Differential Equations (SPDEs).
This approach utilizes prior knowledge of the boundary layer from the equation and establishes a novel network to assist the PINN in approximating the boundary layer.
Our findings underscore the exceptional performance of GKPINN, which reduces the $L_2$ error by two to four orders of magnitude compared to the established PINN methodology.
arXiv Detail & Related papers (2024-08-27T02:03:22Z) - RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm, region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z) - Learning Traveling Solitary Waves Using Separable Gaussian Neural
Networks [0.9065034043031668]
We apply a machine-learning approach to learn traveling solitary waves across various families of partial differential equations (PDEs).
Our approach integrates a novel interpretable neural network (NN) architecture into the framework of Physics-Informed Neural Networks (PINNs).
arXiv Detail & Related papers (2024-03-07T20:16:18Z) - Adversarial Training for Physics-Informed Neural Networks [4.446564162927513]
We propose an adversarial training strategy for PINNs, termed AT-PINNs.
AT-PINNs enhance the robustness of PINNs by fine-tuning the model with adversarial samples.
We apply AT-PINNs to the elliptic equation with multi-scale coefficients, the Poisson equation with multi-peak solutions, the Burgers equation with sharp solutions, and the Allen-Cahn equation.
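The summary does not say how the adversarial samples are generated. One common construction, shown here purely as an illustration, is a small step that moves each collocation point in the direction that increases the PDE residual; all names are hypothetical, and the gradient is estimated with finite differences rather than automatic differentiation:

```python
import numpy as np

def adversarial_collocation(x, residual_fn, eps=1e-2, h=1e-5):
    """Sketch of adversarial collocation sampling (hypothetical names).

    Moves each 1-D collocation point a step of size eps in the sign of
    the residual's gradient, so subsequent fine-tuning concentrates on
    regions where the PDE residual grows fastest.
    """
    # Central finite-difference estimate of d(residual)/dx per point.
    grad = (residual_fn(x + h) - residual_fn(x - h)) / (2.0 * h)
    # FGSM-style signed step toward higher residual.
    return x + eps * np.sign(grad)
```

For example, with a toy residual `r(x) = x**2` the points are pushed away from the minimum at the origin, where the residual is smallest.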
arXiv Detail & Related papers (2023-10-18T08:28:43Z) - PINNsFormer: A Transformer-Based Framework For Physics-Informed Neural Networks [22.39904196850583]
Physics-Informed Neural Networks (PINNs) have emerged as a promising deep learning framework for approximating numerical solutions to partial differential equations (PDEs).
We introduce a novel Transformer-based framework, termed PINNsFormer, designed to address this limitation.
PINNsFormer achieves superior generalization ability and accuracy across various scenarios, including PINNs failure modes and high-dimensional PDEs.
arXiv Detail & Related papers (2023-07-21T18:06:27Z) - Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural
Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly-complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z) - Enforcing Continuous Physical Symmetries in Deep Learning Network for
Solving Partial Differential Equations [3.6317085868198467]
We introduce a new method, the symmetry-enhanced physics-informed neural network (SPINN), where the invariant surface conditions induced by the Lie symmetries of PDEs are embedded into the loss function of the PINN.
We show that SPINN performs better than PINN with fewer training points and a simpler neural network architecture.
arXiv Detail & Related papers (2022-06-19T00:44:22Z) - Revisiting PINNs: Generative Adversarial Physics-informed Neural
Networks and Point-weighting Method [70.19159220248805]
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs).
We propose the generative adversarial neural network (GA-PINN), which integrates the generative adversarial (GA) mechanism with the structure of PINNs.
Inspired by the weighting strategy of the AdaBoost method, we then introduce a point-weighting (PW) method to improve the training efficiency of PINNs.
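An AdaBoost-inspired point-weighting scheme can be sketched as assigning larger training weights to collocation points with larger PDE residuals, so that hard points dominate the next round of training. This is an illustrative guess at the mechanism, not the paper's actual formula:

```python
import numpy as np

def point_weights(residuals, temperature=1.0):
    """AdaBoost-inspired collocation-point weighting (sketch).

    Exponentially up-weights points with large absolute PDE residuals;
    the weights are normalized to sum to one so they can scale each
    point's contribution to the training loss.
    """
    w = np.exp(np.abs(residuals) / temperature)
    return w / w.sum()
```

The `temperature` parameter (an assumption of this sketch) controls how aggressively hard points are emphasized: small values concentrate nearly all weight on the worst-fit points, large values approach uniform weighting.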
arXiv Detail & Related papers (2022-05-18T06:50:44Z) - Improved Training of Physics-Informed Neural Networks with Model
Ensembles [81.38804205212425]
We propose to expand the solution interval gradually to make the PINN converge to the correct solution.
All ensemble members converge to the same solution in the vicinity of observed data.
We show experimentally that the proposed method can improve the accuracy of the found solution.
arXiv Detail & Related papers (2022-04-11T14:05:34Z) - Characterizing possible failure modes in physics-informed neural
networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z) - Local Propagation in Constraint-based Neural Network [77.37829055999238]
We study a constraint-based representation of neural network architectures.
We investigate a simple optimization procedure that is well suited to fulfil the so-called architectural constraints.
arXiv Detail & Related papers (2020-02-18T16:47:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.