Learning in Sinusoidal Spaces with Physics-Informed Neural Networks
- URL: http://arxiv.org/abs/2109.09338v1
- Date: Mon, 20 Sep 2021 07:42:41 GMT
- Title: Learning in Sinusoidal Spaces with Physics-Informed Neural Networks
- Authors: Jian Cheng Wong, Chinchun Ooi, Abhishek Gupta, Yew-Soon Ong
- Abstract summary: A physics-informed neural network (PINN) uses physics-augmented loss functions to ensure its output is consistent with fundamental physics laws.
It turns out to be difficult to train an accurate PINN model for many problems in practice.
- Score: 22.47355575565345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A physics-informed neural network (PINN) uses physics-augmented loss
functions, e.g., incorporating the residual term from governing differential
equations, to ensure its output is consistent with fundamental physics laws.
However, it turns out to be difficult to train an accurate PINN model for many
problems in practice. In this paper, we address this issue through a novel
perspective on the merits of learning in sinusoidal spaces with PINNs. By
analyzing asymptotic behavior at model initialization, we first prove that a
PINN of increasing size (i.e., width and depth) induces a bias towards flat
outputs. Notably, a flat function is a trivial solution to many physics
differential equations, hence, deceptively minimizing the residual term of the
augmented loss while being far from the true solution. We then show that the
sinusoidal mapping of inputs, in an architecture we label as sf-PINN, is able
to elevate output variability, thus avoiding being trapped in the deceptive
local minimum. In addition, the level of variability can be effectively
modulated to match high-frequency patterns in the problem at hand. A key facet
of this paper is the comprehensive empirical study that demonstrates the
efficacy of learning in sinusoidal spaces with PINNs for a wide range of
forward and inverse modelling problems spanning multiple physics domains.
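To make the idea concrete, below is a minimal sketch of a PINN with a sinusoidal input mapping in the spirit of sf-PINN, applied to the toy ODE u'(x) = cos(5x) with u(0) = 0. The frequency scale `omega`, the layer sizes, and the toy equation are illustrative assumptions, not the exact settings from the paper.

```python
import torch
import torch.nn as nn

class SFPINN(nn.Module):
    def __init__(self, in_dim=1, hidden=64, omega=10.0):
        super().__init__()
        self.first = nn.Linear(in_dim, hidden)  # feeds the sinusoid
        self.omega = omega                      # modulates output variability
        self.mlp = nn.Sequential(
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        # Sinusoidal space: lifts the network away from the flat-output
        # regime that deceptively minimizes the physics residual.
        return self.mlp(torch.sin(self.omega * self.first(x)))

def pinn_loss(model, x, omega_f=5.0):
    # Physics-augmented loss for the toy ODE u'(x) = cos(omega_f * x), u(0)=0.
    x = x.detach().requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    residual = (du - torch.cos(omega_f * x)).pow(2).mean()
    bc = model(torch.zeros(1, 1)).pow(2).mean()
    return residual + bc

model = SFPINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.linspace(0.0, 1.0, 128).unsqueeze(-1)   # collocation points
for _ in range(2000):
    opt.zero_grad()
    pinn_loss(model, x).backward()
    opt.step()
```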
Related papers
- Improving PINNs By Algebraic Inclusion of Boundary and Initial Conditions [0.1874930567916036]
"AI for Science" aims to solve fundamental scientific problems using AI techniques.
In this work, we explore changing the model being trained from a plain neural network to a non-linear transformation of it.
This reduces the number of terms in the loss function relative to the standard PINN losses.
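As an illustration of algebraically enforcing conditions, the sketch below bakes Dirichlet boundary values into the model output with a classical distance-function ansatz, so the boundary term drops out of the loss entirely. This specific transformation is a common textbook construction and may differ from the one proposed in this paper.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def constrained_u(x, a=0.0, b=1.0, ua=0.0, ub=1.0):
    # g(x) interpolates the boundary values exactly, and (x - a)(b - x)
    # vanishes at both endpoints, so u(a) = ua and u(b) = ub hold by
    # construction and no boundary penalty is needed in the loss.
    g = ua + (ub - ua) * (x - a) / (b - a)
    return g + (x - a) * (b - x) * net(x)

u = constrained_u(torch.rand(64, 1))  # satisfies the BCs for any net weights
```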
arXiv Detail & Related papers (2024-07-30T11:19:48Z)
- Collocation-based Robust Variational Physics-Informed Neural Networks (CRVPINN) [0.0]
Physics-Informed Neural Networks (PINNs) have been successfully applied to solve Partial Differential Equations (PDEs).
Recent work of Robust Variational Physics-Informed Neural Networks (RVPINNs) highlights the importance of conveniently translating the norms of the underlying continuum-level spaces to the discrete level.
In this work, we accelerate the implementation of RVPINN by establishing an LU factorization of the sparse Gram matrix in a point-collocation scheme, in the same spirit as the original PINNs.
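The cost saving hinges on factorizing once and reusing the factors across solves; a minimal SciPy sketch, using a placeholder sparse matrix rather than a real Gram matrix:

```python
import numpy as np
from scipy.sparse import identity
from scipy.sparse.linalg import splu

n = 200
gram = identity(n, format="csc") * 4.0   # placeholder sparse "Gram" matrix
lu = splu(gram)                          # factorize once...
for _ in range(3):
    rhs = np.random.rand(n)
    x = lu.solve(rhs)                    # ...then reuse for cheap solves
```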
arXiv Detail & Related papers (2024-01-04T14:42:29Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
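For intuition, an implicit step can be viewed as a proximal-point update, theta_{k+1} = argmin_theta L(theta) + ||theta - theta_k||^2 / (2*eta), solved approximately by inner iterations. The sketch below is a generic illustration of this idea, not the exact ISGD scheme of the cited paper.

```python
import torch

def implicit_step(params, loss_fn, eta=0.1, inner_iters=10, inner_lr=0.05):
    # Solve the proximal subproblem around the frozen anchor theta_k with a
    # few plain gradient iterations (an inexact implicit update).
    anchor = [p.detach().clone() for p in params]
    for _ in range(inner_iters):
        prox = sum(((p - a) ** 2).sum() for p, a in zip(params, anchor))
        loss = loss_fn() + prox / (2.0 * eta)
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= inner_lr * g

# Usage on a toy quadratic: pulls w toward 1 while staying near its anchor.
w = torch.randn(3, requires_grad=True)
implicit_step([w], lambda: ((w - 1.0) ** 2).sum())
```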
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
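A minimal sketch of the kind of interleaved spatial decomposition such a method might use; the 2x2 stagger factor here is an illustrative assumption, not the paper's configuration:

```python
import torch

field = torch.rand(1, 1, 64, 64)           # fine-resolution 2D field
# Interleaved 2x2 stagger: four 32x32 coarse sub-fields that jointly
# tile the fine grid and could be learned by parallel sub-networks.
offsets = [(i, j) for i in range(2) for j in range(2)]
subfields = [field[..., i::2, j::2] for i, j in offsets]
# The decomposition is lossless: the sub-fields reassemble the fine field.
reassembled = torch.empty_like(field)
for (i, j), sub in zip(offsets, subfields):
    reassembled[..., i::2, j::2] = sub
assert torch.equal(reassembled, field)
```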
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Reduced-PINN: An Integration-Based Physics-Informed Neural Networks for Stiff ODEs [0.0]
Physics-informed neural networks (PINNs) have recently received much attention due to their capabilities in solving both forward and inverse problems.
We propose a new PINN architecture, called Reduced-PINN, that utilizes a reduced-order integration method to enable the PINN to solve stiff chemical kinetics.
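One way to realize an integration-based residual is to compare the network state against a cumulative quadrature of the rate function rather than differentiating the network. The trapezoidal variant below is an illustrative guess, not necessarily the paper's reduced-order scheme.

```python
import torch

def integral_residual(u, f_vals, t):
    # u, f_vals, t: (n, 1) tensors on an increasing time grid. The residual
    # is u(t_i) - u(t_0) - integral of f over [t_0, t_i], with the integral
    # approximated by the cumulative trapezoidal rule.
    dt = t[1:] - t[:-1]
    cum = torch.cumsum(0.5 * (f_vals[1:] + f_vals[:-1]) * dt, dim=0)
    return u[1:] - u[:1] - cum   # zero wherever the integral form of the ODE holds
```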
arXiv Detail & Related papers (2022-08-23T09:20:42Z)
- Learning to Solve PDE-constrained Inverse Problems with Graph Networks [51.89325993156204]
In many application domains across science and engineering, we are interested in solving inverse problems with constraints defined by a partial differential equation (PDE).
Here we explore graph neural networks (GNNs) to solve such PDE-constrained inverse problems.
We demonstrate computational speedups of up to 90x using GNNs compared to principled solvers.
arXiv Detail & Related papers (2022-06-01T18:48:01Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
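For concreteness, one learned message-passing update on a solver graph might look like the sketch below. The 1D chain graph, MLP sizes, and sum aggregation are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

msg_mlp = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 16))
upd_mlp = nn.Sequential(nn.Linear(17, 32), nn.ReLU(), nn.Linear(32, 1))

h = torch.rand(100, 1)                          # per-node solution state
edges = torch.stack([torch.arange(99), torch.arange(1, 100)])  # 1D chain
src, dst = edges
m = msg_mlp(torch.cat([h[src], h[dst]], dim=-1))   # per-edge messages
agg = torch.zeros(100, 16).index_add_(0, dst, m)   # sum messages per node
h_new = upd_mlp(torch.cat([h, agg], dim=-1))       # learned node update
```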
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
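The conditioning idea amounts to feeding a problem-class parameter alongside the coordinates, so one network covers the whole class; a minimal sketch, where the parameter value and layer sizes are illustrative:

```python
import torch
import torch.nn as nn

# One network over (x, lam): coordinates plus the problem-class parameter.
cond_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

x = torch.rand(256, 1)                     # spatial coordinates
lam = torch.full_like(x, 0.5)              # illustrative problem parameter
u = cond_net(torch.cat([x, lam], dim=-1))  # one model, an entire class
```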
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks [0.0]
We show that physics-informed neural networks (PINNs) struggle in cases where the target functions to be approximated exhibit high-frequency or multi-scale features.
We construct novel architectures that employ multi-scale random Fourier features and justify how such coordinate embedding layers can lead to robust and accurate PINN models.
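A minimal sketch of a multi-scale random Fourier feature embedding; the bandwidths and feature counts below are illustrative choices:

```python
import math
import torch

def fourier_features(x, B):
    # x: (n, d) inputs; B: (d, m) fixed random frequency matrix.
    proj = 2.0 * math.pi * (x @ B)
    return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

# Multi-scale: concatenate embeddings drawn at several bandwidths sigma.
sigmas = [1.0, 10.0, 50.0]                     # illustrative scales
Bs = [torch.randn(1, 32) * s for s in sigmas]  # fixed (not trained)
x = torch.rand(128, 1)
feats = torch.cat([fourier_features(x, B) for B in Bs], dim=-1)  # (128, 192)
```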
arXiv Detail & Related papers (2020-12-18T04:19:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.