On the eigenvector bias of Fourier feature networks: From regression to
solving multi-scale PDEs with physics-informed neural networks
- URL: http://arxiv.org/abs/2012.10047v1
- Date: Fri, 18 Dec 2020 04:19:30 GMT
- Title: On the eigenvector bias of Fourier feature networks: From regression to
solving multi-scale PDEs with physics-informed neural networks
- Authors: Sifan Wang, Hanwen Wang, Paris Perdikaris
- Abstract summary: We show that physics-informed neural networks (PINNs) struggle in cases where the target functions to be approximated exhibit high-frequency or multi-scale features.
We construct novel architectures that employ spatio-temporal and multi-scale random Fourier features, and justify how such coordinate embedding layers can lead to robust and accurate PINN models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-informed neural networks (PINNs) are demonstrating remarkable promise
in integrating physical models with gappy and noisy observational data, but
they still struggle in cases where the target functions to be approximated
exhibit high-frequency or multi-scale features. In this work we investigate
this limitation through the lens of Neural Tangent Kernel (NTK) theory and
elucidate how PINNs are biased towards learning functions along the dominant
eigen-directions of their limiting NTK. Using this observation, we construct
novel architectures that employ spatio-temporal and multi-scale random Fourier
features, and justify how such coordinate embedding layers can lead to robust
and accurate PINN models. Numerical examples are presented for several
challenging cases where conventional PINN models fail, including wave
propagation and reaction-diffusion dynamics, illustrating how the proposed
methods can be used to effectively tackle both forward and inverse problems
involving partial differential equations with multi-scale behavior. All code and
data accompanying this manuscript will be made publicly available at
\url{https://github.com/PredictiveIntelligenceLab/MultiscalePINNs}.
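To make the coordinate embedding concrete, here is a minimal NumPy sketch of a multi-scale random Fourier feature layer in the style the abstract describes (frequencies sampled from Gaussians at several scales, as in Tancik et al.). The function names, scale values, and feature sizes are illustrative, not taken from the paper's repository:

```python
import numpy as np

def fourier_embed(x, B):
    """Random Fourier features: gamma(x) = [sin(2*pi*x B^T), cos(2*pi*x B^T)]."""
    proj = 2.0 * np.pi * x @ B.T                                  # (N, m)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)  # (N, 2m)

def multiscale_embed(x, sigmas, m=64, seed=0):
    """Concatenate Fourier embeddings whose frequencies B ~ N(0, sigma^2) are
    sampled at several scales, so both slow and fast variations of the target
    function can be represented."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    feats = [fourier_embed(x, rng.normal(0.0, s, size=(m, d))) for s in sigmas]
    return np.concatenate(feats, axis=-1)

# spatio-temporal coordinates (x, t) in [0, 1]^2
coords = np.random.rand(128, 2)
z = multiscale_embed(coords, sigmas=[1.0, 10.0, 50.0])
print(z.shape)  # (128, 384): 3 scales x 2 trig functions x 64 frequencies
```

In the architectures the abstract refers to, embeddings like these replace the raw coordinate inputs of the PINN; the sketch above only builds the embedding itself.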
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to a 48% performance gain on PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Feature Mapping in Physics-Informed Neural Networks (PINNs) [1.9819034119774483]
We study the training dynamics of PINNs with a feature mapping layer via the limiting Conjugate Kernel and Neural Tangent Kernel.
We propose conditionally positive definite radial basis functions as a better alternative.
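A rough sketch of what an RBF feature mapping layer might look like, using a Gaussian kernel for simplicity (the paper itself argues for conditionally positive definite choices, and the centers below are hypothetical):

```python
import numpy as np

def rbf_embed(x, centers, gamma=10.0):
    """Map coordinates to radial basis features, one per center."""
    sq_dists = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)                   # (N, C)

centers = np.random.rand(32, 2)   # fixed centers scattered over the domain
coords = np.random.rand(100, 2)
print(rbf_embed(coords, centers).shape)  # (100, 32)
```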
arXiv Detail & Related papers (2024-02-10T13:51:09Z)
- Multifidelity domain decomposition-based physics-informed neural networks and operators for time-dependent problems [40.46280139210502]
A combination of multifidelity stacking PINNs and domain decomposition-based finite basis PINNs is employed.
It can be observed that the domain decomposition approach clearly improves the PINN and stacking PINN approaches.
It is demonstrated that the FBPINN approach can be extended to multifidelity physics-informed deep operator networks.
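For intuition, a toy one-dimensional sketch of the domain decomposition idea: overlapping subdomains are blended by smooth window functions normalized into a partition of unity, with simple sines standing in for the local networks. The window shape and overlap are assumptions, not the paper's exact construction:

```python
import numpy as np

def window(x, lo, hi, width=0.05):
    """Smooth bump that is ~1 inside [lo, hi] and decays outside."""
    rise = 1.0 / (1.0 + np.exp(-(x - lo) / width))
    fall = 1.0 / (1.0 + np.exp((x - hi) / width))
    return rise * fall

x = np.linspace(0.0, 1.0, 400)
edges = np.linspace(0.0, 1.0, 5)                  # 4 overlapping subdomains
ws = np.stack([window(x, a, b) for a, b in zip(edges[:-1], edges[1:])])
ws /= ws.sum(axis=0)                              # partition of unity

# blend per-subdomain "solutions" (stand-ins for local PINN outputs)
local_solutions = np.stack([np.sin((j + 1) * np.pi * x) for j in range(4)])
u = (ws * local_solutions).sum(axis=0)
print(u.shape)  # (400,)
```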
arXiv Detail & Related papers (2024-01-15T18:32:53Z)
- $\Delta$-PINNs: physics-informed neural networks on complex geometries [2.1485350418225244]
Physics-informed neural networks (PINNs) have demonstrated promise in solving forward and inverse problems involving partial differential equations.
To date, there is no clear way to inform PINNs about the topology of the domain where the problem is being solved.
We propose a novel positional encoding mechanism for PINNs based on the eigenfunctions of the Laplace-Beltrami operator.
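A discrete stand-in for this idea: on a mesh or graph approximation of the domain, eigenvectors of the graph Laplacian play the role of Laplace-Beltrami eigenfunctions and can be fed to the network as topology-aware positional features. The ring graph below is purely illustrative:

```python
import numpy as np

def laplacian_encoding(adj, k=8):
    """First k non-constant eigenvectors of the graph Laplacian L = D - A."""
    L = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    return vecs[:, 1:k + 1]               # drop the constant mode

# toy ring graph with 200 nodes (a crude "circular domain")
n = 200
adj = np.zeros((n, n))
i = np.arange(n)
adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
print(laplacian_encoding(adj).shape)      # (200, 8)
```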
arXiv Detail & Related papers (2022-09-08T18:03:19Z)
- Momentum Diminishes the Effect of Spectral Bias in Physics-Informed Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
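To make the NTK language concrete, here is a small NumPy sketch that computes the empirical NTK Gram matrix of a one-hidden-layer network and eigendecomposes it; the dominant eigenvectors are the smooth, low-frequency directions that gradient descent fits first. Momentum itself is not modeled, and the toy network is an assumption, not the paper's setup:

```python
import numpy as np

def ntk_gram(xs, W, b, a):
    """Empirical NTK of f(x) = a . tanh(W x + b) / sqrt(h): K = J J^T,
    where row i of J stacks df(x_i)/dparam over all parameters (a, W, b)."""
    h = len(W)
    pre = xs[:, None] * W[None, :] + b                 # (N, h) pre-activations
    t, dt = np.tanh(pre), 1.0 / np.cosh(pre) ** 2
    J = np.concatenate([t, a * dt * xs[:, None], a * dt], axis=1) / np.sqrt(h)
    return J @ J.T

rng = np.random.default_rng(0)
h = 512
W, b, a = (rng.normal(size=h) for _ in range(3))
xs = np.linspace(-1.0, 1.0, 50)
vals, vecs = np.linalg.eigh(ntk_gram(xs, W, b, a))
print(vals[-3:])  # dominant eigenvalues; their eigenvectors vary slowly in x
```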
arXiv Detail & Related papers (2022-06-29T19:03:10Z)
- Power and limitations of single-qubit native quantum neural networks [5.526775342940154]
Quantum neural networks (QNNs) have emerged as a leading strategy to establish applications in machine learning, chemistry, and optimization.
We formulate a theoretical framework for the expressive ability of data re-uploading quantum neural networks.
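A bare-bones NumPy sketch of the data re-uploading construction analyzed here: a single qubit alternates data-encoding rotations with trainable rotations, and depth controls expressivity. Gate ordering and parameter values below are illustrative assumptions:

```python
import numpy as np

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def reupload(x, params):
    """Apply L layers of (encode x, then trainable rotation) to |0>."""
    state = np.array([1.0, 0.0], dtype=complex)
    for a, b in params:
        state = rz(b) @ ry(a) @ ry(x) @ state  # re-encode the input each layer
    return abs(state[0]) ** 2                  # Prob(measuring |0>)

params = [(0.3, 1.2), (0.7, -0.4), (1.1, 0.5)]  # 3 trainable layers
print(reupload(0.8, params))
```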
arXiv Detail & Related papers (2022-05-16T17:58:27Z)
- Neural Operator with Regularity Structure for Modeling Dynamics Driven by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS), which incorporates feature vectors from regularity structures for modeling dynamics driven by SPDEs.
We conduct experiments on various SPDEs, including the dynamic $\Phi^4_1$ model and the 2d Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z)
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the $\Pi$-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of higher-frequency components.
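For intuition, a sketch of one commonly cited Π-Net parametrization (the CCP form, as I understand it from the Π-Nets literature; treat the recursion and shapes as illustrative): each layer multiplies a fresh linear view of the input into the running feature Hadamard-wise, so the output is a polynomial of the input whose degree grows with depth.

```python
import numpy as np

def pi_net(z, Us, C):
    """Sketch of a CCP-style Pi-Net: x_1 = U_1 z; x_n = (U_n z) * x_{n-1} + x_{n-1}."""
    x = Us[0] @ z
    for U in Us[1:]:
        x = (U @ z) * x + x      # polynomial degree grows by one per layer
    return C @ x

rng = np.random.default_rng(0)
d, h = 8, 32
Us = [0.1 * rng.normal(size=(h, d)) for _ in range(3)]
C = 0.1 * rng.normal(size=(1, h))
print(pi_net(rng.normal(size=d), Us, C))
```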
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
- Learning in Sinusoidal Spaces with Physics-Informed Neural Networks [22.47355575565345]
A physics-informed neural network (PINN) uses physics-augmented loss functions to ensure its output is consistent with fundamental physics laws.
It turns out to be difficult to train an accurate PINN model for many problems in practice.
arXiv Detail & Related papers (2021-09-20T07:42:41Z)
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.