Physics-informed neural networks for operator equations with stochastic data
- URL: http://arxiv.org/abs/2211.10344v2
- Date: Fri, 3 May 2024 21:35:02 GMT
- Title: Physics-informed neural networks for operator equations with stochastic data
- Authors: Paul Escapil-Inchauspé, Gonzalo A. Ruz
- Abstract summary: We consider the computation of statistical moments of operator equations with stochastic data.
PINNs -- referred to as TPINNs -- allow one to solve the induced tensor operator equations with minimal changes to existing PINN code.
We propose two types of architectures, referred to as vanilla and multi-output TPINNs, and investigate their benefits and limitations.
- Score: 0.6445605125467572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the computation of statistical moments of operator equations with stochastic data. We remark that applying PINNs -- referred to as TPINNs -- allows one to solve the induced tensor operator equations with minimal changes to existing PINN code, while enabling the handling of non-linear and time-dependent operators. We propose two types of architectures, referred to as vanilla and multi-output TPINNs, and investigate their benefits and limitations. Exhaustive numerical experiments are performed, demonstrating applicability and performance and raising a variety of promising new research avenues.
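As a hedged illustration of the vanilla-TPINN idea from the abstract above: for the 1D Poisson problem -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 and stochastic forcing f, the two-point correlation Cu(x, y) of the solution satisfies the tensorized equation ∂²x ∂²y Cu(x, y) = Cf(x, y), which an ordinary PINN can solve over the doubled domain (x, y). The choice of PDE, the correlation Cf, and all names below are illustrative assumptions, not the paper's exact setup.

```python
import torch

torch.manual_seed(0)

# Plain MLP over the doubled domain (x, y); this is the only structural
# change relative to a standard PINN for the underlying 1D problem.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def d(out, var):
    # derivative helper; create_graph=True so higher derivatives work
    return torch.autograd.grad(out, var, grad_outputs=torch.ones_like(out),
                               create_graph=True)[0]

def Cf(x, y):
    # assumed correlation of the stochastic forcing (illustrative choice)
    return torch.exp(-(x - y) ** 2)

def residual(x, y):
    Cu = net(torch.cat([x, y], dim=1))
    Cu_xx = d(d(Cu, x), x)            # second derivative in x
    Cu_xxyy = d(d(Cu_xx, y), y)       # tensorized operator: then in y
    return Cu_xxyy - Cf(x, y)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
zero, one = torch.zeros(64, 1), torch.ones(64, 1)
for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)
    y = torch.rand(256, 1, requires_grad=True)
    s = torch.rand(64, 1)             # boundary collocation points
    bc = sum((net(torch.cat(p, dim=1)) ** 2).mean()
             for p in [(zero, s), (one, s), (s, zero), (s, one)])
    loss = (residual(x, y) ** 2).mean() + bc
    opt.zero_grad(); loss.backward(); opt.step()
```

Only the input dimension and the residual change relative to a standard PINN, which reflects the abstract's point about "minimal changes to existing PINN code".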
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to a 48% performance gain on the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
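The exact ProdLayer design is given in the DimOL paper; as a loose, hedged stand-in for the idea of injecting multiplicative feature interactions (natural under dimensional analysis, where physical quantities combine as products) into a solver backbone, one might write:

```python
import torch

class ProductLayer(torch.nn.Module):
    # Hypothetical product-augmented layer, only loosely inspired by the
    # ProdLayer idea; not the paper's actual architecture.
    def __init__(self, channels: int):
        super().__init__()
        self.lin = torch.nn.Linear(channels, channels)
        self.a = torch.nn.Linear(channels, channels)
        self.b = torch.nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # linear path plus an element-wise product of two linear maps,
        # letting the layer represent products of input quantities directly
        return self.lin(x) + self.a(x) * self.b(x)

layer = ProductLayer(32)
out = layer(torch.randn(8, 32))   # (batch, channels)
```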
- DeltaPhi: Learning Physical Trajectory Residual for PDE Solving [54.13671100638092]
We propose and formulate Physical Trajectory Residual Learning (DeltaPhi).
We learn the surrogate model for the residual operator mapping based on existing neural operator networks.
We conclude that, compared to direct learning, physical residual learning is preferred for PDE solving.
arXiv Detail & Related papers (2024-06-14T07:45:07Z)
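A hedged sketch of residual learning in this spirit: instead of training a neural operator G to map an input a directly to the solution u, pick a reference pair (a_ref, u_ref) and fit the residual, reconstructing u = u_ref + G(a - a_ref). The backbone, toy data, and names are assumptions.

```python
import torch

class SimpleOperator(torch.nn.Module):
    # Stand-in for any neural operator backbone (e.g., an FNO).
    def __init__(self, n: int):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n, 128), torch.nn.GELU(), torch.nn.Linear(128, n))

    def forward(self, a):
        return self.net(a)

n = 64
G = SimpleOperator(n)
opt = torch.optim.Adam(G.parameters(), lr=1e-3)

# toy data: inputs a and solutions u on an n-point grid
a = torch.randn(512, n)
u = torch.cumsum(a, dim=1) / n            # toy "solution operator": integration
a_ref, u_ref = a[0], u[0]                 # one reference trajectory

for step in range(500):
    pred = u_ref + G(a - a_ref)           # reconstruct from the residual
    loss = ((pred - u) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```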
We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators.
Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming.
We showcase the efficacy of our approach through applications to different types of partial differential equations.
arXiv Detail & Related papers (2024-06-07T16:43:54Z)
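One generic way to realize the linearization idea above is a last-layer Laplace approximation: treat a trained operator as linear in its final-layer weights, place a Gaussian posterior over those weights, and read off a Gaussian predictive distribution over output functions. This is a standard construction sketched under assumptions, not the paper's exact one.

```python
import torch

torch.manual_seed(0)
n, d = 64, 32                                # grid size, feature width

feat = torch.nn.Sequential(                  # frozen trained trunk (stand-in)
    torch.nn.Linear(n, 128), torch.nn.Tanh(), torch.nn.Linear(128, d))
W = torch.randn(d, n) * 0.1                  # trained last-layer weights (stand-in)

def predict_gp(a, X_train, noise=1e-2, prior=1.0):
    with torch.no_grad():
        phi = feat(a)                        # test-input features, (1, d)
        Phi = feat(X_train)                  # training-input features, (N, d)
        # Bayesian-linear (Laplace) posterior precision over last-layer weights
        H = Phi.T @ Phi / noise + torch.eye(d) / prior
        cov_w = torch.linalg.inv(H)
        mean = phi @ W                       # predictive mean function on the grid
        var = phi @ cov_w @ phi.T            # weight uncertainty (shared across
                                             # grid points in this simplification)
    return mean, var

X_train = torch.randn(100, n)
mean, var = predict_gp(torch.randn(1, n), X_train)
```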
- Parametric Learning of Time-Advancement Operators for Unstable Flame Evolution [0.0]
This study investigates the application of machine learning to learn time-advancement operators for parametric partial differential equations (PDEs).
Our focus is on extending existing operator learning methods to handle additional inputs representing PDE parameters.
The goal is to create a unified learning approach that accurately predicts short-term solutions and provides robust long-term statistics.
arXiv Detail & Related papers (2024-02-14T18:12:42Z)
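A minimal sketch of the "additional inputs for PDE parameters" idea above: concatenate the discretized field u(t) with the parameter values and predict u(t + dt), rolling the network out for long-term statistics. The architecture and names are assumptions, not the paper's design.

```python
import torch

class ParametricStepper(torch.nn.Module):
    def __init__(self, n: int, n_params: int):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n + n_params, 256), torch.nn.GELU(),
            torch.nn.Linear(256, n))

    def forward(self, u, params):
        # params: (batch, n_params) PDE parameters appended to the field
        return u + self.net(torch.cat([u, params], dim=1))   # residual step

stepper = ParametricStepper(n=128, n_params=1)
u0 = torch.randn(4, 128)
p = torch.rand(4, 1)
u1 = stepper(u0, p)                 # one predicted time step
u2 = stepper(u1, p)                 # roll out for long-term statistics
```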
- Physics informed WNO [0.0]
We propose a physics-informed Wavelet Neural Operator (WNO) for learning the solution operators of families of parametric partial differential equations (PDEs) without labeled training data.
The efficacy of the framework is validated and illustrated with four nonlinear neural systems relevant to various fields of engineering and science.
arXiv Detail & Related papers (2023-02-12T14:31:50Z)
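The label-free training described above can be sketched generically: instead of supervising an operator net G on (f, u) solution pairs, penalize the PDE residual of its output. The example below uses -u'' = f on (0, 1) with finite differences and a plain MLP, which does not reproduce the paper's wavelet neural operator; it only illustrates the physics-informed loss.

```python
import torch

n = 128
h = 1.0 / (n - 1)
G = torch.nn.Sequential(
    torch.nn.Linear(n, 256), torch.nn.GELU(), torch.nn.Linear(256, n))
opt = torch.optim.Adam(G.parameters(), lr=1e-3)

for step in range(1000):
    f = torch.randn(32, n)                      # random forcings, no labels
    u = G(f)
    u_xx = (u[:, :-2] - 2 * u[:, 1:-1] + u[:, 2:]) / h**2
    residual = (-u_xx - f[:, 1:-1]) ** 2        # PDE residual in the interior
    bc = u[:, 0] ** 2 + u[:, -1] ** 2           # boundary conditions u(0)=u(1)=0
    loss = residual.mean() + bc.mean()
    opt.zero_grad(); loss.backward(); opt.step()
```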
- Physics-guided Data Augmentation for Learning the Solution Operator of Linear Differential Equations [2.1850269949775663]
We propose a physics-guided data augmentation (PGDA) method to improve the accuracy and generalization of neural operator models.
We demonstrate the advantage of PGDA on a variety of linear differential equations, showing that PGDA can improve the sample complexity and is robust to distributional shift.
arXiv Detail & Related papers (2022-12-08T06:29:15Z)
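For a linear operator L, one natural physics-guided augmentation is superposition: if L(u1) = f1 and L(u2) = f2, then L(a·u1 + b·u2) = a·f1 + b·f2, so new training pairs come for free. The paper's exact augmentations are its own; the sketch below shows superposition as one plausible instance, on toy data.

```python
import torch

def augment_by_superposition(f, u, n_new: int):
    # f, u: (N, n) input/solution pairs of a linear differential equation;
    # random linear combinations remain valid pairs by linearity of L
    i = torch.randint(0, f.shape[0], (n_new,))
    j = torch.randint(0, f.shape[0], (n_new,))
    a = torch.randn(n_new, 1)
    b = torch.randn(n_new, 1)
    return a * f[i] + b * f[j], a * u[i] + b * u[j]

f = torch.randn(100, 64)
u = torch.cumsum(f, dim=1) / 64           # toy linear solution operator
f_aug, u_aug = augment_by_superposition(f, u, n_new=400)
```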
- Pseudo-Differential Neural Operator: Generalized Fourier Neural Operator for Learning Solution Operators of Partial Differential Equations [14.43135909469058]
We propose a novel pseudo-differential integral operator (PDIO) to analyze and generalize the Fourier integral operator in FNO.
We experimentally validate the effectiveness of the proposed model by utilizing Darcy flow and the Navier-Stokes equation.
arXiv Detail & Related papers (2022-01-28T07:22:32Z)
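A hedged sketch of the pseudo-differential flavor: a spectral layer whose multiplier is produced by a small network evaluated at each frequency (a learnable smooth symbol), rather than a fixed table of learned modes as in a vanilla FNO layer. The paper's exact parameterization differs; this is only an illustration.

```python
import torch

class SpectralSymbolLayer(torch.nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        # maps a frequency xi to (real, imag) parts of a multiplier a(xi)
        self.symbol = torch.nn.Sequential(
            torch.nn.Linear(1, hidden), torch.nn.GELU(),
            torch.nn.Linear(hidden, 2))

    def forward(self, u):                       # u: (batch, n) real signal
        n = u.shape[-1]
        U = torch.fft.rfft(u, dim=-1)           # (batch, n//2 + 1) complex
        xi = torch.arange(U.shape[-1], dtype=u.dtype).unsqueeze(1)
        s = self.symbol(xi)                     # (n//2 + 1, 2)
        mult = torch.complex(s[:, 0], s[:, 1])  # complex symbol a(xi)
        return torch.fft.irfft(U * mult, n=n, dim=-1)

layer = SpectralSymbolLayer()
y = layer(torch.randn(8, 128))
```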
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite-dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations.
arXiv Detail & Related papers (2021-08-19T03:56:49Z)
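The core building block of this framework is the kernel integral operator (Ku)(x) = ∫ k(x, y) u(y) dy with a learned kernel; the sketch below discretizes it by quadrature on a grid, with k parameterized by a small network. Practical implementations (e.g., FNO) compute such layers more efficiently in Fourier space; the names here are illustrative.

```python
import torch

class KernelIntegralLayer(torch.nn.Module):
    def __init__(self, n: int, hidden: int = 64):
        super().__init__()
        self.kernel = torch.nn.Sequential(
            torch.nn.Linear(2, hidden), torch.nn.GELU(),
            torch.nn.Linear(hidden, 1))
        x = torch.linspace(0, 1, n)
        xx, yy = torch.meshgrid(x, x, indexing="ij")
        self.register_buffer("pairs", torch.stack([xx, yy], dim=-1))  # (n, n, 2)
        self.h = 1.0 / n                          # quadrature weight

    def forward(self, u):                         # u: (batch, n)
        K = self.kernel(self.pairs).squeeze(-1)   # (n, n) kernel matrix k(x, y)
        return self.h * u @ K.T                   # quadrature: sum_y k(x, y) u(y)

layer = KernelIntegralLayer(n=64)
v = layer(torch.randn(8, 64))
```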
- Incorporating NODE with Pre-trained Neural Differential Operator for Learning Dynamics [73.77459272878025]
We propose to enhance the supervised signal in learning dynamics by pre-training a neural differential operator (NDO).
The NDO is pre-trained on a class of symbolic functions, learning the mapping from trajectory samples of these functions to their derivatives.
We provide a theoretical guarantee that the output of the NDO can closely approximate the ground-truth derivatives when the complexity of the library is properly tuned.
arXiv Detail & Related papers (2021-06-08T08:04:47Z)
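A hedged sketch of NDO-style pre-training: synthesize trajectories from a symbolic library with known derivatives (random sinusoids below, purely as an illustrative library), then train a network to map sampled trajectory values to derivative values at the same time points.

```python
import torch

torch.manual_seed(0)
n = 100
t = torch.linspace(0, 1, n)

ndo = torch.nn.Sequential(
    torch.nn.Linear(n, 256), torch.nn.GELU(), torch.nn.Linear(256, n))
opt = torch.optim.Adam(ndo.parameters(), lr=1e-3)

for step in range(1000):
    a = torch.rand(64, 1) * 2                   # random amplitudes
    w = torch.rand(64, 1) * 10                  # random frequencies
    traj = a * torch.sin(w * t)                 # symbolic functions, sampled
    deriv = a * w * torch.cos(w * t)            # their exact derivatives
    loss = ((ndo(traj) - deriv) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# the pre-trained ndo can then supply estimated derivatives of observed
# trajectories as an extra supervised signal when fitting a neural ODE
```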
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
We provide, for the first time, a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
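A hedged sketch of such a min-max game for a conditional-moment restriction E[Y - f(X) | Z] = 0: a critic g(Z) is maximized against the model f(X), with both trained by alternating gradient steps. The quadratic penalty on the critic is one standard way to write these games; the paper's exact formulation and guarantees are its own, and the toy data below is an assumption.

```python
import torch

torch.manual_seed(0)
mlp = lambda: torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
f, g = mlp(), mlp()
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

# toy data satisfying E[Y - f*(X) | Z] = 0 at f*(x) = 2x
Z = torch.randn(2000, 1)
X = Z + 0.3 * torch.randn(2000, 1)
Y = 2.0 * X + 0.3 * torch.randn(2000, 1)

for step in range(2000):
    obj = (g(Z) * (Y - f(X))).mean() - 0.5 * (g(Z) ** 2).mean()
    opt_g.zero_grad(); (-obj).backward(); opt_g.step()     # critic ascent
    obj = (g(Z) * (Y - f(X))).mean() - 0.5 * (g(Z) ** 2).mean()
    opt_f.zero_grad(); obj.backward(); opt_f.step()        # model descent
```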
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.