Uncertainty quantification for noisy inputs-outputs in physics-informed
neural networks and neural operators
- URL: http://arxiv.org/abs/2311.11262v1
- Date: Sun, 19 Nov 2023 08:18:26 GMT
- Title: Uncertainty quantification for noisy inputs-outputs in physics-informed
neural networks and neural operators
- Authors: Zongren Zou, Xuhui Meng, George Em Karniadakis
- Abstract summary: We introduce a Bayesian approach to quantify uncertainty arising from noisy inputs and outputs in physics-informed neural networks (PINNs) and neural operators (NOs).
PINNs incorporate physics by including physics-informed terms via automatic differentiation, either in the loss function or the likelihood, and often take as input the spatial-temporal coordinate.
We show that this approach can be seamlessly integrated into PINNs and NOs when they are employed to encode the physical information.
- Score: 2.07180164747172
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Uncertainty quantification (UQ) in scientific machine learning (SciML)
becomes increasingly critical as neural networks (NNs) are being widely adopted
in addressing complex problems across various scientific disciplines.
Representative SciML models are physics-informed neural networks (PINNs) and
neural operators (NOs). While UQ in SciML has been increasingly investigated in
recent years, very few works have focused on addressing the uncertainty caused
by the noisy inputs, such as spatial-temporal coordinates in PINNs and input
functions in NOs. The presence of noise in the inputs of the models can pose
significantly more challenges compared to noise in the outputs of the models,
primarily due to the inherent nonlinearity of most SciML algorithms. As a
result, UQ for noisy inputs becomes a crucial factor for reliable and
trustworthy deployment of these models in applications involving physical
knowledge. To this end, we introduce a Bayesian approach to quantify
uncertainty arising from noisy inputs and outputs in PINNs and NOs. We show
that this approach can be seamlessly integrated into PINNs and NOs when they
are employed to encode the physical information. PINNs incorporate physics by
including physics-informed terms via automatic differentiation, either in the
loss function or the likelihood, and often take as input the spatial-temporal
coordinate. Therefore, the present method equips PINNs with the capability to
address problems where the observed coordinate is subject to noise. On the
other hand, pretrained NOs are also commonly employed as equation-free
surrogates in solving differential equations and Bayesian inverse problems, in
which they take functions as inputs. The proposed approach enables them to
handle noisy measurements for both input and output functions with UQ.
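The mechanism the abstract describes, physics entering the loss through automatic differentiation while noisy coordinates are handled probabilistically, can be illustrated with a toy sketch. This is not the authors' Bayesian method; it is a simplified MAP-style analogue in JAX, where the clean coordinates are treated as latent variables with a Gaussian prior centered at the noisy measurements. The 1D Poisson problem, network sizes, and noise scales are all illustrative assumptions:

```python
import jax
import jax.numpy as jnp

# Toy 1D Poisson problem u''(x) = f(x) with f(x) = -sin(x),
# so u(x) = sin(x) is an exact solution (up to linear terms).
def f(x):
    return -jnp.sin(x)

def init_params(key, sizes=(1, 16, 16, 1)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, (m, n) in zip(keys, zip(sizes[:-1], sizes[1:]))]

def u(params, x):                       # scalar surrogate u_theta(x)
    h = jnp.atleast_1d(x)
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def pde_residual(params, x):            # u''(x) - f(x) via nested autodiff
    u_xx = jax.grad(jax.grad(lambda t: u(params, t)))(x)
    return u_xx - f(x)

def loss(params, z, x_noisy, y_obs, x_col, sigma_x=0.05, sigma_y=0.05, lam=1.0):
    u_vals = jax.vmap(lambda t: u(params, t))(z)
    data = jnp.mean((u_vals - y_obs) ** 2) / sigma_y**2    # fit at latent coordinates
    phys = lam * jnp.mean(jax.vmap(lambda t: pde_residual(params, t))(x_col) ** 2)
    prior = jnp.mean((z - x_noisy) ** 2) / sigma_x**2      # latent coords stay near data
    return data + phys + prior

key = jax.random.PRNGKey(0)
params = init_params(key)
x_noisy = jnp.linspace(0.0, jnp.pi, 8)   # stand-in for noise-corrupted coordinates
y_obs = jnp.sin(x_noisy)
z = x_noisy                               # latent clean coordinates, optimized jointly
x_col = jnp.linspace(0.0, jnp.pi, 32)     # collocation points for the physics term
val, (g_params, g_z) = jax.value_and_grad(loss, argnums=(0, 1))(params, z, x_noisy, y_obs, x_col)
```

Optimizing over both `params` and the latent `z` yields a point estimate; the paper's Bayesian treatment would instead place posteriors over these quantities to obtain calibrated uncertainty.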
Related papers
- Response Estimation and System Identification of Dynamical Systems via Physics-Informed Neural Networks [0.0]
This paper explores the use of Physics-Informed Neural Networks (PINNs) for the identification and estimation of dynamical systems.
PINNs offer a unique advantage by embedding known physical laws directly into the neural network's loss function, which allows complex phenomena to be incorporated in a straightforward way.
The results demonstrate that PINNs deliver an efficient tool across all aforementioned tasks, even in presence of modelling errors.
arXiv Detail & Related papers (2024-10-02T08:58:30Z) - Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z) - Equation identification for fluid flows via physics-informed neural networks [46.29203572184694]
We present a new benchmark problem for inverse PINNs based on a parametric sweep of the 2D Burgers' equation with rotational flow.
We show that a novel strategy that alternates between first- and second-order optimization proves superior to typical first-order strategies for estimating parameters.
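The alternating first-/second-order strategy can be sketched on a deliberately simple parameter-estimation toy. The problem, learning rate, and cycle counts below are illustrative assumptions, and plain gradient descent stands in for Adam; this is not the paper's Burgers' benchmark:

```python
import jax
import jax.numpy as jnp
from jax.scipy.optimize import minimize

# Toy inverse problem: recover (a, b) from samples of y = 1.5 * sin(2x).
x = jnp.linspace(0.0, 2.0 * jnp.pi, 50)
y = 1.5 * jnp.sin(2.0 * x)

def objective(theta):
    a, b = theta
    return jnp.mean((a * jnp.sin(b * x) - y) ** 2)

grad_fn = jax.jit(jax.grad(objective))

theta = jnp.array([1.0, 1.8])               # initial guess within the correct basin
for cycle in range(3):
    # First-order phase: cheap gradient steps to approach the optimum.
    for _ in range(200):
        theta = theta - 0.05 * grad_fn(theta)
    # Second-order phase: BFGS refinement from the current iterate.
    theta = minimize(objective, theta, method="BFGS").x
```

The intuition behind alternating is that the first-order phase is cheap and robust far from the optimum, while the curvature information used by BFGS sharpens the estimate once the iterate has entered the right basin.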
arXiv Detail & Related papers (2024-08-30T13:17:57Z) - Correcting model misspecification in physics-informed neural networks
(PINNs) [2.07180164747172]
We present a general approach to correct the misspecified physical models in PINNs for discovering governing equations.
We employ other deep neural networks (DNNs) to model the discrepancy between the imperfect models and the observational data.
We envision that the proposed approach will extend the applications of PINNs for discovering governing equations in problems where the physico-chemical or biological processes are not well understood.
arXiv Detail & Related papers (2023-10-16T19:25:52Z) - Auto-PINN: Understanding and Optimizing Physics-Informed Neural
Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z) - Physics-informed neural networks for non-Newtonian fluid
thermo-mechanical problems: an application to rubber calendering process [0.0]
We present an application of PINNs to a non-Newtonian fluid thermo-mechanical problem which is often considered in the rubber calendering process.
We study the impact of the placement of the sensors and the distribution of unsupervised points on the performance of PINNs.
We also investigate the capability of PINNs to identify unknown physical parameters from the measurements captured by sensors.
arXiv Detail & Related papers (2022-01-31T17:54:44Z) - Characterizing possible failure modes in physics-informed neural
networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z) - Interval and fuzzy physics-informed neural networks for uncertain fields [0.0]
Partial differential equations involving fuzzy and interval fields are traditionally solved using the finite element method.
In this work we utilize physics-informed neural networks (PINNs) to solve interval and fuzzy partial differential equations.
The resulting network structures, termed interval physics-informed neural networks (iPINNs) and fuzzy physics-informed neural networks (fPINNs), show promising results.
arXiv Detail & Related papers (2021-06-18T21:06:42Z) - Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z) - On the eigenvector bias of Fourier feature networks: From regression to
solving multi-scale PDEs with physics-informed neural networks [0.0]
We show that physics-informed neural networks (PINNs) struggle in cases where the target functions to be approximated exhibit high-frequency or multi-scale features.
We construct novel architectures that employ multi-scale random Fourier features and justify how such coordinate embedding layers can lead to robust and accurate PINN models.
arXiv Detail & Related papers (2020-12-18T04:19:30Z) - On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.