Bayesian Physics Informed Neural Networks for Linear Inverse problems
- URL: http://arxiv.org/abs/2502.13827v1
- Date: Tue, 18 Feb 2025 14:52:57 GMT
- Title: Bayesian Physics Informed Neural Networks for Linear Inverse problems
- Authors: Ali Mohammad-Djafari
- Abstract summary: Inverse problems arise in science and engineering whenever we need to infer a quantity from indirect observations.
The BPINN concept integrates physical laws with deep learning techniques to enhance speed, accuracy, and efficiency.
We consider both supervised and unsupervised training, obtain expressions for the posterior probability of the unknown variables, and deduce the posterior laws of the NN's parameters.
- Score: 1.30536490219656
- Abstract: Inverse problems arise almost everywhere in science and engineering where we need to infer a quantity from indirect observations. Medical, biomedical, and industrial imaging systems are typical examples. At a very high level, inverse problem methods can be classified as: i) Analytical, ii) Regularization, and iii) Bayesian inference methods. Even though there are direct links between them, the Bayesian inference based methods are the most powerful, as they make it possible to account for prior knowledge and, more generally, for errors and uncertainties. One of the main limitations lies in the computational cost, in particular for high dimensional imaging systems. Neural Networks (NN), and in particular Deep NNs (DNN), have been considered as a way to push this limit further. The Physics Informed Neural Network (PINN) concept integrates physical laws with deep learning techniques to enhance the speed, accuracy, and efficiency of solving such problems. In this work, a new Bayesian framework for the PINN concept (BPINN) is presented and discussed, which includes the deterministic one as a special case when the Maximum A Posteriori (MAP) estimation framework is used. We consider two cases, supervised and unsupervised training, obtain the expressions of the posterior probability of the unknown variables, and deduce the posterior laws of the NN's parameters. We also discuss the challenges of implementing these methods in real applications.
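As a concrete illustration of the MAP link mentioned in the abstract, here is a minimal sketch assuming a linear model g = H f + eps with Gaussian noise and a Gaussian physics-residual prior (the names, weights, and formulation below are illustrative, not the paper's exact derivation):
```python
import torch

def neg_log_posterior(f_net, x, g, H, physics_residual,
                      sigma_eps=0.1, sigma_r=1.0):
    """Negative log-posterior, up to an additive constant.

    f_net            : NN representing the unknown quantity f
    x                : collocation points, shape (N, 1)
    g                : indirect observations, shape (M,)
    H                : linear forward operator, shape (M, N)
    physics_residual : callable returning the physics residual at x
    """
    f = f_net(x).squeeze(-1)                                   # f evaluated on the grid, (N,)
    data_term = ((H @ f - g) ** 2).sum() / (2 * sigma_eps**2)  # -log likelihood (Gaussian noise)
    phys_term = (physics_residual(f_net, x) ** 2).sum() / (2 * sigma_r**2)  # -log prior
    return data_term + phys_term                               # MAP estimate = argmin of this
```
Under these assumptions, minimizing the negative log-posterior with a standard optimizer reproduces the weighted deterministic PINN loss, with the variance ratio playing the role of the regularization parameter.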
Related papers
- Improving PINNs By Algebraic Inclusion of Boundary and Initial Conditions [0.1874930567916036]
"AI for Science" aims to solve fundamental scientific problems using AI techniques.
In this work we explore the possibility of changing the model being trained from being just a neural network to being a non-linear transformation of it.
This results in a loss function with fewer terms than the standard PINN losses, since the constraints are satisfied by construction.
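A minimal sketch of this kind of algebraic inclusion, assuming 1-D Dirichlet conditions u(0)=a, u(1)=b (the particular ansatz below is illustrative, not the paper's exact transformation):
```python
import torch

def u_hat(net, x, a=0.0, b=0.0):
    """Ansatz satisfying u(0)=a and u(1)=b for any network output."""
    boundary = a * (1 - x) + b * x    # linear interpolant of the boundary values
    bump = x * (1 - x)                # vanishes at both endpoints
    return boundary + bump * net(x)   # BCs hold exactly; only the PDE
                                      # residual remains in the loss
```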
arXiv Detail & Related papers (2024-07-30T11:19:48Z)
- Bayesian Neural Networks with Domain Knowledge Priors [52.80929437592308]
We propose a framework for integrating general forms of domain knowledge into a BNN prior.
We show that BNNs using our proposed domain knowledge priors outperform those with standard priors.
arXiv Detail & Related papers (2024-02-20T22:34:53Z)
- Correctness Verification of Neural Networks Approximating Differential Equations [0.0]
Neural Networks (NNs) can approximate the solutions of Partial Differential Equations (PDEs).
NNs can become integral parts of simulation software tools, accelerating the simulation of complex dynamic systems by more than 100 times.
This work addresses the verification of these functions by defining the NN derivative as a finite difference approximation.
For the first time, we tackle the problem of bounding an NN function without a priori knowledge of the output domain.
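A minimal sketch of such a finite-difference check, assuming a scalar 1-D network (the helper below is hypothetical, not the paper's verification procedure):
```python
import torch

def fd_check(net, x, h=1e-3):
    """Compare the autograd derivative of a scalar 1-D network
    against a central finite difference at points x of shape (N, 1)."""
    xg = x.clone().requires_grad_(True)
    (du_autograd,) = torch.autograd.grad(net(xg).sum(), xg)  # exact NN derivative
    with torch.no_grad():
        du_fd = (net(x + h) - net(x - h)) / (2 * h)          # central difference
    return (du_autograd - du_fd).abs().max()                 # worst-case gap
```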
arXiv Detail & Related papers (2024-02-12T12:55:35Z)
- Deep Learning and Bayesian inference for Inverse Problems [8.315530799440554]
We focus on NNs, DL, and more specifically Bayesian DL methods particularly adapted to inverse problems.
We consider two cases: first, the case where the forward operator is known and used as a physics constraint; second, more general data-driven DL methods.
arXiv Detail & Related papers (2023-08-28T04:27:45Z)
- Efficient Bayesian Physics Informed Neural Networks for Inverse Problems via Ensemble Kalman Inversion [0.0]
We present a new efficient inference algorithm for B-PINNs that uses Ensemble Kalman Inversion (EKI) for high-dimensional inference tasks.
We find that our proposed method can achieve inference results with informative uncertainty estimates comparable to Hamiltonian Monte Carlo (HMC)-based B-PINNs with a much reduced computational cost.
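For illustration, a minimal sketch of one generic EKI update step over an ensemble of NN parameter vectors (textbook EKI, not necessarily the exact B-PINN variant used in the paper):
```python
import numpy as np

def eki_step(theta, G, y, gamma=1e-2, rng=np.random.default_rng(0)):
    """One derivative-free EKI update.

    theta : (J, p) ensemble of parameter vectors
    G     : forward map, parameters -> predicted data of shape (m,)
    y     : (m,) observations
    """
    g = np.stack([G(t) for t in theta])            # (J, m) forward evaluations
    dtheta = theta - theta.mean(0)                 # parameter-space deviations
    dg = g - g.mean(0)                             # data-space deviations
    C_tg = dtheta.T @ dg / (len(theta) - 1)        # cross-covariance, (p, m)
    C_gg = dg.T @ dg / (len(theta) - 1)            # data covariance, (m, m)
    K = C_tg @ np.linalg.inv(C_gg + gamma * np.eye(len(y)))       # Kalman gain
    y_pert = y + rng.normal(scale=np.sqrt(gamma), size=g.shape)   # perturbed obs
    return theta + (y_pert - g) @ K.T              # updated ensemble, (J, p)
```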
arXiv Detail & Related papers (2023-03-13T18:15:26Z)
- Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
- Error-Aware B-PINNs: Improving Uncertainty Quantification in Bayesian Physics-Informed Neural Networks [2.569295887779268]
Uncertainty Quantification (UQ) is just beginning to emerge in the context of PINNs.
We propose a framework for UQ in Bayesian PINNs (B-PINNs) that incorporates the discrepancy between the B-PINN solution and the unknown true solution.
We exploit recent results on error bounds for PINNs on linear dynamical systems and demonstrate the predictive uncertainty on a class of linear ODEs.
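A heavily simplified sketch of the idea, assuming the deterministic error bound is available as an input (the interval rule below is illustrative only):
```python
def error_aware_interval(mean, std, err_bound, z=1.96):
    """95% credible interval widened by a deterministic PINN error bound,
    so the interval also covers the B-PINN-vs-true-solution discrepancy."""
    half_width = z * std + err_bound   # posterior spread + worst-case bound
    return mean - half_width, mean + half_width
```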
arXiv Detail & Related papers (2022-12-14T01:15:26Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
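A minimal sketch of the conditioning idea, assuming the problem parameter lam is appended to the spatial input (the architecture below is illustrative, not the paper's exact network):
```python
import torch
import torch.nn as nn

class ConditionalPINN(nn.Module):
    """One network for a whole family of problems indexed by lam."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, lam):
        # condition the solution on the problem parameter lam
        return self.net(torch.cat([x, lam], dim=-1))
```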
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.