Bayesian Deep Learning for Partial Differential Equation Parameter
Discovery with Sparse and Noisy Data
- URL: http://arxiv.org/abs/2108.04085v1
- Date: Thu, 5 Aug 2021 19:43:15 GMT
- Title: Bayesian Deep Learning for Partial Differential Equation Parameter
Discovery with Sparse and Noisy Data
- Authors: Christophe Bonneville, Christopher J. Earls
- Abstract summary: We propose to use Bayesian neural networks (BNN) in order to recover the full system states from measurement data.
We show that it is possible to accurately capture physics of varying complexity without overfitting.
We demonstrate our approach on a handful of examples applied to physics and non-linear dynamics.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scientific machine learning has been successfully applied to inverse problems
and PDE discovery in computational physics. One caveat of current methods,
however, is the need for large amounts of (clean) data in order to recover full
system responses or underlying physical models. Bayesian methods may be
particularly promising to overcome these challenges as they are naturally less
sensitive to sparse and noisy data. In this paper, we propose to use Bayesian
neural networks (BNN) in order to: 1) Recover the full system states from
measurement data (e.g. temperature, velocity field, etc.). We use Hamiltonian
Monte Carlo to sample the posterior distribution of a deep and dense BNN, and
show that it is possible to accurately capture physics of varying complexity
without overfitting. 2) Recover the parameters in the underlying partial
differential equation (PDE) governing the physical system. Using the trained
BNN as a surrogate of the system response, we generate datasets of derivatives
potentially comprising the latent PDE of the observed system and perform a
Bayesian linear regression (BLR) between the successive derivatives in space
and time to recover the original PDE parameters. We take advantage of the
confidence intervals on the BNN outputs and introduce the spatial derivative
variance into the BLR likelihood to discard the influence of highly uncertain
surrogate data points, which allows for more accurate parameter discovery. We
demonstrate our approach on a handful of examples applied to physics and
non-linear dynamics.
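The second stage of the abstract's pipeline can be illustrated with a minimal sketch: a Bayesian linear regression over candidate derivative terms whose likelihood is down-weighted by per-point surrogate variance. The derivative columns and variances below are synthetic stand-ins for quantities that would come from differentiating a trained BNN surrogate; the prior scale `alpha` and the candidate library are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Candidate library columns (u*u_x, u_xx), standing in for derivatives
# obtained by differentiating the BNN surrogate of the system response.
u_ux = rng.normal(size=n)
u_xx = rng.normal(size=n)
Theta = np.column_stack([u_ux, u_xx])

# Synthetic latent PDE: u_t = -1.0 * u*u_x + 0.1 * u_xx
theta_true = np.array([-1.0, 0.1])

# Heteroscedastic surrogate uncertainty: BNN predictive variance per point.
var = 0.01 + 0.5 * rng.random(n) ** 4
u_t = Theta @ theta_true + rng.normal(size=n) * np.sqrt(var)

def blr_posterior(Theta, y, var, alpha=1e-6):
    """Gaussian posterior over PDE coefficients.

    Folds the per-point surrogate variance `var` into the likelihood
    precision (1/var), so highly uncertain surrogate points contribute
    little to the recovered parameters.
    """
    W = 1.0 / var  # per-point precision
    precision = Theta.T @ (W[:, None] * Theta) + alpha * np.eye(Theta.shape[1])
    cov = np.linalg.inv(precision)
    mean = cov @ (Theta.T @ (W * y))
    return mean, cov

mean, cov = blr_posterior(Theta, u_t, var)
print(mean)                   # posterior mean, approximately [-1.0, 0.1]
print(np.sqrt(np.diag(cov)))  # posterior std per coefficient
```

The posterior covariance also yields the confidence intervals on the recovered coefficients that the abstract alludes to; with exact-variance weights as here, the recovered mean lands close to the true coefficients.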
Related papers
- Physics-constrained robust learning of open-form partial differential equations from limited and noisy data [1.50528618730365]
This study proposes a framework to robustly uncover open-form partial differential equations (PDEs) from limited and noisy data.
A neural network-based predictive model fits the system response and serves as the reward evaluator for the generated PDEs.
Numerical experiments demonstrate our framework's capability to uncover governing equations from nonlinear dynamic systems with limited and highly noisy data.
arXiv Detail & Related papers (2023-09-14T12:34:42Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z)
- Robust Regression with Highly Corrupted Data via Physics Informed Neural Networks [4.642273921499256]
Physics-informed neural networks (PINNs) have been proposed to solve two main classes of problems.
We show the generalizability, accuracy and efficiency of the proposed algorithms for recovering governing equations from noisy and corrupted measurement data.
arXiv Detail & Related papers (2022-10-19T15:21:05Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method that can partially alleviate this problem by improving neural PDE solver sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
arXiv Detail & Related papers (2022-02-15T18:43:17Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Learning Functional Priors and Posteriors from Data and Physics [3.537267195871802]
We develop a new framework based on deep neural networks to be able to extrapolate in space-time using historical data.
We employ the physics-informed Generative Adversarial Networks (PI-GAN) to learn a functional prior.
At the second stage, we employ the Hamiltonian Monte Carlo (HMC) method to estimate the posterior in the latent space of PI-GANs.
arXiv Detail & Related papers (2021-06-08T03:03:24Z)
- Physics-aware deep neural networks for surrogate modeling of turbulent natural convection [0.0]
We investigate the use of PINN surrogate modeling for turbulent Rayleigh-Bénard convection flows.
We show how it comes into play as a regularization close to the training boundaries, which are zones of poor accuracy for standard PINNs.
The predictive accuracy of the surrogate over the entire half-billion DNS coordinates yields errors between 0.3% and 4% in the relative L2 norm for all flow variables.
arXiv Detail & Related papers (2021-03-05T09:48:57Z)
- Multi-fidelity Bayesian Neural Networks: Algorithms and Applications [0.0]
We propose a new class of Bayesian neural networks (BNNs) that can be trained using noisy data of variable fidelity.
We apply them to learn function approximations as well as to solve inverse problems based on partial differential equations (PDEs).
arXiv Detail & Related papers (2020-12-19T02:03:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.