Novel Deep neural networks for solving Bayesian statistical inverse problems
- URL: http://arxiv.org/abs/2102.03974v1
- Date: Mon, 8 Feb 2021 02:54:46 GMT
- Title: Novel Deep neural networks for solving Bayesian statistical inverse problems
- Authors: Harbir Antil, Howard C Elman, Akwum Onwunta, Deepanshu Verma
- Abstract summary: We introduce a fractional deep neural network based approach for the forward solves within an MCMC routine.
We discuss some approximation error estimates and illustrate the efficiency of our approach via several numerical examples.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the simulation of Bayesian statistical inverse problems governed
by large-scale linear and nonlinear partial differential equations (PDEs).
Markov chain Monte Carlo (MCMC) algorithms are standard techniques to solve
such problems. However, MCMC techniques are computationally expensive, as they
require several thousand forward PDE solves. The goal of this paper is to
introduce a fractional deep neural network based approach for the forward
solves within an MCMC routine. Moreover, we discuss some approximation error
estimates and illustrate the efficiency of our approach via several numerical
examples.
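The core idea, replacing an expensive PDE forward solve inside a Metropolis-Hastings loop with a cheap surrogate, can be illustrated in a few lines. This is only a minimal sketch: the `forward` function below is a toy stand-in, where the paper would use a trained fractional deep neural network emulating a large-scale PDE solve.

```python
import numpy as np

# Toy "forward model": in the paper this is a large-scale PDE solve,
# emulated by a trained network; here a cheap analytic stand-in.
def forward(theta):
    return np.sin(theta) + 0.1 * theta**2

def log_posterior(theta, y_obs, noise_sd=0.1, prior_sd=2.0):
    # Gaussian likelihood around the (surrogate) forward solve
    # plus a Gaussian prior on the parameter.
    resid = y_obs - forward(theta)
    return -0.5 * (resid / noise_sd) ** 2 - 0.5 * (theta / prior_sd) ** 2

def metropolis(y_obs, n_steps=5000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    theta = 0.0
    lp = log_posterior(theta, y_obs)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal()  # random-walk proposal
        lp_prop = log_posterior(prop, y_obs)
        if np.log(rng.uniform()) < lp_prop - lp:     # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)

chain = metropolis(y_obs=forward(1.0))
```

Every iteration calls `forward`, which is exactly why a fast surrogate pays off: the thousands of forward solves the abstract mentions all happen inside this loop.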
Related papers
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems, including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Bayesian polynomial neural networks and polynomial neural ordinary differential equations [4.550705124365277]
Symbolic regression with neural networks and neural ordinary differential equations (ODEs) are powerful approaches for equation recovery in many science and engineering problems.
These methods provide point estimates for the model parameters and are currently unable to accommodate noisy data.
We address this challenge by developing and validating the following inference methods: the Laplace approximation, Markov Chain Monte Carlo sampling methods, and Bayesian variational inference.
arXiv Detail & Related papers (2023-08-17T05:42:29Z)
- Large-scale Bayesian Structure Learning for Gaussian Graphical Models using Marginal Pseudo-likelihood [0.26249027950824516]
We introduce two novel Markov chain Monte Carlo (MCMC) search algorithms with a significantly lower computational cost than leading Bayesian approaches.
These algorithms can deliver reliable results in mere minutes on standard computers, even for large-scale problems with one thousand variables.
We also illustrate the practical utility of our methods on medium and large-scale applications from human and mouse gene expression studies.
arXiv Detail & Related papers (2023-06-30T20:37:40Z)
- Bayesian neural networks via MCMC: a Python-based tutorial [0.196629787330046]
Variational inference and Markov chain Monte Carlo sampling methods are used to implement Bayesian inference.
This tutorial provides code in Python with data and instructions that enable their use and extension.
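The variational side of this tutorial's toolbox can be sketched on a conjugate toy problem where the exact posterior is known. This is not the tutorial's code, just a minimal reparameterization-trick sketch: a Gaussian `q(theta) = N(m, s^2)` is fitted by stochastic gradient ascent on the ELBO for one observation `y` under a Gaussian likelihood and prior, so the exact posterior here is N(1.6, 0.8).

```python
import numpy as np

rng = np.random.default_rng(1)
y, sigma, tau = 2.0, 1.0, 2.0   # one observation; likelihood and prior scales

def dlogp(theta):
    # d/dtheta of log N(y | theta, sigma^2) + log N(theta | 0, tau^2)
    return (y - theta) / sigma**2 - theta / tau**2

m, log_s = 0.0, 0.0             # variational mean and log std
lr = 0.01
for _ in range(5000):
    s = np.exp(log_s)
    eps = rng.standard_normal(64)        # reparameterization: theta = m + s*eps
    g = dlogp(m + s * eps)
    grad_m = g.mean()
    # entropy of N(m, s^2) contributes +1 to the log_s gradient
    grad_log_s = (g * eps).mean() * s + 1.0
    m += lr * grad_m
    log_s += lr * grad_log_s
```

After training, `m` should sit near the exact posterior mean 1.6 and `exp(log_s)` near the exact posterior standard deviation sqrt(0.8).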
arXiv Detail & Related papers (2023-04-02T02:19:15Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- A Neural Network Approach for Homogenization of Multiscale Problems [1.6244541005112747]
We propose a neural network-based approach to the homogenization of multiscale problems.
The proposed method incorporates Brownian walkers to find the macroscopic description of a multiscale PDE solution.
We validate the efficiency and robustness of the proposed method through a suite of linear and nonlinear multiscale problems.
arXiv Detail & Related papers (2022-06-04T17:50:00Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
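The goal of wrapping an already-trained model with post-hoc intervals can be illustrated with the simplest such scheme, split-conformal prediction. This is not the paper's matrix-sketching algorithm, only a hedged sketch of the same "given trained network, produce intervals" interface; the toy `model` stands in for a trained network.

```python
import numpy as np

def conformal_interval(model, X_cal, y_cal, x_test, alpha=0.1):
    # Split-conformal: held-out calibration residuals give a quantile q,
    # and [pred - q, pred + q] covers with probability >= 1 - alpha
    # under exchangeability.
    scores = np.abs(y_cal - model(X_cal))
    k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
    q = np.sort(scores)[min(k, len(scores)) - 1]
    pred = model(x_test)
    return pred - q, pred + q

rng = np.random.default_rng(0)
model = lambda x: 2.0 * x                          # stands in for a trained network
X_cal = rng.uniform(-1.0, 1.0, 500)
y_cal = 2.0 * X_cal + rng.normal(0.0, 0.1, 500)
lo, hi = conformal_interval(model, X_cal, y_cal, 0.5)
```

With observation noise of standard deviation 0.1, the 90% interval around the prediction 1.0 at `x = 0.5` comes out roughly 0.33 wide.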
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Scaling Up Bayesian Uncertainty Quantification for Inverse Problems using Deep Neural Networks [2.455468619225742]
We propose a novel CES approach for Bayesian inference based on deep neural network (DNN) models for the emulation phase.
The resulting algorithm is not only computationally more efficient, but also less sensitive to the training set.
Overall, our method, henceforth called the Dimension-Reduced Emulative Autoencoder Monte Carlo (DREAM) algorithm, is able to scale Bayesian UQ up to thousands of dimensions in physics-constrained inverse problems.
arXiv Detail & Related papers (2021-01-11T14:18:38Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
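The Hamiltonian Monte Carlo machinery underlying HMCAM follows a standard pattern: resample a momentum, integrate Hamiltonian dynamics with leapfrog steps, then apply a Metropolis correction. The sketch below is generic textbook HMC on a 1-D standard-normal target, not the paper's accumulated-momentum variant or its adversarial objective.

```python
import numpy as np

def hmc_step(theta, log_prob, log_prob_grad, step=0.1, n_leapfrog=20, rng=None):
    """One Hamiltonian Monte Carlo transition for a 1-D target."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal()                      # resample momentum
    theta_new, p_new = theta, p
    # Leapfrog integration of the Hamiltonian dynamics.
    p_new += 0.5 * step * log_prob_grad(theta_new)
    for _ in range(n_leapfrog - 1):
        theta_new += step * p_new
        p_new += step * log_prob_grad(theta_new)
    theta_new += step * p_new
    p_new += 0.5 * step * log_prob_grad(theta_new)
    # Metropolis correction on the Hamiltonian (potential + kinetic energy).
    h_old = -log_prob(theta) + 0.5 * p**2
    h_new = -log_prob(theta_new) + 0.5 * p_new**2
    if np.log(rng.uniform()) < h_old - h_new:
        return theta_new
    return theta

# Sample a standard-normal target.
logp = lambda t: -0.5 * t**2
grad = lambda t: -t
rng = np.random.default_rng(0)
theta, chain = 0.0, []
for _ in range(2000):
    theta = hmc_step(theta, logp, grad, rng=rng)
    chain.append(theta)
chain = np.array(chain)
```

For adversarial-example generation, the log-density would instead encode the attack objective and the perturbation constraint, but the transition kernel keeps this shape.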
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- GACEM: Generalized Autoregressive Cross Entropy Method for Multi-Modal Black Box Constraint Satisfaction [69.94831587339539]
We present a modified Cross-Entropy Method (CEM) that uses a masked auto-regressive neural network for modeling uniform distributions over the solution space.
Our algorithm is able to express complicated solution spaces, thus allowing it to track a variety of different solution regions.
arXiv Detail & Related papers (2020-02-17T20:21:20Z)
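The baseline that GACEM generalizes is the classic Gaussian cross-entropy method: sample from a parametric distribution, keep the elite fraction, and refit the sampler to those elites. The sketch below is that vanilla CEM on a toy continuous objective; the paper's contribution is replacing the factored Gaussian with a masked autoregressive network so the sampler can track several solution regions at once.

```python
import numpy as np

def cross_entropy_method(score, dim=2, n_samples=100, n_elite=10,
                         n_iters=50, seed=0):
    """Vanilla Gaussian CEM: refit mean/std to the elite samples each round."""
    rng = np.random.default_rng(seed)
    mu, sd = np.zeros(dim), np.ones(dim) * 2.0
    for _ in range(n_iters):
        x = rng.normal(mu, sd, size=(n_samples, dim))
        elite = x[np.argsort(score(x))[-n_elite:]]   # best-scoring samples
        mu = elite.mean(axis=0)
        sd = elite.std(axis=0) + 1e-3                # floor avoids collapse
    return mu

# Maximize a smooth function whose optimum is at (1, -2).
best = cross_entropy_method(
    lambda x: -((x - np.array([1.0, -2.0])) ** 2).sum(axis=1))
```

A unimodal Gaussian sampler like this one can only lock onto a single region, which is exactly the limitation the autoregressive model in GACEM is meant to lift.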
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.