Bayesian deep learning framework for uncertainty quantification in high
dimensions
- URL: http://arxiv.org/abs/2210.11737v1
- Date: Fri, 21 Oct 2022 05:20:06 GMT
- Title: Bayesian deep learning framework for uncertainty quantification in high
dimensions
- Authors: Jeahan Jung, Minseok Choi
- Abstract summary: We develop a novel deep learning method for uncertainty quantification in stochastic partial differential equations based on a Bayesian neural network (BNN) and Hamiltonian Monte Carlo (HMC).
A BNN learns the posterior distribution over the network parameters by performing Bayesian inference on them.
The posterior distribution is efficiently sampled using HMC to quantify uncertainties in the system.
- Score: 6.282068591820945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a novel deep learning method for uncertainty quantification in
stochastic partial differential equations based on a Bayesian neural network
(BNN) and Hamiltonian Monte Carlo (HMC). A BNN learns the posterior
distribution over the network parameters by performing Bayesian inference on
them. The posterior distribution is efficiently sampled using HMC to quantify
uncertainties in the system. Several numerical examples are shown for both
forward and inverse problems in high dimension to demonstrate the
effectiveness of the proposed method for uncertainty quantification. The
results also suggest that the computational cost is almost independent of the
problem dimension, demonstrating the method's potential for tackling the
so-called curse of dimensionality.
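As a concrete reference, below is a minimal sketch of the HMC loop such a framework relies on, written against a generic log-posterior; `log_post` and `grad_log_post` are hypothetical placeholders for the BNN posterior induced by the SPDE data, not the paper's actual implementation:

```python
import numpy as np

def hmc(log_post, grad_log_post, theta0, n_samples, step=0.01, n_leap=20, seed=0):
    """Vanilla HMC over a flattened parameter vector theta."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    samples = []
    for _ in range(n_samples):
        r = rng.standard_normal(theta.size)        # resample momentum
        th, rr = theta.copy(), r.copy()
        rr += 0.5 * step * grad_log_post(th)       # leapfrog half step
        for _ in range(n_leap - 1):
            th += step * rr
            rr += step * grad_log_post(th)
        th += step * rr
        rr += 0.5 * step * grad_log_post(th)       # final half step
        # Metropolis correction on the Hamiltonian
        log_accept = (log_post(th) - 0.5 * rr @ rr) - (log_post(theta) - 0.5 * r @ r)
        if np.log(rng.uniform()) < log_accept:
            theta = th
        samples.append(theta.copy())
    return np.array(samples)
```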
Related papers
- A Compact Representation for Bayesian Neural Networks By Removing
Permutation Symmetry [22.229664343428055]
We show that the role of permutations can be meaningfully quantified by a number-of-transpositions metric.
We then show that the recently proposed rebasin method allows us to summarize HMC samples into a compact representation.
We show that this compact representation allows us to compare trained BNNs directly in weight space across sampling methods and variational inference.
arXiv Detail & Related papers (2023-12-31T23:57:05Z)
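A hedged sketch of the alignment idea summarized above: map each HMC weight sample into a common basis by permuting its hidden units to best match a reference sample. The one-hidden-layer setup and the Hungarian-algorithm matching here are illustrative assumptions; the rebasin paper's actual procedure and metric may differ:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_sample(W1_ref, W1, W2):
    """Permute hidden units of one sample (W1: hidden x in,
    W2: out x hidden) to best match a reference sample."""
    sim = W1_ref @ W1.T                      # unit-to-unit similarity
    row, perm = linear_sum_assignment(-sim)  # maximize total similarity
    return W1[perm], W2[:, perm]             # permute rows and columns
```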
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Learning Active Subspaces for Effective and Scalable Uncertainty Quantification in Deep Neural Networks [13.388835540131508]
We propose a novel scheme for constructing a low-dimensional subspace of the neural network parameters.
We demonstrate that the significantly reduced active subspace enables effective and scalable Bayesian inference.
Our approach provides reliable predictions with robust uncertainty estimates for various regression tasks.
arXiv Detail & Related papers (2023-09-06T15:00:36Z)
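A minimal numpy sketch of the active-subspace construction summarized above: estimate the second-moment matrix of the loss gradient over the network parameters, keep the top eigenvectors, and restrict Bayesian inference to that low-dimensional subspace. The gradient samples `grads` are a hypothetical stand-in for per-batch loss gradients:

```python
import numpy as np

def active_subspace(grads, k):
    """grads: (n_samples, n_params) loss gradients; k: subspace dim."""
    C = grads.T @ grads / len(grads)      # estimate of E[g g^T]
    eigval, eigvec = np.linalg.eigh(C)    # eigenvalues in ascending order
    return eigvec[:, -k:]                 # top-k directions P

# Inference then runs over z in R^k with theta = theta_map + P @ z,
# e.g. MCMC or variational inference on the k-dimensional posterior.
```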
- Efficient Bayesian Physics Informed Neural Networks for Inverse Problems via Ensemble Kalman Inversion [0.0]
We present a new efficient inference algorithm for B-PINNs that uses Ensemble Kalman Inversion (EKI) for high-dimensional inference tasks.
We find that our proposed method can achieve inference results with informative uncertainty estimates comparable to Hamiltonian Monte Carlo (HMC)-based B-PINNs with a much reduced computational cost.
arXiv Detail & Related papers (2023-03-13T18:15:26Z)
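A minimal sketch of one Ensemble Kalman Inversion update of the kind used above (perturbed-observations variant); the forward map `G`, data `y`, and noise covariance `Gamma` are generic placeholders, not the B-PINN specifics:

```python
import numpy as np

def eki_step(thetas, G, y, Gamma, rng):
    """thetas: (J, p) parameter ensemble; G: theta -> model output (d,);
    y: observed data (d,); Gamma: (d, d) observation noise covariance."""
    J = len(thetas)
    Gs = np.array([G(t) for t in thetas])   # forward pass on ensemble
    dt = thetas - thetas.mean(0)            # centered parameters
    dg = Gs - Gs.mean(0)                    # centered model outputs
    C_tg = dt.T @ dg / J                    # (p, d) cross-covariance
    C_gg = dg.T @ dg / J                    # (d, d) output covariance
    for j in range(J):
        y_j = y + rng.multivariate_normal(np.zeros(len(y)), Gamma)
        thetas[j] += C_tg @ np.linalg.solve(C_gg + Gamma, y_j - Gs[j])
    return thetas
```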
- Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called the Variational Neural Network (VNN).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z)
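A hedged sketch of a layer in the spirit of the VNN summary above: two learnable sub-layers map the input to the mean and scale of a Gaussian over the layer's pre-activations, which is then sampled. The parameterization is simplified relative to the paper:

```python
import numpy as np

def vnn_layer(x, W_mu, b_mu, W_sig, b_sig, rng):
    """x: (in,); returns one sample from the layer's output distribution."""
    mu = W_mu @ x + b_mu                          # mean sub-layer
    sigma = np.log1p(np.exp(W_sig @ x + b_sig))   # softplus scale > 0
    z = mu + sigma * rng.standard_normal(mu.shape)
    return np.tanh(z)                             # any nonlinearity
```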
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
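For context on the critique above, the standard evidential-regression uncertainty split being examined (the Normal-Inverse-Gamma parameterization of Amini et al.; a known prior result, not this paper's contribution):

```python
def evidential_uncertainties(gamma, nu, alpha, beta):
    """Normal-Inverse-Gamma evidential outputs -> prediction and
    uncertainty decomposition (requires alpha > 1)."""
    prediction = gamma                     # E[mu]
    aleatoric = beta / (alpha - 1)         # E[sigma^2]
    epistemic = beta / (nu * (alpha - 1))  # Var[mu]
    return prediction, aleatoric, epistemic
```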
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
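A simplified sketch of the Nadaraya-Watson estimate underlying the NUQ entry above; the entropy at the end is an illustrative uncertainty proxy, not NUQ's actual measure:

```python
import numpy as np

def nw_class_probs(x, X_train, y_train, n_classes, h=1.0):
    """Nadaraya-Watson estimate of p(y = c | x), Gaussian kernel."""
    d2 = ((X_train - x) ** 2).sum(1)
    w = np.exp(-0.5 * d2 / h**2)                 # kernel weights
    p = np.array([w[y_train == c].sum() for c in range(n_classes)])
    return p / max(p.sum(), 1e-12)

def entropy(p, eps=1e-12):
    """Simple uncertainty proxy over the estimated label distribution."""
    return -(p * np.log(p + eps)).sum()
```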
- Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with a proposed solution to extract aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
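A hedged sketch of the sampling-free idea above: propagate means and variances analytically through a linear layer under multiplicative Gaussian activation noise instead of drawing Monte Carlo samples. Noise placement and parameterization are simplified relative to the paper:

```python
import numpy as np

def linear_moments(h, W, b, alpha):
    """h: (in,) deterministic activations. Multiplicative noise
    eps ~ N(1, alpha) gives a = h * eps with mean h, var alpha * h^2.
    Returns mean and variance of z = W a + b (independent a_j assumed)."""
    mean_a, var_a = h, alpha * h**2
    mean_z = W @ mean_a + b
    var_z = (W**2) @ var_a        # Var(z_i) = sum_j W_ij^2 Var(a_j)
    return mean_z, var_z
```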
- Efficient Variational Inference for Sparse Deep Learning with Theoretical Guarantee [20.294908538266867]
Sparse deep learning aims to address the challenge of huge storage consumption by deep neural networks.
In this paper, we train sparse deep neural networks with a fully Bayesian treatment under spike-and-slab priors.
We develop a set of computationally efficient variational inferences via continuous relaxation of Bernoulli distribution.
arXiv Detail & Related papers (2020-11-15T03:27:54Z)
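A minimal sketch of one continuous relaxation of the Bernoulli inclusion variables in such spike-and-slab variational posteriors (the binary Concrete / Gumbel-sigmoid trick; the paper's exact relaxation may differ):

```python
import numpy as np

def relaxed_bernoulli(logit_pi, tau, rng):
    """Differentiable surrogate for z ~ Bernoulli(sigmoid(logit_pi));
    z approaches {0, 1} as temperature tau -> 0."""
    u = rng.uniform(1e-6, 1 - 1e-6, size=np.shape(logit_pi))
    logistic = np.log(u) - np.log(1 - u)       # logistic noise
    return 1 / (1 + np.exp(-(logit_pi + logistic) / tau))

# sparse weight: w = z * w_slab, so near-zero z prunes the connection
```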
- Efficient Uncertainty Quantification for Dynamic Subsurface Flow with Surrogate by Theory-guided Neural Network [0.0]
We propose a methodology for efficient uncertainty quantification for dynamic subsurface flow with a surrogate constructed by the Theory-guided Neural Network (TgNN).
Parameters, time, and location comprise the input of the neural network, while the quantity of interest is the output.
The trained neural network can predict solutions of subsurface flow problems with new parameters.
arXiv Detail & Related papers (2020-04-25T12:41:57Z)
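A minimal sketch of the surrogate-based UQ loop described above: once a network `u_hat(params, t, x)` has been trained (training and the theory-guided residual terms are omitted here), uncertainty in the quantity of interest is estimated by cheap Monte Carlo over random input parameters. `u_hat` and `sample_params` are hypothetical placeholders:

```python
import numpy as np

def surrogate_uq(u_hat, sample_params, t, x, n_mc=1000, seed=0):
    """u_hat: trained surrogate (params, t, x) -> QoI (assumed given);
    sample_params: rng -> one random draw of subsurface parameters."""
    rng = np.random.default_rng(seed)
    qoi = np.array([u_hat(sample_params(rng), t, x) for _ in range(n_mc)])
    return qoi.mean(), qoi.std()   # predictive mean and uncertainty
```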
This list is automatically generated from the titles and abstracts of the papers on this site.