Variational Bayes Neural Network: Posterior Consistency, Classification
Accuracy and Computational Challenges
- URL: http://arxiv.org/abs/2011.09592v1
- Date: Thu, 19 Nov 2020 00:11:27 GMT
- Title: Variational Bayes Neural Network: Posterior Consistency, Classification
Accuracy and Computational Challenges
- Authors: Shrijita Bhattacharya, Zihuan Liu, Tapabrata Maiti
- Abstract summary: This paper develops a variational Bayesian neural network estimation methodology and related statistical theory.
The development is motivated by an important biomedical engineering application, namely building predictive tools for the transition from mild cognitive impairment to Alzheimer's disease.
- Score: 0.3867363075280544
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Bayesian neural network (BNN) models have resurged in recent years due to
the advancement of scalable computation and their utility in solving complex
prediction problems in a wide variety of applications. Despite the popularity
and usefulness of BNNs, conventional Markov chain Monte Carlo based
implementations suffer from high computational cost, limiting the use of this
powerful technique in large-scale studies. Variational Bayes inference has
become a viable alternative for circumventing some of these computational
issues. Although the approach is popular in machine learning, its application
in statistics is somewhat limited. This paper develops a variational Bayesian
neural network estimation methodology and the related statistical theory. The
numerical algorithms and their implementation are discussed in detail. The
theory for posterior consistency, a desirable property in nonparametric
Bayesian statistics, is also developed. This theory provides an assessment of
prediction accuracy and guidelines for characterizing the prior distributions
and the variational family. The loss incurred by using the variational
posterior in place of the true posterior is also quantified. The development is
motivated by an
important biomedical engineering application, namely building predictive tools
for the transition from mild cognitive impairment to Alzheimer's disease. The
predictors are multi-modal and may involve complex interactions.
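To make the estimation procedure concrete, below is a minimal sketch of mean-field Gaussian variational inference for a one-hidden-layer regression network, trained by maximizing the ELBO with the reparameterization trick. This is a generic illustration of the technique, not the authors' implementation; the data, architecture, prior, and all hyperparameters are placeholder assumptions.

```python
# Minimal mean-field variational Bayes sketch for a one-hidden-layer
# regression network (illustrative; not the paper's exact method).
import torch

torch.manual_seed(0)

# Toy regression data (assumed; the paper's application uses
# multi-modal Alzheimer's predictors).
X = torch.randn(200, 4)
y = torch.sin(X.sum(1, keepdim=True)) + 0.1 * torch.randn(200, 1)

H = 16                                    # hidden width (assumed)
sizes = [4 * H, H, H, 1]                  # flat sizes of W1, b1, W2, b2
D = sum(sizes)

# Mean-field Gaussian variational family q(theta) = N(mu, diag(sigma^2)).
mu = torch.zeros(D, requires_grad=True)
rho = torch.full((D,), -3.0, requires_grad=True)   # sigma = softplus(rho)

def forward(theta, X):
    # Unpack the flat parameter vector into a one-hidden-layer network.
    w1, b1, w2, b2 = torch.split(theta, sizes)
    h = torch.tanh(X @ w1.view(4, H) + b1)
    return h @ w2.view(H, 1) + b2

opt = torch.optim.Adam([mu, rho], lr=1e-2)
prior_std, noise_std = 1.0, 0.1           # N(0,1) prior, Gaussian likelihood

for step in range(2000):
    sigma = torch.nn.functional.softplus(rho)
    theta = mu + sigma * torch.randn(D)   # reparameterization trick
    # Negative ELBO = negative log-likelihood + KL(q || prior),
    # with the Gaussian-to-Gaussian KL available in closed form.
    nll = ((y - forward(theta, X)) ** 2).sum() / (2 * noise_std ** 2)
    kl = (torch.log(prior_std / sigma)
          + (sigma ** 2 + mu ** 2) / (2 * prior_std ** 2) - 0.5).sum()
    loss = nll + kl
    opt.zero_grad(); loss.backward(); opt.step()

# Posterior predictive by Monte Carlo over the fitted variational posterior.
with torch.no_grad():
    sigma = torch.nn.functional.softplus(rho)
    draws = [forward(mu + sigma * torch.randn(D), X) for _ in range(50)]
    rmse = (torch.stack(draws).mean(0) - y).pow(2).mean().sqrt()
    print("train RMSE:", rmse.item())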
Related papers
- Amortised Inference in Bayesian Neural Networks [0.0]
We introduce the Amortised Pseudo-Observation Variational Inference Bayesian Neural Network (APOVI-BNN)
We show that the amortised inference is of similar or better quality to those obtained through traditional variational inference.
We then discuss how the APOVI-BNN may be viewed as a new member of the neural process family.
arXiv Detail & Related papers (2023-09-06T14:02:33Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important for forecasting nonstationary processes or processes with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple-hypothesis predictors for regression problems.
It is proved that this structured model can efficiently interpolate the tessellation induced by its hypotheses and approximate the multi-hypothesis target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
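For background on the multiple-hypotheses idea in the paper above: a common way to train M predictors so that they cover a multi-modal target is a winner-takes-all loss, where only the best-fitting hypothesis receives each sample's gradient. The sketch below combines that with plain Gaussian RBF features; the paper's structured construction and tessellation analysis go well beyond this, and every name and value here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bimodal toy target: each x admits two plausible y values, so a single
# regressor averaging the modes would fail.
X = rng.uniform(-1, 1, size=(500, 1))
y = np.sin(3 * X) * rng.choice([-1.0, 1.0], size=(500, 1))

M, C, GAMMA, LR = 4, 20, 10.0, 0.5        # all assumed hyperparameters
centers = rng.uniform(-1, 1, size=(C, 1))

def phi(Xs):
    # Gaussian RBF features over fixed random centers.
    return np.exp(-GAMMA * ((Xs[:, None, :] - centers[None, :, :]) ** 2).sum(-1))

W = rng.normal(0.0, 0.1, size=(M, C))     # one linear head per hypothesis

for _ in range(300):
    F = phi(X)                            # (N, C) feature matrix
    err = F @ W.T - y                     # (N, M): residual of each hypothesis
    best = np.argmin(err ** 2, axis=1)    # winner-takes-all assignment
    for m in range(M):                    # update only each sample's winner
        mask = best == m
        if mask.any():
            W[m] -= LR * (F[mask].T @ err[mask, m]) / mask.sum()
```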
- Efficient Bayes Inference in Neural Networks through Adaptive Importance Sampling [19.518237361775533]
In BNNs, a complete posterior distribution of the unknown weight and bias parameters of the network is produced during the training stage.
This feature is useful in countless machine learning applications.
It is particularly appealing in areas where decision-making has a crucial impact, such as medical healthcare or autonomous driving.
arXiv Detail & Related papers (2022-10-03T14:59:23Z)
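As context for the paper above: importance sampling approximates posterior expectations by reweighting draws from a proposal distribution, and adaptive schemes refine the proposal iteratively. A minimal self-normalized example on a one-parameter stand-in model is sketched below; the proposal-adaptation step, which is the paper's focus, is omitted, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny Bayesian "network" with a single weight: y = w * x + noise
# (an illustrative stand-in for a full BNN weight posterior).
x = rng.normal(size=50)
y = 2.0 * x + 0.5 * rng.normal(size=50)

def log_prior(w):
    # w ~ N(0, 10^2); additive constants cancel after self-normalization.
    return -0.5 * (w / 10.0) ** 2

def log_lik(w):
    # Gaussian likelihood with known noise scale 0.5, vectorized over w.
    return -0.5 * (((y - w[:, None] * x) / 0.5) ** 2).sum(axis=1)

# Fixed Gaussian proposal q; adaptive IS would update (mu_q, sd_q) each round.
mu_q, sd_q = 0.0, 5.0
w = rng.normal(mu_q, sd_q, size=5000)
log_q = -0.5 * ((w - mu_q) / sd_q) ** 2 - np.log(sd_q)

log_wts = log_prior(w) + log_lik(w) - log_q
wts = np.exp(log_wts - log_wts.max())
wts /= wts.sum()                 # self-normalized importance weights

print("posterior mean of w ~", (wts * w).sum())
```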
- Single Model Uncertainty Estimation via Stochastic Data Centering [39.71621297447397]
We are interested in estimating the uncertainties of deep neural networks.
We present a striking new finding: an ensemble of neural networks with the same weight initialization, trained on datasets shifted by a constant bias, gives rise to slightly inconsistent trained models.
We show that $\Delta$-UQ's uncertainty estimates are superior to many of the current methods on a variety of benchmarks.
arXiv Detail & Related papers (2022-07-14T23:54:54Z)
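A rough illustration of the finding described above: training otherwise-identical learners on copies of the data shifted by different constant biases, then measuring their disagreement, yields an uncertainty signal. The sketch uses a closed-form ridge/RBF learner as a stand-in for a neural network and simply retrains per shift, rather than using the paper's single anchored model; the whole setup is an assumed toy.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D regression task (assumed, purely illustrative).
X = rng.uniform(-2, 2, size=(100, 1))
y = np.sin(2 * X) + 0.1 * rng.normal(size=(100, 1))

centers = np.linspace(-3, 3, 25)[:, None]
GAMMA, LAM = 4.0, 1e-3

def rbf(Xs):
    # Gaussian RBF features over a fixed grid of centers.
    return np.exp(-GAMMA * (Xs - centers.T) ** 2)

def fit(Xs, ys):
    # Closed-form ridge regression: a cheap stand-in for training a network.
    Phi = rbf(Xs)
    return np.linalg.solve(Phi.T @ Phi + LAM * np.eye(len(centers)), Phi.T @ ys)

# One "ensemble member" per constant input shift; each member sees the same
# data moved by a different bias, and predictions are queried at the same
# shifted location so the members are comparable.
shifts = [-0.5, 0.0, 0.5]
Xtest = np.linspace(-3, 3, 200)[:, None]
preds = np.stack([rbf(Xtest + c) @ fit(X + c, y) for c in shifts])

uncertainty = preds.std(axis=0)   # disagreement across shifts = uncertainty
print("most uncertain x:", float(Xtest[uncertainty.argmax()]))
```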
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called a structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
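For reference, the standard SCM definition that the NCM construction above builds on (background notation, not necessarily the paper's exact formalism):

```latex
% Standard structural causal model (SCM) definition, following Pearl.
\[
  \mathcal{M} \;=\; \langle \mathbf{U}, \mathbf{V}, \mathcal{F}, P(\mathbf{U}) \rangle,
  \qquad
  V_i \;\leftarrow\; f_i\bigl(\mathrm{pa}(V_i),\, U_i\bigr), \quad f_i \in \mathcal{F},
\]
% U: exogenous noise variables with distribution P(U); V: endogenous
% variables; each mechanism f_i assigns V_i a value from its parents
% pa(V_i) and its noise term. An NCM keeps this structure but
% parameterizes each f_i with a neural network.
```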
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution that seeks out regions of feature space where the model is unjustifiably overconfident and conditionally raises the entropy of those predictions towards the entropy of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
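A minimal sketch of the entropy-raising step described above: flagged predictions are blended toward the marginal label prior, which increases their entropy. How the overconfident regions are found is the paper's contribution; here the flag, the prior, and the blending weight are all assumed inputs.

```python
import numpy as np

def raise_entropy(probs, flag, prior, alpha=0.8):
    """Blend flagged predictions toward the label prior.

    probs: (N, K) predicted class probabilities
    flag:  (N,) boolean, True where the model is deemed unjustifiably confident
    prior: (K,) marginal label distribution
    alpha: blending weight toward the prior (assumed value)
    """
    out = probs.copy()
    out[flag] = (1 - alpha) * probs[flag] + alpha * prior
    return out

probs = np.array([[0.98, 0.01, 0.01],     # confident far from training data
                  [0.40, 0.35, 0.25]])
flag = np.array([True, False])            # e.g. from a density/distance test
prior = np.array([1 / 3, 1 / 3, 1 / 3])

print(raise_entropy(probs, flag, prior))  # first row pulled toward the prior
```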
- Neuro-symbolic Neurodegenerative Disease Modeling as Probabilistic Programmed Deep Kernels [93.58854458951431]
We present a probabilistic programmed deep kernel learning approach to personalized, predictive modeling of neurodegenerative diseases.
Our analysis considers a spectrum of neural and symbolic machine learning approaches.
We run evaluations on the problem of Alzheimer's disease prediction, yielding results that surpass deep learning baselines.
arXiv Detail & Related papers (2020-09-16T15:16:03Z)
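As background for the paper above, the generic deep-kernel construction composes a neural feature extractor with a base kernel; the paper wraps such kernels in a probabilistic program, and the notation below is generic rather than theirs:

```latex
% Generic deep kernel (Wilson et al.-style background construction).
\[
  k_{\mathrm{deep}}(x, x') \;=\; k_{\theta}\bigl(g_{w}(x),\, g_{w}(x')\bigr),
\]
% g_w: neural feature extractor; k_theta: base kernel (e.g. an RBF kernel).
% A Gaussian process with covariance k_deep then yields predictive
% distributions, here over disease trajectories.
```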
- Statistical Foundation of Variational Bayes Neural Networks [0.456877715768796]
Variational Bayes (VB) provides a useful alternative that circumvents the computational cost and time complexity associated with generating samples from the true posterior.
This paper establishes the fundamental result of posterior consistency for the mean-field variational posterior (VP) of a feed-forward artificial neural network model.
arXiv Detail & Related papers (2020-06-29T03:04:18Z)
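Schematically, posterior consistency for the variational posterior means the fitted q asymptotically concentrates near the true data-generating density; an informal statement (the paper's exact neighborhoods, rates, and regularity conditions differ) is:

```latex
% Informal posterior-consistency statement for the variational posterior.
\[
  \widehat{q}_n\bigl(\{\,\theta : d(p_{\theta}, p_{\theta_0}) > \varepsilon \,\}\bigr)
  \;\longrightarrow\; 0
  \quad \text{in } P_{\theta_0}\text{-probability, for every } \varepsilon > 0,
\]
% i.e. the fitted mean-field posterior q_n asymptotically concentrates on
% densities close, in a suitable metric d, to the truth p_{theta_0}.
```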
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.