Understanding Uncertainty in Bayesian Deep Learning
- URL: http://arxiv.org/abs/2106.13055v1
- Date: Fri, 21 May 2021 19:22:17 GMT
- Title: Understanding Uncertainty in Bayesian Deep Learning
- Authors: Cooper Lorsung
- Abstract summary: We show that traditional training procedures for NLMs can drastically underestimate uncertainty in data-scarce regions.
We propose a novel training method that both captures useful predictive uncertainties and allows for the incorporation of domain knowledge.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Linear Models (NLM) are deep Bayesian models that produce predictive
uncertainty by learning features from the data and then performing Bayesian
linear regression over these features. Despite their popularity, few works have
focused on formally evaluating the predictive uncertainties of these models.
Furthermore, existing works point out the difficulties of encoding domain
knowledge in models like NLMs, making them unsuitable for applications where
interpretability is required. In this work, we show that traditional training
procedures for NLMs can drastically underestimate uncertainty in data-scarce
regions. We identify the underlying reasons for this behavior and propose a
novel training method that both captures useful predictive uncertainties and
allows for the incorporation of domain knowledge.
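To make the model class concrete, below is a minimal sketch of the Bayesian linear regression step that defines an NLM. This is not the paper's training procedure: the random ReLU features stand in for a learned network's penultimate-layer features, and the prior precision alpha and noise precision beta are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x, W, b):
    """Stand-in feature map: random ReLU features in place of the learned
    network features an NLM would actually use."""
    return np.maximum(x @ W + b, 0.0)

def fit_blr(Phi, y, alpha=1.0, beta=25.0):
    """Closed-form Bayesian linear regression over features:
    prior w ~ N(0, alpha^-1 I), likelihood y ~ N(Phi w, beta^-1)."""
    D = Phi.shape[1]
    S = np.linalg.inv(alpha * np.eye(D) + beta * Phi.T @ Phi)  # posterior covariance
    m = beta * S @ Phi.T @ y                                   # posterior mean
    return m, S

def predict(Phi_star, m, S, beta=25.0):
    """Predictive mean and variance; the variance is the quantity that
    should grow in data-scarce regions (and that the paper shows is
    underestimated under traditional NLM training)."""
    mean = Phi_star @ m
    var = 1.0 / beta + np.einsum("nd,dk,nk->n", Phi_star, S, Phi_star)
    return mean, var

# Toy usage: 1-D regression with a gap in the inputs.
x = np.concatenate([rng.uniform(-2, -1, 40), rng.uniform(1, 2, 40)])[:, None]
y = np.sin(3 * x[:, 0]) + 0.2 * rng.standard_normal(80)
W, b = rng.standard_normal((1, 50)), rng.standard_normal(50)
m, S = fit_blr(features(x, W, b), y)
x_test = np.linspace(-3, 3, 200)[:, None]
mean, var = predict(features(x_test, W, b), m, S)
```

With well-calibrated uncertainty, `var` should be largest for test inputs inside the gap between the two training clusters.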
Related papers
- An Ambiguity Measure for Recognizing the Unknowns in Deep Learning [0.0]
We study deep neural networks in terms of the scope of the data on which they are trained.
We propose a measure for quantifying the ambiguity of inputs for any given model.
arXiv Detail & Related papers (2023-12-11T02:57:12Z)
- Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
arXiv Detail & Related papers (2023-11-15T05:58:35Z)
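As a rough illustration of the input clarification ensembling loop described above: `llm` and `clarifier` below are hypothetical stand-in callables, not an API from the paper or any library.

```python
from collections import Counter

def clarification_ensemble(x, llm, clarifier, n_clarifications=5):
    """Sketch: rewrite the input into several disambiguated versions,
    answer each, then ensemble. 'llm' and 'clarifier' are assumed to be
    callables mapping a string to a string."""
    clarified = [clarifier(x) for _ in range(n_clarifications)]
    answers = [llm(xc) for xc in clarified]
    votes = Counter(answers)
    answer, count = votes.most_common(1)[0]
    # Disagreement across clarifications is attributed to input ambiguity,
    # which is the source of uncertainty the decomposition aims to isolate.
    confidence = count / len(answers)
    return answer, confidence
```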
- Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification [0.0]
This paper presents preliminary results on uncertainty quantification for system identification with neural state-space models.
We frame the learning problem in a Bayesian probabilistic setting and obtain posterior distributions for the neural network's weights and outputs.
Based on the posterior, we construct credible intervals on the outputs and define a surprise index which can effectively diagnose usage of the model in a potentially dangerous out-of-distribution regime.
arXiv Detail & Related papers (2023-04-13T08:57:33Z)
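The credible intervals and surprise index above have a simple Monte Carlo reading. The sketch below is one plausible construction, assuming simulated output trajectories drawn from the posterior over weights; the paper's exact definition of the surprise index may differ.

```python
import numpy as np

def credible_interval(posterior_samples, level=0.95):
    """posterior_samples: array of shape (n_samples, T), simulated outputs
    drawn from the posterior over network weights. Returns pointwise
    credible bounds."""
    lo = np.quantile(posterior_samples, (1 - level) / 2, axis=0)
    hi = np.quantile(posterior_samples, 1 - (1 - level) / 2, axis=0)
    return lo, hi

def surprise_index(y_obs, posterior_samples, level=0.95):
    """One plausible construction: the fraction of observed points falling
    outside the credible band. A high value flags potential use of the
    model in an out-of-distribution regime."""
    lo, hi = credible_interval(posterior_samples, level)
    outside = (y_obs < lo) | (y_obs > hi)
    return outside.mean()
```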
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of the evidence carried by each sample, and use it to dynamically reweight the loss terms of the objective so that the network focuses on representation learning for uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
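One way the FIM reweighting described above could look in code; this is a hedged sketch rather than the paper's exact objective. It uses the fact that the diagonal of a Dirichlet distribution's Fisher information involves trigamma functions of the concentration parameters.

```python
import torch
import torch.nn.functional as F

def iedl_style_loss(logits, targets_onehot):
    """Hedged sketch of a FIM-weighted evidential loss. Evidence e >= 0 via
    softplus; Dirichlet concentration alpha = e + 1. The diagonal of the
    Dirichlet Fisher information is trigamma(alpha_i) - trigamma(alpha_0),
    which is larger for low-evidence (uncertain) classes, so those classes
    receive more weight in the squared-error term."""
    evidence = F.softplus(logits)
    alpha = evidence + 1.0
    alpha0 = alpha.sum(dim=1, keepdim=True)
    p_hat = alpha / alpha0                                   # Dirichlet mean
    fim_diag = torch.polygamma(1, alpha) - torch.polygamma(1, alpha0)
    per_class_err = (targets_onehot - p_hat) ** 2
    return (fim_diag * per_class_err).sum(dim=1).mean()
```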
- Post-hoc Uncertainty Learning using a Dirichlet Meta-Model [28.522673618527417]
We propose a novel Bayesian meta-model to augment pre-trained models with better uncertainty quantification abilities.
Our proposed method requires no additional training data and is flexible enough to quantify different uncertainties.
We demonstrate the flexibility and superior empirical performance of our meta-model approach across a range of applications.
arXiv Detail & Related papers (2022-12-14T17:34:11Z)
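A minimal sketch of the meta-model idea, assuming the meta-model is a small Dirichlet head over a frozen base network's features; the architecture and the entropy-based uncertainty split below are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DirichletMetaModel(nn.Module):
    """Small head on a frozen pre-trained network that outputs Dirichlet
    concentration parameters. 'base' is assumed to return features."""
    def __init__(self, base, feat_dim, n_classes):
        super().__init__()
        self.base = base
        for p in self.base.parameters():    # keep the pre-trained model frozen
            p.requires_grad_(False)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        with torch.no_grad():
            feats = self.base(x)
        return F.softplus(self.head(feats)) + 1.0   # Dirichlet concentrations

def uncertainties(alpha):
    """Total/aleatoric/epistemic split implied by a Dirichlet over class
    probabilities (total entropy minus expected entropy)."""
    alpha0 = alpha.sum(dim=1, keepdim=True)
    p = alpha / alpha0
    total = -(p * p.log()).sum(dim=1)               # entropy of the mean
    expected = -(p * (torch.digamma(alpha + 1)
                      - torch.digamma(alpha0 + 1))).sum(dim=1)
    epistemic = total - expected                    # mutual information
    return total, expected, epistemic
```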
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks [50.15201777970128]
We propose BayesCap, which learns a Bayesian identity mapping for the frozen model, enabling uncertainty estimation.
BayesCap is a memory-efficient method that can be trained on a small fraction of the original dataset.
We show the efficacy of our method on a wide variety of tasks with a diverse set of architectures.
arXiv Detail & Related papers (2022-07-14T12:50:09Z)
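A hedged sketch of the identity-mapping idea: a small "cap" network is trained to reproduce the frozen model's output while also predicting a per-output scale, so the frozen prediction gains an uncertainty estimate. The Gaussian likelihood here is an illustrative simplification; the paper uses a more general heteroscedastic formulation.

```python
import torch
import torch.nn as nn

class IdentityCap(nn.Module):
    """Cap network over a frozen model's output: reconstructs the output
    (an identity mapping) and predicts a per-dimension scale."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, dim)         # reconstructed output
        self.log_sigma = nn.Linear(hidden, dim)  # per-dimension scale

    def forward(self, y_frozen):
        h = self.trunk(y_frozen)
        return self.mu(h), self.log_sigma(h)

def cap_loss(mu, log_sigma, y_frozen):
    """Heteroscedastic Gaussian NLL against the frozen model's own output,
    so no extra labels are needed, only (a fraction of) the inputs."""
    inv_var = torch.exp(-2 * log_sigma)
    return (0.5 * inv_var * (y_frozen - mu) ** 2 + log_sigma).mean()
```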
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches in terms of both fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Uncertainty-Aware (UNA) Bases for Deep Bayesian Regression Using Multi-Headed Auxiliary Networks [23.100727871427367]
We show that traditional training procedures for Neural Linear Models drastically underestimate uncertainty on out-of-distribution inputs.
We propose a novel training framework that captures useful predictive uncertainties for downstream tasks.
arXiv Detail & Related papers (2020-06-21T02:46:05Z)
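Reading the title together with the summary, one plausible shape for the multi-headed auxiliary network is sketched below: a shared feature trunk with several auxiliary heads, whose trained features would then feed the Bayesian linear regression step shown under the main abstract. The sizes, head count, and choice of auxiliary targets are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class MultiHeadedAuxNet(nn.Module):
    """Shared feature trunk with K auxiliary output heads. Fitting the
    heads to diverse auxiliary targets encourages a feature basis that
    stays expressive away from the training data, which is what the
    Bayesian linear regression step needs for honest uncertainty."""
    def __init__(self, in_dim=1, feat_dim=50, n_heads=10):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.Tanh(),
            nn.Linear(feat_dim, feat_dim), nn.Tanh(),
        )
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, 1) for _ in range(n_heads)
        )

    def forward(self, x):
        feats = self.trunk(x)                    # basis for downstream BLR
        return feats, [head(feats) for head in self.heads]
```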
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.