Information-theoretic Analysis of Test Data Sensitivity in Uncertainty
- URL: http://arxiv.org/abs/2307.12456v1
- Date: Sun, 23 Jul 2023 23:42:06 GMT
- Title: Information-theoretic Analysis of Test Data Sensitivity in Uncertainty
- Authors: Futoshi Futami, Tomoharu Iwata
- Abstract summary: A recent analysis by Xu and Raginsky (2022) rigorously decomposed the predictive uncertainty in Bayesian inference into two kinds of uncertainty.
They analyzed those uncertainties in an information-theoretic way, assuming that the model is well-specified and treating the model's parameters as latent variables.
In this work, we study such uncertainty sensitivity using our novel decomposition method for the predictive uncertainty.
- Score: 32.27260899727789
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian inference is often utilized for uncertainty quantification tasks. A
recent analysis by Xu and Raginsky (2022) rigorously decomposed the predictive
uncertainty in Bayesian inference into two uncertainties, called aleatoric and
epistemic uncertainties, which represent the inherent randomness in the
data-generating process and the variability due to insufficient data,
respectively. They analyzed those uncertainties in an information-theoretic
way, assuming that the model is well-specified and treating the model's
parameters as latent variables. However, the existing information-theoretic
analysis of uncertainty cannot explain a widely believed property of
uncertainty: its sensitivity to the relationship between test and training
data. That is, when test data are similar to the training data in some sense,
the epistemic uncertainty should become small. In this work, we study such
uncertainty sensitivity using our novel decomposition method for the predictive
uncertainty. Our analysis successfully defines such sensitivity using
information-theoretic quantities. Furthermore, we extend the existing analysis
of Bayesian meta-learning and show, for the first time, novel sensitivities
among tasks.
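
As background for the abstract above, a minimal sketch of the standard information-theoretic decomposition used in this line of work may help; the notation is illustrative (not lifted from the paper) and assumes the well-specified setting in which the test label $Y$ is conditionally independent of the training data $D$ given the test input $X$ and the latent parameters $W$:

$$
\underbrace{H(Y \mid X, D)}_{\text{total predictive uncertainty}}
\;=\;
\underbrace{H(Y \mid X, W)}_{\text{aleatoric}}
\;+\;
\underbrace{I(W ; Y \mid X, D)}_{\text{epistemic}} .
$$

The epistemic term is a conditional mutual information that shrinks as the posterior over $W$ concentrates; the sensitivity studied in this paper concerns how such a term behaves when the test input is close to the training inputs, though the authors' own decomposition may differ in detail from the textbook form above.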
Related papers
- Uncertainty Decomposition and Error Margin Detection of Homodyned-K Distribution in Quantitative Ultrasound [1.912429179274357]
Homodyned K-distribution (HK-distribution) parameter estimation in quantitative ultrasound (QUS) has recently been addressed using Bayesian Neural Networks (BNNs).
BNNs have been shown to significantly reduce computational time in speckle statistics-based QUS without compromising accuracy and precision.
arXiv Detail & Related papers (2024-09-17T22:16:49Z)
- How disentangled are your classification uncertainties? [6.144680854063938]
Uncertainty Quantification in Machine Learning has progressed to predicting the source of uncertainty in a prediction.
This work proposes a set of experiments to evaluate disentanglement of aleatoric and epistemic uncertainty.
arXiv Detail & Related papers (2024-08-22T07:42:43Z)
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Identifying Drivers of Predictive Aleatoric Uncertainty [2.5311562666866494]
We present a simple approach to explain predictive aleatoric uncertainties.
We estimate uncertainty as predictive variance by adapting a neural network with a Gaussian output distribution.
We quantify our findings with a nuanced benchmark analysis that includes real-world datasets.
arXiv Detail & Related papers (2023-12-12T13:28:53Z)
- Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
arXiv Detail & Related papers (2023-11-15T05:58:35Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic for uncertainty quantification rather than an exact method.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
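
The last entry above ("The Hidden Uncertainty in a Neural Network's Activations") rests on one concrete mechanism: fit a density to the network's latent representations on training data and treat low density as a signal of epistemic uncertainty or an out-of-distribution input. A minimal sketch of that idea with a single Gaussian over features; the function names, toy data, and Gaussian choice are illustrative assumptions, not that paper's implementation:

```python
import numpy as np

def fit_feature_gaussian(train_feats: np.ndarray):
    """Fit a full-covariance Gaussian to latent features of shape (n_samples, n_dims)."""
    mean = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
    return mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1]

def epistemic_score(feats: np.ndarray, mean, cov_inv, logdet) -> np.ndarray:
    """Negative Gaussian log-density per feature vector; higher means a more novel input."""
    d = feats - mean
    maha = np.einsum("ij,jk,ik->i", d, cov_inv, d)  # squared Mahalanobis distance
    dim = feats.shape[1]
    return 0.5 * (maha + logdet + dim * np.log(2 * np.pi))

# Toy usage: arrays stand in for penultimate-layer activations; shifted features score higher.
rng = np.random.default_rng(0)
train = rng.normal(size=(500, 16))
test_in = rng.normal(size=(5, 16))
test_shifted = rng.normal(loc=4.0, size=(5, 16))
mean, cov_inv, logdet = fit_feature_gaussian(train)
print(epistemic_score(test_in, mean, cov_inv, logdet).mean())
print(epistemic_score(test_shifted, mean, cov_inv, logdet).mean())
```

Whether such a feature-density score truly tracks epistemic uncertainty, rather than merely flagging out-of-distribution inputs, is the question that related paper investigates.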