Knowledge from Uncertainty in Evidential Deep Learning
- URL: http://arxiv.org/abs/2310.12663v1
- Date: Thu, 19 Oct 2023 11:41:52 GMT
- Title: Knowledge from Uncertainty in Evidential Deep Learning
- Authors: Cai Davies, Marc Roig Vilamala, Alun D. Preece, Federico Cerutti,
Lance M. Kaplan, Supriyo Chakraborty
- Abstract summary: This work reveals an evidential signal that emerges from the uncertainty value in Evidential Deep Learning (EDL).
EDL is one example of a class of uncertainty-aware deep learning approaches designed to provide confidence about the current test sample.
- Score: 10.751990848772028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work reveals an evidential signal that emerges from the uncertainty
value in Evidential Deep Learning (EDL). EDL is one example of a class of
uncertainty-aware deep learning approaches designed to provide confidence (or
epistemic uncertainty) about the current test sample. In particular, for
computer vision models and bidirectional-encoder large language models, the
'evidential signal' arising from the Dirichlet strength in EDL can, in some
cases, discriminate between classes; the effect is particularly strong for
large language models. We hypothesise that the KL regularisation term causes
EDL to couple aleatoric and epistemic uncertainty. In this paper, we
empirically investigate the correlations between misclassification and
evaluated uncertainty, and show that EDL's 'evidential signal' is due to
misclassification bias. We critically evaluate EDL against other Dirichlet-based
approaches, namely Generative Evidential Neural Networks (EDL-GEN) and Prior
Networks, and show theoretically and empirically the differences between these
loss functions. We conclude that EDL's coupling of the two uncertainties
arises from these differences, specifically from the use (or absence) of
out-of-distribution samples during training.
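The quantities named in the abstract can be made concrete with a short sketch. The snippet below shows how Sensoy et al.-style EDL typically derives the Dirichlet strength, the vacuity uncertainty (the 'evidential signal'), and the KL regularisation term from raw network outputs; it is a minimal illustration under standard EDL assumptions, not the authors' implementation, and all names are illustrative.

```python
# Minimal sketch of standard EDL quantities (not the paper's code).
import torch
import torch.nn.functional as F

def edl_uncertainty(logits):
    """Map raw network outputs to Dirichlet parameters and vacuity."""
    evidence = F.relu(logits)                    # non-negative evidence per class
    alpha = evidence + 1.0                       # Dirichlet parameters
    strength = alpha.sum(dim=-1, keepdim=True)   # Dirichlet strength S
    prob = alpha / strength                      # expected class probabilities
    vacuity = logits.shape[-1] / strength        # u = K / S: the 'evidential signal'
    return prob, vacuity

def kl_to_uniform_dirichlet(alpha):
    """KL( Dir(alpha) || Dir(1, ..., 1) ): the regularisation term the
    abstract hypothesises couples aleatoric and epistemic uncertainty.
    In standard EDL this is applied to alpha with the true-class
    evidence removed."""
    K = alpha.shape[-1]
    S = alpha.sum(dim=-1, keepdim=True)
    return (torch.lgamma(S.squeeze(-1))
            - torch.lgamma(torch.tensor(float(K)))
            - torch.lgamma(alpha).sum(dim=-1)
            + ((alpha - 1.0) * (torch.digamma(alpha) - torch.digamma(S))).sum(dim=-1))
```

Low Dirichlet strength (little total evidence) yields high vacuity; this is the signal the paper probes for class-discriminative structure.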
Related papers
- Revisiting Essential and Nonessential Settings of Evidential Deep Learning [70.82728812001807]
Evidential Deep Learning (EDL) is an emerging method for uncertainty estimation.
We propose Re-EDL, a simplified yet more effective variant of EDL.
arXiv Detail & Related papers (2024-10-01T04:27:07Z)
- Uncertainty Estimation by Density Aware Evidential Deep Learning [7.328039160501826]
Evidential deep learning (EDL) has shown remarkable success in uncertainty estimation.
We propose a novel method called Density Aware Evidential Deep Learning (DAEDL).
DAEDL integrates the feature-space density of the test example with the output of EDL at prediction time.
It demonstrates state-of-the-art performance across diverse downstream tasks related to uncertainty estimation and classification.
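As a rough reading of the mechanism summarised above, the sketch below damps the evidence of a test example by a feature-space density score before forming the Dirichlet. The paper's actual density model and combination rule may differ, and `log_density` is a hypothetical input (e.g. from a density model fitted on training features).

```python
# Hedged sketch of a density-aware EDL prediction step; the paper's
# exact formulation may differ.
import torch

def density_aware_prediction(evidence, log_density, temperature=1.0):
    """Scale evidence by a feature-space density score at test time.
    `log_density` is assumed to come from a density model fitted on
    training features (hypothetical interface)."""
    weight = torch.sigmoid(log_density / temperature)   # in (0, 1); low off-manifold
    alpha = weight.unsqueeze(-1) * evidence + 1.0       # damp evidence for low-density inputs
    return alpha / alpha.sum(dim=-1, keepdim=True)      # predictive probabilities
```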
arXiv Detail & Related papers (2024-09-13T12:04:45Z)
- Towards Robust Uncertainty-Aware Incomplete Multi-View Classification [11.617211995206018]
We propose the Alternating Progressive Learning Network (APLN) to enhance EDL-based methods in incomplete MVC scenarios.
APLN mitigates bias from corrupted observed data by first applying coarse imputation, followed by mapping the data to a latent space.
We also introduce a conflict-aware Dempster-Shafer combination rule (DSCR) to better handle conflicting evidence.
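The proposed DSCR builds on the classical Dempster-Shafer combination of subjective opinions (belief masses b with vacuity u). The sketch below shows only the classical rule that it modifies; the conflict-aware handling itself is not reproduced here.

```python
# Classical Dempster-Shafer combination of two subjective opinions;
# the paper's conflict-aware variant changes how conflict is handled.
import torch

def dempster_combine(b1, u1, b2, u2):
    """Fuse belief masses b (shape [..., K]) and vacuities u (shape [...])."""
    # conflict: total mass assigned to incompatible class pairs
    conflict = (b1.unsqueeze(-1) * b2.unsqueeze(-2)).sum((-2, -1)) - (b1 * b2).sum(-1)
    scale = 1.0 / (1.0 - conflict).clamp_min(1e-8)
    b = scale.unsqueeze(-1) * (b1 * b2 + b1 * u2.unsqueeze(-1) + b2 * u1.unsqueeze(-1))
    u = scale * u1 * u2
    return b, u
```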
arXiv Detail & Related papers (2024-09-10T07:18:57Z)
- A Comprehensive Survey on Evidential Deep Learning and Its Applications [64.83473301188138]
Evidential Deep Learning (EDL) provides reliable uncertainty estimation with minimal additional computation in a single forward pass.
We first delve into the theoretical foundation of EDL, the subjective logic theory, and discuss its distinctions from other uncertainty estimation frameworks.
We elaborate on its extensive applications across various machine learning paradigms and downstream tasks.
arXiv Detail & Related papers (2024-09-07T05:55:06Z)
- The Epistemic Uncertainty Hole: an issue of Bayesian Neural Networks [0.6906005491572401]
We show that the behaviour of "epistemic uncertainty metrics" as a function of model size and training-set size runs counter to theoretical expectations.
This phenomenon, which we call the "epistemic uncertainty hole", is all the more problematic as it undermines the practical potential of Bayesian Deep Learning (BDL).
arXiv Detail & Related papers (2024-07-02T06:54:46Z)
- Uncertainty Quantification for In-Context Learning of Large Language Models [52.891205009620364]
In-context learning has emerged as a groundbreaking ability of Large Language Models (LLMs).
We propose a novel formulation and corresponding estimation method to quantify both types of uncertainty (aleatoric and epistemic).
The proposed method offers an unsupervised, plug-and-play way to understand the predictions of in-context learning.
arXiv Detail & Related papers (2024-02-15T18:46:24Z)
- Are Uncertainty Quantification Capabilities of Evidential Deep Learning a Mirage? [35.15844215216846]
EDL methods are trained to learn a meta distribution over the predictive distribution by minimizing a specific objective function.
Recent studies identify limitations of existing methods and conclude that their learned uncertainties are unreliable.
We provide a sharper understanding of the behavior of a wide class of EDL methods by unifying various objective functions.
We conclude that even when EDL methods are empirically effective on downstream tasks, this occurs despite their poor uncertainty quantification capabilities.
arXiv Detail & Related papers (2024-02-09T03:23:39Z)
- Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
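The pipeline described above can be sketched as follows; `generate_clarifications` and `llm_predict` are hypothetical stand-ins for the paper's prompting components, not a real API.

```python
# Hedged sketch of input clarification ensembling; the callables are
# hypothetical stand-ins, not the paper's actual components.
from collections import Counter

def clarification_ensemble(question, generate_clarifications, llm_predict, n=5):
    clarified = generate_clarifications(question, n=n)  # n rewrites of the input
    answers = [llm_predict(c) for c in clarified]       # one prediction per rewrite
    answer, count = Counter(answers).most_common(1)[0]
    # disagreement across clarifications reflects uncertainty attributable
    # to ambiguity in the input; agreement under a fixed clarification
    # isolates model-side uncertainty
    return answer, count / len(answers)
```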
arXiv Detail & Related papers (2023-11-15T05:58:35Z)
- Adaptive Negative Evidential Deep Learning for Open-set Semi-supervised Learning [69.81438976273866]
Open-set semi-supervised learning (Open-set SSL) considers a more practical scenario, where unlabeled data and test data contain new categories (outliers) not observed in labeled data (inliers).
We introduce evidential deep learning (EDL) as an outlier detector to quantify different types of uncertainty, and design different uncertainty metrics for self-training and inference.
We propose a novel adaptive negative optimization strategy, making EDL more tailored to the unlabeled dataset containing both inliers and outliers.
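As an illustration of EDL acting as the outlier detector described above, the sketch below gates pseudo-labels by confidence and vacuity; the thresholds and selection rule are illustrative and do not reproduce the paper's adaptive negative optimization.

```python
# Illustrative gating of unlabeled samples with EDL uncertainty; the
# thresholds are placeholders, not the paper's strategy.
import torch

def gate_unlabeled(alpha, conf_t=0.95, vac_t=0.3):
    """Keep confident low-vacuity samples for pseudo-labelling and flag
    high-vacuity samples as candidate outliers."""
    S = alpha.sum(dim=-1, keepdim=True)
    probs = alpha / S                              # expected class probabilities
    vacuity = (alpha.shape[-1] / S).squeeze(-1)    # K / S, high for outliers
    conf, pseudo = probs.max(dim=-1)
    keep = (conf > conf_t) & (vacuity < vac_t)     # confident inliers only
    return pseudo, keep, vacuity
```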
arXiv Detail & Related papers (2023-03-21T09:07:15Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of the evidence carried by each sample, and use it to dynamically reweight the objective's loss terms so that the network focuses on representation learning for uncertain classes.
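The reweighting idea can be sketched with the diagonal of the Dirichlet Fisher information matrix used as per-class weights; this illustrates the general mechanism only and is not the paper's exact objective.

```python
# Sketch of FIM-based reweighting for an EDL-style loss; illustrative,
# not the paper's exact objective.
import torch

def dirichlet_fim_diagonal(alpha):
    """Diagonal of the Dirichlet Fisher information matrix:
    psi'(alpha_k) - psi'(S), where psi' is the trigamma function."""
    S = alpha.sum(dim=-1, keepdim=True)
    return torch.polygamma(1, alpha) - torch.polygamma(1, S)

def fim_weighted_mse(y_onehot, alpha):
    """Reweight per-class squared errors by evidence informativeness."""
    prob = alpha / alpha.sum(dim=-1, keepdim=True)
    w = dirichlet_fim_diagonal(alpha)   # larger for classes with little evidence
    return (w * (y_onehot - prob) ** 2).sum(dim=-1).mean()
```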
arXiv Detail & Related papers (2023-03-03T16:12:59Z)