A Survey on Evidential Deep Learning For Single-Pass Uncertainty
Estimation
- URL: http://arxiv.org/abs/2110.03051v1
- Date: Wed, 6 Oct 2021 20:13:57 GMT
- Title: A Survey on Evidential Deep Learning For Single-Pass Uncertainty
Estimation
- Authors: Dennis Ulmer
- Abstract summary: This survey aims to familiarize the reader with an alternative class of models based on the concept of Evidential Deep Learning: for unfamiliar data, they admit "what they don't know" and fall back onto a prior belief.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Popular approaches for quantifying predictive uncertainty in deep neural
networks often involve a set of weights or models, for instance via ensembling
or Monte Carlo Dropout. These techniques usually incur overhead, since multiple
model instances must be trained, or else fail to produce very diverse predictions. This
survey aims to familiarize the reader with an alternative class of models based
on the concept of Evidential Deep Learning: For unfamiliar data, they admit
"what they don't know" and fall back onto a prior belief. Furthermore, they
allow uncertainty estimation in a single model and forward pass by
parameterizing distributions over distributions. This survey recapitulates
existing works, focusing on the implementation in a classification setting.
Finally, we survey the application of the same paradigm to regression problems.
We also reflect on the strengths and weaknesses of these approaches
compared to existing ones, and present the most central theoretical results
in order to inform future research.
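The core idea of "parameterizing distributions over distributions" can be sketched in the classification setting: instead of a single softmax vector, the network outputs non-negative per-class evidence that parameterizes a Dirichlet distribution over class probabilities. The sketch below is a minimal illustration of that mechanism (following the common subjective-logic formulation), not the survey's reference implementation:

```python
# Minimal sketch of Dirichlet-based evidential uncertainty for classification.
# Per-class "evidence" (e.g. non-negative network outputs) parameterizes a
# Dirichlet distribution; with no evidence, the model falls back onto its
# uniform prior belief and reports maximal uncertainty.

def evidential_prediction(evidence):
    """Map evidence to expected class probabilities and an uncertainty mass.

    evidence: list of non-negative floats, one per class.
    Returns (probs, uncertainty), where uncertainty -> 1 as evidence -> 0.
    """
    k = len(evidence)
    alpha = [e + 1.0 for e in evidence]    # Dirichlet parameters (uniform prior = 1)
    strength = sum(alpha)                  # Dirichlet strength S
    probs = [a / strength for a in alpha]  # expected probabilities E[p_k] = alpha_k / S
    uncertainty = k / strength             # vacuity: K / S
    return probs, uncertainty

# Familiar input: strong evidence for class 0 -> low uncertainty.
probs, u = evidential_prediction([20.0, 1.0, 1.0])
# Unfamiliar input: no evidence -> uniform prediction, uncertainty = 1.
probs_ood, u_ood = evidential_prediction([0.0, 0.0, 0.0])
```

Note that a single forward pass yields both the prediction (the Dirichlet mean) and the uncertainty mass, which is what distinguishes this family from ensembles or MC Dropout.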
Related papers
- Unlearning or Concealment? A Critical Analysis and Evaluation Metrics for Unlearning in Diffusion Models [7.9993879763024065]
We show that the objective functions used for unlearning in the existing methods lead to decoupling of the targeted concepts for the corresponding prompts.
The ineffectiveness of current methods stems primarily from their narrow focus on reducing generation probabilities for specific prompt sets.
We introduce two new evaluation metrics: Concept Retrieval Score (CRS) and Concept Confidence Score (CCS)
arXiv Detail & Related papers (2024-09-09T14:38:31Z)
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
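The basic quantity studied here, predictive churn, can be illustrated with a small sketch: the fraction of examples on which two near-optimal models disagree. This is only the elementary definition, not the paper's Rashomon-set analysis:

```python
# Sketch of predictive churn between two models: the fraction of inputs on
# which their (here hard-label) predictions disagree.

def churn(preds_a, preds_b):
    """Fraction of positions where the two prediction lists differ."""
    assert len(preds_a) == len(preds_b)
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

# Two hypothetical near-optimal models disagreeing on 2 of 4 examples.
c = churn([1, 0, 1, 1], [1, 1, 1, 0])
```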
arXiv Detail & Related papers (2024-02-12T16:15:25Z)
- Identifying Drivers of Predictive Aleatoric Uncertainty [2.5311562666866494]
We present a simple approach to explain predictive aleatoric uncertainties.
We estimate uncertainty as predictive variance by adapting a neural network with a Gaussian output distribution.
We quantify our findings with a nuanced benchmark analysis that includes real-world datasets.
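The underlying mechanism, a network with a Gaussian output distribution trained so that predictive variance captures aleatoric uncertainty, can be sketched via the Gaussian negative log-likelihood; this shows only the loss, not the paper's architecture or explanation method:

```python
import math

# Sketch of the Gaussian-output idea: the network predicts a mean and a
# log-variance per input and is trained with the Gaussian negative
# log-likelihood, so predicted variance learns to match the noise level.

def gaussian_nll(y, mean, log_var):
    """Negative log-likelihood of y under N(mean, exp(log_var))."""
    var = math.exp(log_var)
    return 0.5 * (math.log(2 * math.pi) + log_var + (y - mean) ** 2 / var)

# A calibrated variance yields a lower loss than an overconfident one
# for the same prediction error.
loss_calibrated = gaussian_nll(y=1.0, mean=1.1, log_var=math.log(0.04))
loss_overconfident = gaussian_nll(y=1.0, mean=1.1, log_var=math.log(0.0001))
```

Predicting the log-variance (rather than the variance directly) is a common choice because it keeps the variance positive without constraints.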
arXiv Detail & Related papers (2023-12-12T13:28:53Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Confidence estimation of classification based on the distribution of the neural network output layer [4.529188601556233]
One of the most common problems preventing the application of prediction models in the real world is a lack of generalization.
We propose novel methods that estimate uncertainty of particular predictions generated by a neural network classification model.
The proposed methods infer the confidence of a particular prediction based on the distribution of the logit values corresponding to this prediction.
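As a baseline for what "confidence from the output layer" means, the simplest version derives a score from the softmax of the logits, e.g. the maximum probability or the entropy of the distribution. The paper's own methods are more elaborate; this sketch only illustrates the underlying idea:

```python
import math

# Sketch of logit-based confidence: softmax the logits, then use the max
# probability as a confidence score and the entropy as an uncertainty score.

def softmax(logits):
    m = max(logits)                           # shift for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def prediction_confidence(logits):
    probs = softmax(logits)
    confidence = max(probs)                   # max softmax probability
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return confidence, entropy

# A sharply peaked logit vector vs. a nearly flat one.
conf_sharp, ent_sharp = prediction_confidence([8.0, 0.5, 0.2])
conf_flat, ent_flat = prediction_confidence([1.0, 0.9, 1.1])
```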
arXiv Detail & Related papers (2022-10-14T12:32:50Z)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
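For the ensemble-based family, the uncertainty signal is disagreement across independently trained members; a minimal sketch (with members stubbed as plain functions, since no real training is shown here):

```python
# Sketch of ensemble-based uncertainty: several models predict on the same
# input; the mean is the prediction and the variance across members is the
# uncertainty estimate. Members are hypothetical stub functions here.

def ensemble_predict(models, x):
    preds = [m(x) for m in models]
    mean = sum(preds) / len(preds)
    var = sum((p - mean) ** 2 for p in preds) / len(preds)
    return mean, var

# Three hand-made members that mostly agree on this input.
members = [lambda x: 2 * x, lambda x: 2 * x + 0.1, lambda x: 2 * x - 0.1]
mean_id, var_id = ensemble_predict(members, 1.0)
```

This is exactly the overhead the survey's abstract points to: every member must be trained and evaluated, whereas evidential models obtain an uncertainty estimate from a single forward pass.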
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- A General Framework for Distributed Inference with Uncertain Models [14.8884251609335]
We study the problem of distributed classification with a network of heterogeneous agents.
We build upon the concept of uncertain models to incorporate the agents' uncertainty in the likelihoods.
arXiv Detail & Related papers (2020-11-20T22:17:12Z)
- Uncertainty-Aware (UNA) Bases for Deep Bayesian Regression Using Multi-Headed Auxiliary Networks [23.100727871427367]
We show that traditional training procedures for Neural Linear Models drastically underestimate uncertainty on out-of-distribution inputs.
We propose a novel training framework that captures useful predictive uncertainties for downstream tasks.
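A Neural Linear Model performs Bayesian linear regression on learned features; its predictive variance should grow for inputs whose features lie far from the training data, which is what the paper says naive training can fail to deliver. A sketch with a single hand-made scalar feature (the conjugate 1-D case, not the paper's framework):

```python
# Sketch of Neural Linear Model uncertainty: Bayesian linear regression on a
# (here 1-D, hand-made) learned feature. Predictive variance grows with the
# feature's distance from the training features.

def bayes_linreg_predict(phi_train, y_train, phi_test,
                         noise_var=0.1, prior_var=1.0):
    """Conjugate Bayesian linear regression with one scalar feature."""
    precision = 1.0 / prior_var + sum(p * p for p in phi_train) / noise_var
    post_var = 1.0 / precision            # posterior variance of the weight
    post_mean = post_var * sum(p * y for p, y in zip(phi_train, y_train)) / noise_var
    pred_mean = post_mean * phi_test
    pred_var = noise_var + phi_test ** 2 * post_var   # noise + weight uncertainty
    return pred_mean, pred_var

# In-distribution feature vs. an out-of-distribution one (larger magnitude).
_, var_id = bayes_linreg_predict([1.0, 0.9, 1.1], [2.0, 1.8, 2.2], 1.0)
_, var_ood = bayes_linreg_predict([1.0, 0.9, 1.1], [2.0, 1.8, 2.2], 5.0)
```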
arXiv Detail & Related papers (2020-06-21T02:46:05Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
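The prototype-update step described above can be sketched as a confidence-weighted average of the support prototype and the unlabeled query embeddings. The confidences are hand-set here; meta-learning them, as the paper proposes, is not shown:

```python
# Sketch of confidence-weighted prototype updating for transductive few-shot
# inference: refine the class prototype with unlabeled query embeddings,
# each weighted by a confidence score (hand-set in this illustration).

def update_prototype(prototype, queries, confidences):
    """Weighted average of the support prototype and confident queries.

    prototype: list of floats (class prototype embedding).
    queries: list of embeddings (lists of floats) for unlabeled queries.
    confidences: one weight per query; the prototype itself has weight 1.
    """
    total_w = 1.0 + sum(confidences)
    updated = []
    for d in range(len(prototype)):
        s = prototype[d] + sum(w * q[d] for w, q in zip(confidences, queries))
        updated.append(s / total_w)
    return updated

# A 2-D prototype refined by two queries with different confidences.
proto = update_prototype([1.0, 0.0], [[0.8, 0.2], [0.6, 0.0]], [0.9, 0.5])
```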
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.