Evidential Uncertainty Quantification: A Variance-Based Perspective
- URL: http://arxiv.org/abs/2311.11367v1
- Date: Sun, 19 Nov 2023 16:33:42 GMT
- Title: Evidential Uncertainty Quantification: A Variance-Based Perspective
- Authors: Ruxiao Duan, Brian Caffo, Harrison X. Bai, Haris I. Sair, Craig Jones
- Abstract summary: We adapt the variance-based approach from regression to classification, quantifying classification uncertainty at the class level.
Experiments on cross-domain datasets illustrate that the variance-based approach achieves accuracy similar to that of the entropy-based one in active domain adaptation, while additionally providing class-wise uncertainties and between-class correlations.
- Score: 0.43536523356694407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainty quantification of deep neural networks has become an active field
of research and plays a crucial role in various downstream tasks such as active
learning. Recent advances in evidential deep learning shed light on the direct
quantification of aleatoric and epistemic uncertainties with a single forward
pass of the model. Most traditional approaches adopt an entropy-based method to
derive evidential uncertainty in classification, quantifying uncertainty at the
sample level. However, the variance-based method that has been widely applied
in regression problems is seldom used in the classification setting. In this
work, we adapt the variance-based approach from regression to classification,
quantifying classification uncertainty at the class level. The variance
decomposition technique in regression is extended to class covariance
decomposition in classification based on the law of total covariance, and the
class correlation is also derived from the covariance. Experiments on
cross-domain datasets are conducted to illustrate that the variance-based
approach not only results in similar accuracy as the entropy-based one in
active domain adaptation but also brings information about class-wise
uncertainties as well as between-class correlations. The code is available at
https://github.com/KerryDRX/EvidentialADA. This alternative means of evidential
uncertainty quantification will give researchers more options when class
uncertainties and correlations are important in their applications.
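As a concrete illustration of the class-level decomposition described in the abstract, the sketch below applies the law of total covariance, Cov[y] = E_p[Cov(y | p)] + Cov_p[E(y | p)], to the Dirichlet output of an evidential classifier. This is a minimal sketch under stated assumptions, not code taken from the linked repository: the function name, interface, and the evidence-to-alpha convention are hypothetical.

import numpy as np

def dirichlet_class_uncertainty(alpha):
    """Decompose the predictive class covariance of a Dirichlet(alpha) evidential
    classifier into aleatoric and epistemic parts (hypothetical helper).

    alpha : array of shape (K,), Dirichlet concentration parameters
            (e.g., evidence + 1 in common evidential deep learning setups).
    """
    alpha = np.asarray(alpha, dtype=float)
    S = alpha.sum()                      # Dirichlet strength
    p_bar = alpha / S                    # expected class probabilities

    # Total covariance of the one-hot label y: Cov[y] = diag(p_bar) - p_bar p_bar^T
    total = np.diag(p_bar) - np.outer(p_bar, p_bar)

    # Law of total covariance:
    #   Cov[y] = E_p[Cov(y | p)]  (aleatoric)  +  Cov_p[E(y | p)]  (epistemic)
    # For a Dirichlet, Cov_p[p] = (diag(p_bar) - p_bar p_bar^T) / (S + 1).
    epistemic = total / (S + 1.0)
    aleatoric = total - epistemic        # equals total * S / (S + 1)

    # Class-wise variances are the diagonal entries; class correlations follow
    # directly from the covariance matrix.
    std = np.sqrt(np.diag(total))
    correlation = total / np.outer(std, std)
    return {
        "aleatoric": aleatoric,
        "epistemic": epistemic,
        "total": total,
        "class_variance": np.diag(total),
        "class_correlation": correlation,
    }

# Example: three classes with evidence [4, 1, 0], so alpha = evidence + 1.
out = dirichlet_class_uncertainty([5.0, 2.0, 1.0])
print(out["class_variance"])     # per-class uncertainty
print(out["class_correlation"])  # between-class correlations

In an active domain adaptation loop, target samples with the largest epistemic class variances could then be queried for labeling, analogous to how entropy-based acquisition ranks samples.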
Related papers
- A method for classification of data with uncertainty using hypothesis testing [0.0]
It is necessary to quantify uncertainty and adopt decision-making approaches that take it into account.
We propose a new decision-making approach using two types of hypothesis testing.
This method is capable of detecting ambiguous data that belong to the overlapping regions of two class distributions.
arXiv Detail & Related papers (2025-02-12T17:14:07Z)
- Instance-wise Uncertainty for Class Imbalance in Semantic Segmentation [4.147659576493158]
State-of-the-art methods increasingly rely on deep learning models, which are known to estimate uncertainty incorrectly and to be overconfident in their predictions.
This is particularly problematic in semantic segmentation due to inherent class imbalance.
A novel training methodology specifically designed for semantic segmentation is presented.
arXiv Detail & Related papers (2024-07-17T14:38:32Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed procedure accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- Benchmarking common uncertainty estimation methods with histopathological images under domain shift and label noise [62.997667081978825]
In high-risk environments, deep learning models need to be able to judge their uncertainty and reject inputs when there is a significant chance of misclassification.
We conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole Slide Images.
We observe that ensembles of methods generally lead to better uncertainty estimates as well as an increased robustness towards domain shifts and label noise.
arXiv Detail & Related papers (2023-01-03T11:34:36Z)
- Self-Certifying Classification by Linearized Deep Assignment [65.0100925582087]
We propose a novel class of deep predictors for classifying metric data on graphs within the PAC-Bayes risk certification paradigm.
Building on the recent PAC-Bayes literature and data-dependent priors, this approach enables learning posterior distributions on the hypothesis space.
arXiv Detail & Related papers (2022-01-26T19:59:14Z)
- Intra-Class Uncertainty Loss Function for Classification [6.523198497365588]
Intra-class uncertainty/variability is typically not considered, especially for datasets containing unbalanced classes.
In our framework, the features extracted by deep networks of each class are characterized by independent Gaussian distribution.
The proposed approach shows improved classification performance, through learning a better class representation.
arXiv Detail & Related papers (2021-04-12T09:02:41Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first modern precise analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
- Learning from Aggregate Observations [82.44304647051243]
We study the problem of learning from aggregate observations where supervision signals are given to sets of instances.
We present a general probabilistic framework that accommodates a variety of aggregate observations.
Simple maximum likelihood solutions can be applied to various differentiable models.
arXiv Detail & Related papers (2020-04-14T06:18:50Z)