Quantifying Epistemic Uncertainty in Deep Learning
- URL: http://arxiv.org/abs/2110.12122v4
- Date: Sun, 18 Jun 2023 08:01:59 GMT
- Title: Quantifying Epistemic Uncertainty in Deep Learning
- Authors: Ziyi Huang, Henry Lam and Haofeng Zhang
- Abstract summary: Uncertainty quantification is at the core of the reliability and robustness of machine learning.
We provide a theoretical framework to dissect the uncertainty, especially the epistemic component, in deep learning.
We propose two approaches to estimate these uncertainties, one based on influence functions and one on batching.
- Score: 15.494774321257939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainty quantification is at the core of the reliability and robustness
of machine learning. In this paper, we provide a theoretical framework to
dissect the uncertainty, especially the \textit{epistemic} component, in deep
learning into \textit{procedural variability} (from the training procedure) and
\textit{data variability} (from the training data), which, to the best of our
knowledge, is the first such attempt in the literature. We then propose two
approaches to estimate these uncertainties, one based on influence functions and one on
batching. We demonstrate how our approaches overcome the computational
difficulties in applying classical statistical methods. Experimental
evaluations on multiple problem settings corroborate our theory and illustrate
how our framework and estimation can provide direct guidance on modeling and
data collection efforts.
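The batching idea from the abstract can be sketched as follows: split the training set into disjoint batches, fit one model per batch, and read the spread of their predictions as an estimate of data variability. This is a minimal illustration with a toy linear model; the names `batching_uncertainty`, `fit`, and `predict` are hypothetical and do not reflect the authors' implementation.

```python
import numpy as np

def batching_uncertainty(X, y, fit, predict, x_new, n_batches=5, seed=0):
    """Batching heuristic: fit one model per disjoint data batch and
    use the spread of their predictions at x_new as an estimate of
    data variability (a sketch, not the paper's exact estimator)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    preds = []
    for part in np.array_split(idx, n_batches):
        model = fit(X[part], y[part])
        preds.append(predict(model, x_new))
    preds = np.array(preds)
    # Sample mean and sample variance across the batch-trained models.
    return preds.mean(axis=0), preds.var(axis=0, ddof=1)

# Toy model family: ordinary least squares (illustration only).
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda w, x: x @ w
```

Repeating the procedure with different random seeds (rather than different data batches) would analogously probe the procedural-variability component.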
Related papers
- Navigating Uncertainties in Machine Learning for Structural Dynamics: A Comprehensive Review of Probabilistic and Non-Probabilistic Approaches in Forward and Inverse Problems [0.0]
This paper presents a comprehensive review on navigating uncertainties in machine learning (ML)
It categorizes uncertainty-aware approaches into probabilistic and non-probabilistic methods.
The review aims to assist researchers and practitioners in making informed decisions when utilizing ML techniques to address uncertainties in structural dynamic problems.
arXiv Detail & Related papers (2024-08-16T09:43:01Z) - Learning Confidence Bounds for Classification with Imbalanced Data [42.690254618937196]
We propose a novel framework that leverages learning theory and concentration inequalities to overcome the shortcomings of traditional solutions.
Our method can effectively adapt to the varying degrees of imbalance across different classes, resulting in more robust and reliable classification outcomes.
arXiv Detail & Related papers (2024-07-16T16:02:27Z) - Self-Improving Interference Management Based on Deep Learning With
Uncertainty Quantification [10.403513606082067]
This paper presents a self-improving interference management framework tailored for wireless communications.
Our approach addresses the computational challenges inherent in traditional optimization-based algorithms.
A breakthrough of our framework is its acknowledgment of the limitations inherent in data-driven models.
arXiv Detail & Related papers (2024-01-24T03:28:48Z) - One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z) - Theoretical Foundations of Adversarially Robust Learning [7.589246500826111]
Current machine learning systems have been shown to be brittle against adversarial examples.
In this thesis, we explore what robustness properties we can hope to guarantee against adversarial examples.
arXiv Detail & Related papers (2023-06-13T12:20:55Z) - Uncertainty Estimation by Fisher Information-based Evidential Deep
Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
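As a rough illustration of the reweighting idea, the diagonal of the Dirichlet Fisher information matrix, $\psi'(\alpha_k) - \psi'(\alpha_0)$, can be used as a per-class informativeness weight: classes with little evidence (small $\alpha_k$) receive larger weights. This is a simplified sketch, not the authors' exact objective, and `fim_class_weights` is a hypothetical helper.

```python
import numpy as np

def trigamma(x):
    """psi'(x) via the recurrence psi'(x) = psi'(x+1) + 1/x^2,
    then an asymptotic expansion for x >= 6 (assumes x > 0)."""
    x = np.asarray(x, dtype=float).copy()
    acc = np.zeros_like(x)
    while np.any(x < 6.0):
        small = x < 6.0
        acc = np.where(small, acc + 1.0 / x**2, acc)
        x = np.where(small, x + 1.0, x)
    inv = 1.0 / x
    inv2 = inv * inv
    # psi'(x) ~ 1/x + 1/(2x^2) + 1/(6x^3) - 1/(30x^5) + 1/(42x^7)
    return acc + inv + 0.5 * inv2 + inv * inv2 * (1.0 / 6 - inv2 * (1.0 / 30 - inv2 / 42))

def fim_class_weights(alpha):
    """Diagonal Dirichlet FIM entries psi'(alpha_k) - psi'(alpha_0),
    interpreted here as per-class informativeness weights.
    alpha: (batch, classes) Dirichlet concentration parameters."""
    alpha0 = alpha.sum(axis=1, keepdims=True)
    return trigamma(alpha) - trigamma(alpha0)
```

Because the trigamma function is decreasing on the positives, low-evidence classes get strictly larger weights, which matches the stated goal of focusing learning on uncertain classes.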
arXiv Detail & Related papers (2023-03-03T16:12:59Z) - Elucidating Noisy Data via Uncertainty-Aware Robust Learning [9.711326718689495]
Our proposed method can learn the clean target distribution from a dirty dataset.
We leverage a mixture-of-experts model that can distinguish two different types of predictive uncertainty.
We present a novel validation scheme for evaluating the performance of the corruption pattern estimation.
arXiv Detail & Related papers (2021-11-02T14:44:50Z) - Adversarial Robustness with Semi-Infinite Constrained Learning [177.42714838799924]
The sensitivity of deep learning to input perturbations has raised serious questions about its use in safety-critical domains.
We propose a hybrid Langevin Monte Carlo training approach to mitigate this issue.
We show that our approach can mitigate the trade-off between state-of-the-art performance and robustness.
arXiv Detail & Related papers (2021-10-29T13:30:42Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual
Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Accurate and Robust Feature Importance Estimation under Distribution
Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z) - Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep
Learning [70.72363097550483]
In this study, we focus on in-domain uncertainty for image classification.
To provide more insight in this study, we introduce the deep ensemble equivalent score (DEE)
arXiv Detail & Related papers (2020-02-15T23:28:19Z)
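The deep ensemble equivalent score mentioned in the last entry can be sketched as an interpolation problem: given a curve of test scores for deep ensembles of increasing size, find the (possibly fractional) ensemble size whose score matches that of some other uncertainty method. A minimal sketch, assuming scores improve monotonically with ensemble size; `deep_ensemble_equivalent` is a hypothetical name, not the paper's API.

```python
import numpy as np

def deep_ensemble_equivalent(ens_sizes, ens_scores, method_score):
    """Interpolate the ensemble size whose test score (e.g. test
    log-likelihood) matches method_score. Assumes ens_scores is
    sorted in increasing order alongside ens_sizes."""
    return float(np.interp(method_score, ens_scores, ens_sizes))
```

For example, a method scoring halfway between the 2-member and 5-member ensembles would receive a DEE of 3.5 under linear interpolation.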
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.