Decomposing Representations for Deterministic Uncertainty Estimation
- URL: http://arxiv.org/abs/2112.00856v1
- Date: Wed, 1 Dec 2021 22:12:01 GMT
- Title: Decomposing Representations for Deterministic Uncertainty Estimation
- Authors: Haiwen Huang, Joost van Amersfoort, Yarin Gal
- Abstract summary: We show that current feature-density-based uncertainty estimators do not perform consistently well across different OoD detection settings.
We propose to decompose the learned representations and to integrate the uncertainties estimated on them separately.
- Score: 34.11413246048065
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uncertainty estimation is a key component in any deployed machine learning
system. One way to evaluate uncertainty estimation is using
"out-of-distribution" (OoD) detection, that is, distinguishing between the
training data distribution and an unseen different data distribution using
uncertainty. In this work, we show that current feature-density-based
uncertainty estimators do not perform consistently well across different OoD
detection settings. To address this, we propose to decompose the learned
representations and to integrate the uncertainties estimated on them separately.
Through experiments, we demonstrate that this approach greatly improves both the
performance and the interpretability of uncertainty estimation.
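
The abstract states only that the learned representations are decomposed and that the uncertainties estimated on the components are integrated; it does not spell out the decomposition or the integration rule. Below is a minimal sketch of that general idea under illustrative assumptions of our own choosing (a PCA split into two subspaces, one Gaussian density per subspace, and a sum of log-densities as the combined OoD score); it is not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): split feature vectors into
# two PCA subspaces, fit a Gaussian density on each, and sum the per-subspace
# log-densities into a single OoD score (higher = more in-distribution).
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.decomposition import PCA


def fit_decomposed_density(train_feats, n_top=32, reg=1e-3):
    """Fit one Gaussian per feature subspace (top PCA components vs. the rest)."""
    pca = PCA().fit(train_feats)
    proj = pca.transform(train_feats)
    subspaces = [proj[:, :n_top], proj[:, n_top:]]
    densities = []
    for z in subspaces:
        cov = np.cov(z, rowvar=False) + reg * np.eye(z.shape[1])
        densities.append(multivariate_normal(mean=z.mean(axis=0), cov=cov))
    return pca, densities, n_top


def ood_score(test_feats, pca, densities, n_top):
    """Integrate per-subspace uncertainties by summing their log-densities."""
    proj = pca.transform(test_feats)
    parts = [proj[:, :n_top], proj[:, n_top:]]
    return sum(d.logpdf(z) for d, z in zip(densities, parts))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(size=(1000, 64))        # stand-in for training features
    ood = rng.normal(loc=3.0, size=(100, 64))  # stand-in for OoD features
    pca, dens, k = fit_decomposed_density(train)
    print(ood_score(train[:100], pca, dens, k).mean(), ood_score(ood, pca, dens, k).mean())
```

Any other density model or combination rule could be substituted for the Gaussians and the log-density sum used here.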
Related papers
- Probabilistic Contrastive Learning with Explicit Concentration on the Hypersphere [3.572499139455308]
This paper introduces a new perspective on incorporating uncertainty into contrastive learning by embedding representations within a spherical space.
We leverage the concentration parameter, kappa, as a direct, interpretable measure to quantify uncertainty explicitly (see the sketch after this list).
arXiv Detail & Related papers (2024-05-26T07:08:13Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Learning the Distribution of Errors in Stereo Matching for Joint Disparity and Uncertainty Estimation [8.057006406834466]
We present a new loss function for joint disparity and uncertainty estimation in deep stereo matching.
We experimentally assess the effectiveness of our approach and observe significant improvements in both disparity and uncertainty prediction on large datasets.
arXiv Detail & Related papers (2023-03-31T21:58:19Z)
- How certain are your uncertainties? [0.3655021726150368]
Measures of uncertainty in the output of a deep learning method are useful in several ways.
This work investigates the stability of these uncertainty measurements, in terms of both magnitude and spatial pattern.
arXiv Detail & Related papers (2022-03-01T05:25:02Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty (see the toy sketch after this list).
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
- The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals [51.71066839337174]
Existing methods can quantify the error in the target estimation, but they tend to underestimate it.
We propose a new separable formulation for the estimation of a signal and of its uncertainty, avoiding the effect of overfitting.
We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
arXiv Detail & Related papers (2020-11-03T12:11:27Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by current methods does not correlate strongly with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
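
For the "Probabilistic Contrastive Learning" entry above: that paper predicts the von Mises-Fisher concentration kappa per input as an explicit uncertainty measure. The sketch below does not reproduce that model; it only illustrates how kappa relates to spread on the hypersphere, using the standard Banerjee et al. approximation to estimate kappa from a batch of L2-normalized embeddings (small kappa = diffuse = high uncertainty). All names are illustrative.

```python
# Illustrative only: estimate a von Mises-Fisher concentration kappa from a
# set of L2-normalized embeddings (Banerjee et al. approximation). A small
# kappa means the embeddings are spread out on the hypersphere (high
# uncertainty); a large kappa means they are tightly concentrated.
import numpy as np


def estimate_kappa(embeddings):
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    d = x.shape[1]
    r_bar = np.linalg.norm(x.mean(axis=0))  # mean resultant length in [0, 1)
    return r_bar * (d - r_bar ** 2) / (1.0 - r_bar ** 2)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tight = rng.normal(loc=(5.0, 0.0, 0.0), scale=0.1, size=(500, 3))
    diffuse = rng.normal(size=(500, 3))
    print("tight cluster kappa:", estimate_kappa(tight))    # large -> low uncertainty
    print("diffuse cloud kappa:", estimate_kappa(diffuse))  # small -> high uncertainty
```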
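For the DEUP entry above: the summary describes epistemic uncertainty as the predicted out-of-sample error minus an aleatoric estimate. The toy sketch below (not the DEUP implementation) fits a main regressor, fits a second regressor to predict the main model's squared error on held-out data, and subtracts an assumed known aleatoric noise variance; the models, data, and names are illustrative.

```python
# Toy illustration of the error-minus-aleatoric idea behind DEUP (not the
# authors' code): 1) fit a main regressor, 2) fit a second regressor to
# predict the main model's squared error on held-out data, 3) epistemic
# estimate = predicted error minus an assumed aleatoric noise variance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
NOISE_VAR = 0.04  # assumed known aleatoric variance (illustrative)


def make_data(n):
    # Most samples land in [0, 2]; the region [2, 4] is sparsely covered,
    # so the main model should make larger (epistemic) errors there.
    n_dense = int(0.9 * n)
    x = np.concatenate([rng.uniform(0, 2, n_dense), rng.uniform(2, 4, n - n_dense)])
    y = np.sin(3 * x) + rng.normal(scale=NOISE_VAR ** 0.5, size=n)
    return x.reshape(-1, 1), y


x_train, y_train = make_data(2000)
x_val, y_val = make_data(2000)

main = RandomForestRegressor(random_state=0).fit(x_train, y_train)
sq_err = (y_val - main.predict(x_val)) ** 2                 # observed total error
err_model = RandomForestRegressor(random_state=0).fit(x_val, sq_err)


def epistemic(x):
    """Predicted total error minus the aleatoric part, floored at zero."""
    return np.maximum(err_model.predict(x) - NOISE_VAR, 0.0)


print("epistemic, densely covered region :", epistemic(np.array([[1.0]])))
print("epistemic, sparsely covered region:", epistemic(np.array([[3.5]])))
```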
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.