Discretization-Induced Dirichlet Posterior for Robust Uncertainty
Quantification on Regression
- URL: http://arxiv.org/abs/2308.09065v2
- Date: Wed, 13 Dec 2023 18:01:23 GMT
- Title: Discretization-Induced Dirichlet Posterior for Robust Uncertainty
Quantification on Regression
- Authors: Xuanlong Yu, Gianni Franchi, Jindong Gu, Emanuel Aldea
- Abstract summary: Uncertainty quantification is critical for deploying deep neural networks (DNNs) in real-world applications.
For vision regression tasks, current AuxUE designs mainly address aleatoric uncertainty estimation.
We propose a generalized AuxUE scheme for more robust uncertainty quantification on regression tasks.
- Score: 17.49026509916207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainty quantification is critical for deploying deep neural networks
(DNNs) in real-world applications. An Auxiliary Uncertainty Estimator (AuxUE)
is one of the most effective means to estimate the uncertainty of the main task
prediction without modifying the main task model. To be considered robust, an
AuxUE must maintain its performance and trigger higher uncertainties when
encountering Out-of-Distribution (OOD) inputs, i.e., it must provide robust
aleatoric and epistemic uncertainty. However, for vision regression tasks,
current AuxUE designs mainly address aleatoric uncertainty estimation, and
AuxUE robustness has not been explored. In this
work, we propose a generalized AuxUE scheme for more robust uncertainty
quantification on regression tasks. Concretely, to achieve more robust
aleatoric uncertainty estimation, different distributional assumptions are
considered for heteroscedastic noise, and the Laplace distribution is finally
chosen to approximate the prediction error. For epistemic uncertainty, we
propose a novel solution named Discretization-Induced Dirichlet pOsterior
(DIDO), which models the Dirichlet posterior on the discretized prediction
error. Extensive experiments on age estimation, monocular depth estimation, and
super-resolution tasks show that our proposed method provides robust
uncertainty estimates in the face of noisy inputs and scales to both
image-level and pixel-wise tasks. Code is available at
https://github.com/ENSTA-U2IS/DIDO .
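The two ideas named in the abstract can be sketched in a few lines. Below is a minimal, hypothetical illustration (not the authors' implementation; see the linked repository for that): a Laplace negative log-likelihood for aleatoric uncertainty, and a Dirichlet-posterior vacuity score over discretized prediction errors in the spirit of DIDO. The function names, the bin count, and the `K / alpha_0` vacuity measure are assumptions for illustration.

```python
import numpy as np

# --- Aleatoric: Laplace negative log-likelihood (illustrative sketch) ---
# Modeling the prediction error with a Laplace distribution, an AuxUE that
# predicts a scale b(x) > 0 can be trained to minimize this NLL.
def laplace_nll(error, scale):
    """Mean negative log-likelihood of errors under Laplace(0, scale)."""
    scale = np.maximum(scale, 1e-6)  # numerical safety for tiny scales
    return np.mean(np.log(2.0 * scale) + np.abs(error) / scale)

# --- Epistemic: Dirichlet posterior on discretized errors (sketch) ---
# DIDO-style idea: discretize the prediction error into K bins, let the
# AuxUE output non-negative per-bin evidence e_k, and form Dirichlet
# concentrations alpha_k = e_k + 1. A flat Dirichlet (little total
# evidence) signals high epistemic uncertainty, e.g. on an OOD input.
def dirichlet_epistemic(evidence):
    """Epistemic uncertainty as vacuity K / alpha_0 of Dirichlet(alpha)."""
    alpha = evidence + 1.0           # alpha_k = e_k + 1
    alpha0 = alpha.sum(axis=-1)      # Dirichlet strength
    k = evidence.shape[-1]           # number of error bins
    return k / alpha0                # in (0, 1]; 1.0 means no evidence

errors = np.array([0.1, -0.2, 0.05])
print(laplace_nll(errors, np.array([0.2, 0.2, 0.2])))   # scalar loss

confident = np.array([[0.0, 9.0, 0.0]])   # evidence massed in one bin
ood_like = np.array([[0.0, 0.0, 0.0]])    # no evidence at all
print(dirichlet_epistemic(confident))     # low epistemic uncertainty
print(dirichlet_epistemic(ood_like))      # maximal vacuity, 1.0
```

The vacuity score rises monotonically as total evidence falls, which is the qualitative behavior a robust epistemic estimator should show on OOD inputs.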
Related papers
- Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
arXiv Detail & Related papers (2023-11-15T05:58:35Z)
- Adaptive Uncertainty Estimation via High-Dimensional Testing on Latent Representations [28.875819909902244]
Uncertainty estimation aims to evaluate the confidence of a trained deep neural network.
Existing uncertainty estimation approaches rely on low-dimensional distributional assumptions.
We propose a new framework using data-adaptive high-dimensional hypothesis testing for uncertainty estimation.
arXiv Detail & Related papers (2023-10-25T12:22:18Z)
- Density Uncertainty Layers for Reliable Uncertainty Estimation [20.867449366086237]
Assessing the predictive uncertainty of deep neural networks is crucial for safety-related applications of deep learning.
We propose a novel criterion for reliable predictive uncertainty: a model's predictive variance should be grounded in the empirical density of the input.
Compared to existing approaches, density uncertainty layers provide more reliable uncertainty estimates and robust out-of-distribution detection performance.
arXiv Detail & Related papers (2023-06-21T18:12:58Z)
- Integrating Uncertainty into Neural Network-based Speech Enhancement [27.868722093985006]
Supervised masking approaches in the time-frequency domain aim to employ deep neural networks to estimate a multiplicative mask to extract clean speech.
This leads to a single estimate for each input without any guarantees or measures of reliability.
We study the benefits of modeling uncertainty in clean speech estimation.
arXiv Detail & Related papers (2023-05-15T15:55:12Z)
- Toward Reliable Human Pose Forecasting with Uncertainty [51.628234388046195]
We develop an open-source library for human pose forecasting, including multiple models, supporting several datasets.
We model two types of uncertainty in the problem to improve performance and convey better trust.
arXiv Detail & Related papers (2023-04-13T17:56:08Z)
- On Attacking Out-Domain Uncertainty Estimation in Deep Neural Networks [11.929914721626849]
We show that state-of-the-art uncertainty estimation algorithms could fail catastrophically under our proposed adversarial attack.
In particular, we aim at attacking the out-domain uncertainty estimation.
arXiv Detail & Related papers (2022-10-03T23:33:38Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative model-based methods, and explain their pros and cons when used in fully/semi/weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is part of out-of-sample prediction error due to the lack of knowledge of the learner.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
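DEUP's decomposition reduces to a subtraction: if a model learns to predict its total generalization error and an aleatoric estimate is available, the residual approximates epistemic uncertainty. A minimal sketch with hypothetical inputs (both predictors are assumed given, not learned here):

```python
import numpy as np

def deup_epistemic(total_error_pred, aleatoric_pred):
    """Epistemic estimate = predicted generalization error minus the
    aleatoric part, clipped at zero since uncertainty is non-negative."""
    return np.maximum(total_error_pred - aleatoric_pred, 0.0)

# Two inputs: one with excess error beyond the noise floor (epistemic
# uncertainty remains), one fully explained by aleatoric noise.
print(deup_epistemic(np.array([0.5, 0.3]), np.array([0.2, 0.4])))
```

The clipping step encodes the assumption that predicted total error below the aleatoric floor carries no epistemic signal.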
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.