On Attacking Out-Domain Uncertainty Estimation in Deep Neural Networks
- URL: http://arxiv.org/abs/2210.02191v1
- Date: Mon, 3 Oct 2022 23:33:38 GMT
- Title: On Attacking Out-Domain Uncertainty Estimation in Deep Neural Networks
- Authors: Huimin Zeng, Zhenrui Yue, Yang Zhang, Ziyi Kou, Lanyu Shang, Dong Wang
- Abstract summary: We show that state-of-the-art uncertainty estimation algorithms could fail catastrophically under our proposed adversarial attack.
In particular, we aim at attacking the out-domain uncertainty estimation.
- Score: 11.929914721626849
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many applications with real-world consequences, it is crucial to develop
reliable uncertainty estimation for the predictions made by the AI decision
systems. Toward this goal, various deep neural network (DNN) based
uncertainty estimation algorithms have been proposed.
However, the robustness of the uncertainty returned by these algorithms has not
been systematically explored. In this work, to raise the awareness of the
research community on robust uncertainty estimation, we show that
state-of-the-art uncertainty estimation algorithms could fail catastrophically
under our proposed adversarial attack despite their impressive performance on
uncertainty estimation. In particular, we aim at attacking the out-domain
uncertainty estimation: under our attack, the uncertainty model is fooled into
making high-confidence predictions for out-domain data that it would
originally have rejected. Extensive experimental results on various
benchmark image datasets show that the uncertainty estimated by
state-of-the-art methods could be easily corrupted by our attack.
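To make the failure mode concrete, the sketch below is a minimal toy, not the paper's actual attack: it perturbs an out-domain input by gradient descent on the predictive entropy of a linear softmax model, driving the model toward a high-confidence prediction on data it should reject. A real attack would additionally project the perturbation back into an epsilon-ball around the input (as in PGD); that projection is omitted here for brevity, and the toy model and all names are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                      # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

def entropy_attack(W, x, step=0.5, iters=300):
    """Gradient descent on predictive entropy w.r.t. the input x.

    For logits z = W @ x, the entropy gradient is
    dH/dz_k = -p_k * (log p_k + H), chained through z = W x.
    NOTE: a real attack would project x back into an eps-ball each step.
    """
    x = x.copy()
    for _ in range(iters):
        p = softmax(W @ x)
        h = entropy(p)
        dh_dz = -p * (np.log(p + 1e-12) + h)
        x -= step * (W.T @ dh_dz)        # descend the entropy surface
    return x

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))              # toy 3-class linear "model"
x_ood = 0.01 * rng.normal(size=5)        # OOD input: near-uniform softmax
conf_before = softmax(W @ x_ood).max()   # low confidence -> would be rejected
x_adv = entropy_attack(W, x_ood)
conf_after = softmax(W @ x_adv).max()    # high confidence -> slips past the rejection
```

The same entropy-minimization objective applies to any differentiable uncertainty score; the linear model merely keeps the gradient closed-form.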
Related papers
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Density Uncertainty Layers for Reliable Uncertainty Estimation [20.867449366086237]
Assessing the predictive uncertainty of deep neural networks is crucial for safety-related applications of deep learning.
We propose a novel criterion for reliable predictive uncertainty: a model's predictive variance should be grounded in the empirical density of the input.
Compared to existing approaches, density uncertainty layers provide more reliable uncertainty estimates and robust out-of-distribution detection performance.
arXiv Detail & Related papers (2023-06-21T18:12:58Z)
- Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning [38.34033824352067]
Predictions made by deep learning models are prone to data perturbations, adversarial attacks, and out-of-distribution inputs.
We propose to develop explainable and actionable Bayesian deep learning methods to perform accurate uncertainty quantification.
arXiv Detail & Related papers (2023-04-10T19:14:15Z)
- Fast Uncertainty Estimates in Deep Learning Interatomic Potentials [0.0]
We propose a method to estimate the predictive uncertainty based on a single neural network without the need for an ensemble.
We demonstrate that the quality of the uncertainty estimates matches those obtained from deep ensembles.
arXiv Detail & Related papers (2022-11-17T20:13:39Z)
- Comparison of Uncertainty Quantification with Deep Learning in Time Series Regression [7.6146285961466]
In this paper, different uncertainty estimation methods are compared to forecast meteorological time series data.
Results show how each uncertainty estimation method performs on the forecasting task.
arXiv Detail & Related papers (2022-11-11T14:29:13Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We study two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
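The ensemble-based flavour mentioned above can be sketched in a few lines (an illustrative toy, not that paper's implementation): each ensemble member produces a prediction, the mean across members serves as the deterministic prediction, and the variance across members serves as the uncertainty estimate.

```python
import numpy as np

def ensemble_predict(member_preds):
    """member_preds: shape (n_members, n_samples), each row one ensemble
    member's regression output. Returns (mean prediction, per-sample
    variance across members as the uncertainty estimate)."""
    member_preds = np.asarray(member_preds)
    return member_preds.mean(axis=0), member_preds.var(axis=0)

# Members agree on sample 0 (low uncertainty) and disagree on sample 1.
preds = np.array([[1.0, 0.0],
                  [1.0, 2.0],
                  [1.0, 4.0]])
mean, var = ensemble_predict(preds)
```

Disagreement between independently trained members is what signals epistemic uncertainty; the cost is training and storing several models, which is the main "con" the entry alludes to.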
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Adversarial Attack for Uncertainty Estimation: Identifying Critical Regions in Neural Networks [0.0]
We propose a novel method to capture data points near the decision boundary of a neural network, points that are often associated with a specific type of uncertainty.
Uncertainty estimates are derived from perturbations of the input, unlike previous studies that apply perturbations to the model's parameters.
We show that the proposed method significantly outperforms other methods and captures model uncertainty with lower risk.
arXiv Detail & Related papers (2021-07-15T21:30:26Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
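The subtraction described above can be written out directly (a hypothetical helper, not DEUP's actual code): given a learned predictor of total generalization error and an estimate of the aleatoric noise, the epistemic part is the non-negative residual.

```python
import numpy as np

def epistemic_from_deup(total_error, aleatoric):
    """Epistemic uncertainty as predicted total generalization error
    minus the aleatoric (irreducible noise) estimate, floored at zero
    since uncertainty cannot be negative."""
    return np.maximum(np.asarray(total_error) - np.asarray(aleatoric), 0.0)

# Where predicted error exceeds the noise floor, knowledge is lacking.
u = epistemic_from_deup([0.50, 0.20], [0.30, 0.30])
```

The floor at zero encodes that the error predictor may underestimate the noise; everything above the noise floor is, by this decomposition, reducible with more data.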
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals [51.71066839337174]
Existing methods can quantify the error in the target estimation, but they tend to underestimate it.
We propose a new separable formulation for the estimation of a signal and of its uncertainty, avoiding the effect of overfitting.
We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
arXiv Detail & Related papers (2020-11-03T12:11:27Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that the predictive uncertainty estimated by current methods does not correlate strongly with the prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.