Controlled Dropout for Uncertainty Estimation
- URL: http://arxiv.org/abs/2205.03109v1
- Date: Fri, 6 May 2022 09:48:11 GMT
- Title: Controlled Dropout for Uncertainty Estimation
- Authors: Mehedi Hasan, Abbas Khosravi, Ibrahim Hossain, Ashikur Rahman and
Saeid Nahavandi
- Abstract summary: Uncertainty quantification in a neural network is one of the most discussed topics for safety-critical applications.
We present a new version of the traditional dropout layer in which the number of dropout configurations can be fixed.
- Score: 11.225333867982359
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uncertainty quantification in a neural network is one of the most discussed
topics for safety-critical applications. Though Neural Networks (NNs) have
achieved state-of-the-art performance for many applications, they still provide
unreliable point predictions, which lack information about uncertainty
estimates. Among various methods to enable neural networks to estimate
uncertainty, Monte Carlo (MC) dropout has gained much popularity in a short
period due to its simplicity. In this study, we present a new version of the
traditional dropout layer in which the number of dropout configurations is
fixed. Each layer can then apply this new dropout layer within the MC method
to quantify the uncertainty associated with NN predictions. We
conduct experiments on both toy and realistic datasets and compare the results
with the MC method using the traditional dropout layer. Performance analysis
utilizing uncertainty evaluation metrics corroborates that our dropout layer
offers better performance in most cases.
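The abstract describes the idea but gives no implementation details. As a rough illustration only (not the authors' code), the following PyTorch sketch fixes a pool of dropout masks at construction time and reuses one mask per MC forward pass; the class and function names, layer sizes, and the random mask-selection rule are assumptions.

```python
# Hypothetical sketch (not the authors' implementation): a dropout layer with a
# fixed, pre-generated pool of dropout configurations, reused across MC passes.
import torch
import torch.nn as nn


class ControlledDropout(nn.Module):
    def __init__(self, num_features, p=0.5, num_configs=10):
        super().__init__()
        self.num_configs = num_configs
        # Pre-generate a fixed pool of inverted-dropout masks (assumption: Bernoulli masks).
        masks = (torch.rand(num_configs, num_features) > p).float() / (1.0 - p)
        self.register_buffer("masks", masks)

    def forward(self, x, config_idx=None):
        if config_idx is None:
            if not self.training:
                return x  # plain deterministic pass
            config_idx = int(torch.randint(self.num_configs, (1,)))
        return x * self.masks[config_idx]


class SmallNet(nn.Module):
    def __init__(self, d_in=2, d_hidden=32, d_out=1, num_configs=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.drop = ControlledDropout(d_hidden, p=0.5, num_configs=num_configs)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x, config_idx=None):
        return self.fc2(self.drop(torch.relu(self.fc1(x)), config_idx))


def mc_predict(model, x):
    """One forward pass per fixed dropout configuration; the spread is the uncertainty."""
    model.eval()
    with torch.no_grad():
        preds = torch.stack([model(x, config_idx=i) for i in range(model.drop.num_configs)])
    return preds.mean(0), preds.std(0)
```

Under this sketch, every MC pass reuses one of exactly `num_configs` pre-generated configurations instead of sampling a fresh Bernoulli mask, which is the property the abstract describes.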
Related papers
- Improvements on Uncertainty Quantification for Node Classification via
Distance-Based Regularization [4.121906004284458]
Deep neural networks have achieved significant success in the last decades, but they are not well-calibrated and often produce unreliable predictions.
We propose a distance-based regularization that encourages clustered OOD nodes to remain clustered in the latent space.
We conduct extensive comparison experiments on eight standard datasets and demonstrate that the proposed regularization outperforms the state-of-the-art in both OOD detection and misclassification detection.
arXiv Detail & Related papers (2023-11-10T00:00:20Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Single-shot Bayesian approximation for neural networks [0.0]
Deep neural networks (NNs) are known for their high prediction performance.
However, NNs are prone to yielding unreliable predictions, without indicating their uncertainty, when encountering completely new situations.
We present a single-shot MC dropout approximation that preserves the advantages of BNNs while being as fast as NNs.
arXiv Detail & Related papers (2023-08-24T13:40:36Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
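The summary states this mechanism only at a high level. A hedged sketch of one way to realize it follows: score each sample's Dirichlet evidence by the log-determinant of its Fisher information matrix and reweight the per-sample loss accordingly; the weighting direction and normalization are illustrative assumptions, not the $\mathcal{I}$-EDL objective itself.

```python
# Hedged sketch: score each sample's Dirichlet evidence by the log-determinant of its
# Fisher information matrix, then reweight per-sample losses. The reweighting rule
# below (emphasizing low-information samples) is an illustrative assumption.
import torch


def dirichlet_fisher_logdet(alpha):
    # alpha: (batch, K) Dirichlet concentrations.
    # FIM of a Dirichlet: diag(trigamma(alpha)) - trigamma(alpha0) * 11^T.
    alpha0 = alpha.sum(dim=-1, keepdim=True)
    fim = torch.diag_embed(torch.polygamma(1, alpha)) - torch.polygamma(1, alpha0).unsqueeze(-1)
    return torch.logdet(fim)  # per-sample informativeness score, shape (batch,)


def fim_reweighted_loss(alpha, per_sample_loss):
    # per_sample_loss: (batch,) evidential loss terms before reduction.
    score = dirichlet_fisher_logdet(alpha)
    weights = torch.softmax(-score, dim=0) * score.shape[0]  # upweight low-information (uncertain) samples
    return (weights.detach() * per_sample_loss).mean()
```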
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Single Model Uncertainty Estimation via Stochastic Data Centering [39.71621297447397]
We are interested in estimating the uncertainties of deep neural networks.
We present a striking new finding: an ensemble of neural networks with the same weight initialization, trained on datasets that are shifted by a constant bias, gives rise to slightly inconsistent trained models.
We show that $\Delta$-UQ's uncertainty estimates are superior to many of the current methods on a variety of benchmarks.
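A minimal sketch of the finding as described (shared initialization, training sets shifted by constant biases, disagreement read out as uncertainty); the anchors, training loop, and function names are assumptions for illustration, not the paper's exact $\Delta$-UQ procedure.

```python
# Illustrative sketch (not the paper's exact recipe): several copies of one
# initialization, each trained on inputs shifted by a different constant bias;
# their disagreement serves as an uncertainty estimate.
import copy
import torch
import torch.nn as nn


def train_shifted_ensemble(make_model, X, y, anchors, epochs=200, lr=1e-2):
    base = make_model()  # single shared weight initialization
    models = []
    for c in anchors:
        model = copy.deepcopy(base)
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(X + c), y)
            loss.backward()
            opt.step()
        models.append((model, c))
    return models


def predict_with_uncertainty(models, x):
    with torch.no_grad():
        preds = torch.stack([m(x + c) for m, c in models])
    return preds.mean(0), preds.std(0)  # mean prediction and disagreement-based uncertainty
```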
arXiv Detail & Related papers (2022-07-14T23:54:54Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
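A toy sketch of the underlying estimator as described: a Nadaraya-Watson kernel estimate of the conditional label distribution over feature embeddings, with predictive entropy as a simple uncertainty score. The Gaussian kernel, bandwidth, and entropy readout are assumptions rather than the paper's exact NUQ construction.

```python
# Minimal, illustrative Nadaraya-Watson estimate of p(y | x) over embeddings,
# with predictive entropy as a simple uncertainty score (assumptions, not NUQ itself).
import numpy as np


def nadaraya_watson_label_dist(z_query, z_train, y_train, num_classes, bandwidth=1.0):
    # z_query: (d,) embedding of the query; z_train: (n, d); y_train: (n,) integer labels.
    d2 = ((z_train - z_query) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))              # Gaussian kernel weights
    probs = np.bincount(y_train, weights=w, minlength=num_classes)
    return probs / max(probs.sum(), 1e-12)


def predictive_entropy(probs, eps=1e-12):
    return -np.sum(probs * np.log(probs + eps))            # higher entropy = more uncertain
```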
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge that limits the widespread adoption of deep learning has been the fragility of deep neural networks to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- Contextual Dropout: An Efficient Sample-Dependent Dropout Module [60.63525456640462]
Dropout has been demonstrated as a simple and effective module to regularize the training process of deep neural networks.
We propose contextual dropout with an efficient structural design as a simple and scalable sample-dependent dropout module.
Our experimental results show that the proposed method outperforms baseline methods in terms of both accuracy and quality of uncertainty estimation.
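The summary describes the module only as sample-dependent dropout. One hedged way to sketch that idea: a small gating network maps each input to per-unit keep probabilities, and a relaxed Bernoulli keeps the mask differentiable; the architecture and rescaling below are assumptions, not the paper's design.

```python
# Illustrative sketch of a sample-dependent dropout layer (an assumption about the
# general idea, not the paper's architecture).
import torch
import torch.nn as nn


class SampleDependentDropout(nn.Module):
    def __init__(self, num_features, temperature=0.5):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(num_features, num_features), nn.Sigmoid())
        self.temperature = temperature

    def forward(self, x):
        keep_prob = self.gate(x)                                   # (batch, features), per-sample
        if self.training:
            dist = torch.distributions.RelaxedBernoulli(self.temperature, probs=keep_prob)
            mask = dist.rsample()                                   # differentiable soft mask
        else:
            mask = keep_prob                                        # expected mask at test time
        return x * mask / keep_prob.clamp_min(1e-6)                 # inverted-dropout style rescale
```

For MC-style uncertainty estimation one would keep sampling masks at test time rather than using the expected mask.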
arXiv Detail & Related papers (2021-03-06T19:30:32Z)
- A Bayesian Neural Network based on Dropout Regulation [0.0]
We present "Dropout Regulation" (DR), which consists of automatically adjusting the dropout rate during training using a controller as used in automation.
DR allows for a precise estimation of the uncertainty which is comparable to the state-of-the-art while remaining simple to implement.
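The controller itself is not specified in the summary. As a loose illustration, a simple PI-style controller could adjust the dropout rate from the gap between training and validation loss; the control signal, gains, and update rule here are assumptions, not the paper's DR controller.

```python
# Hypothetical sketch: a PI-style controller that regulates the dropout rate from the
# train/validation loss gap. Control signal and gains are illustrative assumptions.
class DropoutRateController:
    def __init__(self, k_p=0.5, k_i=0.05, p_min=0.0, p_max=0.9):
        self.k_p, self.k_i = k_p, k_i
        self.p_min, self.p_max = p_min, p_max
        self.integral = 0.0

    def update(self, train_loss, val_loss):
        error = val_loss - train_loss          # a positive gap suggests overfitting
        self.integral += error
        p = self.k_p * error + self.k_i * self.integral
        return min(max(p, self.p_min), self.p_max)  # new dropout rate, clipped to a valid range
```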
arXiv Detail & Related papers (2021-02-03T09:39:50Z)
- Know Where To Drop Your Weights: Towards Faster Uncertainty Estimation [7.605814048051737]
Estimating uncertainty of models used in low-latency applications is a challenge due to the computationally demanding nature of uncertainty estimation techniques.
We propose Select-DC, which uses a subset of layers in a neural network to model uncertainty with Monte Carlo DropConnect (MCDC).
We show a significant reduction in the GFLOPS required to model uncertainty, compared to Monte Carlo DropConnect, with marginal trade-off in performance.
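A rough sketch of the idea as described: compute the deterministic part of the network once, and run repeated Monte Carlo DropConnect passes only through a small stochastic head, so the extra cost per MC sample stays low. The layer split and names are assumptions, not the Select-DC implementation.

```python
# Illustrative sketch: deterministic trunk computed once, MC DropConnect applied only
# in the final layers. Names and the layer split are assumptions.
import torch
import torch.nn as nn


class DropConnectLinear(nn.Module):
    def __init__(self, in_features, out_features, p=0.5):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.p = p

    def forward(self, x):
        # Sample a Bernoulli mask over the weights on every forward pass (DropConnect).
        mask = (torch.rand_like(self.linear.weight) > self.p).float() / (1.0 - self.p)
        return nn.functional.linear(x, self.linear.weight * mask, self.linear.bias)


def mc_dropconnect_predict(trunk, stochastic_head, x, num_samples=20):
    with torch.no_grad():
        feats = trunk(x)                                   # deterministic part, computed once
        preds = torch.stack([stochastic_head(feats) for _ in range(num_samples)])
    return preds.mean(0), preds.std(0)                      # predictive mean and uncertainty
```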
arXiv Detail & Related papers (2020-10-27T02:56:27Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.