Know Where To Drop Your Weights: Towards Faster Uncertainty Estimation
- URL: http://arxiv.org/abs/2010.14019v1
- Date: Tue, 27 Oct 2020 02:56:27 GMT
- Title: Know Where To Drop Your Weights: Towards Faster Uncertainty Estimation
- Authors: Akshatha Kamath and Dwaraknath Gnaneshwar and Matias Valdenegro-Toro
- Abstract summary: Estimating uncertainty of models used in low-latency applications is a challenge due to the computationally demanding nature of uncertainty estimation techniques.
We propose Select-DC, which uses a subset of layers in a neural network to model uncertainty with Monte Carlo DropConnect (MCDC).
We show a significant reduction in the GFLOPS required to model uncertainty, compared to Monte Carlo DropConnect, with marginal trade-off in performance.
- Score: 7.605814048051737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating the epistemic uncertainty of models used in low-latency
applications, and detecting Out-Of-Distribution samples, is challenging due to
the computationally demanding nature of uncertainty estimation techniques.
Estimating model uncertainty with approximation techniques such as Monte Carlo
Dropout (MCD) and Monte Carlo DropConnect (MCDC) requires a large number of
forward passes through the network, rendering them unsuitable for low-latency
applications. We propose Select-DC, which uses a subset of layers in a neural
network to model epistemic uncertainty with MCDC. Through our experiments, we
show a significant reduction in the GFLOPS required to model uncertainty,
compared to Monte Carlo DropConnect, with a marginal trade-off in performance.
We perform a suite of experiments on the CIFAR-10, CIFAR-100, and SVHN datasets
with ResNet and VGG models. We further show how applying DropConnect to various
layers in the network, with different drop probabilities, affects the network's
performance and the entropy of the predictive distribution.
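The idea can be sketched in a few lines of NumPy: apply stochastic DropConnect masks only to a selected subset of layers (here, just the final layer of a toy MLP), run several stochastic forward passes, and average them into a predictive distribution whose entropy reflects uncertainty. The network shape, random weights, and drop probability below are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP; random weights stand in for a trained network.
W1 = rng.normal(size=(8, 16))   # input -> hidden
W2 = rng.normal(size=(16, 3))   # hidden -> 3 classes

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, p_drop=0.5):
    """One stochastic forward pass with DropConnect on the final layer only.

    Masking only a chosen subset of layers (the Select-DC idea) means the
    earlier, deterministic layers could be computed once and cached across
    Monte Carlo samples, cutting the FLOPs per sample.
    """
    h = np.maximum(x @ W1, 0.0)                  # deterministic ReLU layer
    w2 = W2 * (rng.random(W2.shape) > p_drop)    # random weight mask
    return softmax(h @ w2)

x = rng.normal(size=(1, 8))
T = 100                                          # number of MC samples
probs = np.stack([forward(x) for _ in range(T)])  # (T, 1, 3)
mean_pred = probs.mean(axis=0)                    # predictive distribution
entropy = -(mean_pred * np.log(mean_pred + 1e-12)).sum()
print("predictive entropy:", float(entropy))
```

Because only the final layer is stochastic, the hidden activations `h` are identical across all `T` passes; a real implementation would compute them once and reuse them, which is where the GFLOPS savings come from.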
Related papers
- Collapsed Inference for Bayesian Deep Learning [36.1725075097107]
We introduce a novel collapsed inference scheme that performs Bayesian model averaging using collapsed samples.
A collapsed sample represents uncountably many models drawn from the approximate posterior.
Our proposed use of collapsed samples achieves a balance between scalability and accuracy.
arXiv Detail & Related papers (2023-06-16T08:34:42Z)
- Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z)
- Controlled Dropout for Uncertainty Estimation [11.225333867982359]
Uncertainty in a neural network is one of the most discussed topics for safety-critical applications.
We present a new version of the traditional dropout layer where we are able to fix the number of dropout configurations.
arXiv Detail & Related papers (2022-05-06T09:48:11Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Calibration and Uncertainty Quantification of Bayesian Convolutional Neural Networks for Geophysical Applications [0.0]
Subsurface models should provide calibrated probabilities and the associated uncertainties in their predictions.
It has been shown that popular Deep Learning-based models are often miscalibrated, and due to their deterministic nature, provide no means to interpret the uncertainty of their predictions.
We compare three different approaches for obtaining probabilistic models based on convolutional neural networks in a Bayesian formalism.
arXiv Detail & Related papers (2021-05-25T17:54:23Z)
- Contextual Dropout: An Efficient Sample-Dependent Dropout Module [60.63525456640462]
Dropout has been demonstrated as a simple and effective module to regularize the training process of deep neural networks.
We propose contextual dropout with an efficient structural design as a simple and scalable sample-dependent dropout module.
Our experimental results show that the proposed method outperforms baseline methods in terms of both accuracy and quality of uncertainty estimation.
arXiv Detail & Related papers (2021-03-06T19:30:32Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training in these with a novel loss function and centroid updating scheme and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.