Uncertainty Quantification in Deep Residual Neural Networks
- URL: http://arxiv.org/abs/2007.04905v1
- Date: Thu, 9 Jul 2020 16:05:37 GMT
- Title: Uncertainty Quantification in Deep Residual Neural Networks
- Authors: Lukasz Wandzik, Raul Vicente Garcia, Jörg Krüger
- Abstract summary: Uncertainty quantification is an important and challenging problem in deep learning.
Previous methods rely on dropout layers, which are not present in modern deep architectures, or on batch normalization, which is sensitive to batch size.
We show that training residual networks with stochastic depth can be interpreted as a variational approximation to the intractable posterior over the weights in Bayesian neural networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainty quantification is an important and challenging problem in deep
learning. Previous methods rely on dropout layers which are not present in
modern deep architectures or batch normalization which is sensitive to batch
sizes. In this work, we address the problem of uncertainty quantification in
deep residual networks by using a regularization technique called stochastic
depth. We show that training residual networks using stochastic depth can be
interpreted as a variational approximation to the intractable posterior over
the weights in Bayesian neural networks. We demonstrate that by sampling from a
distribution of residual networks with varying depth and shared weights,
meaningful uncertainty estimates can be obtained. Moreover, compared to the
original formulation of residual networks, our method produces well-calibrated
softmax probabilities with only minor changes to the network's structure. We
evaluate our approach on popular computer vision datasets and measure the
quality of uncertainty estimates. We also test the robustness to domain shift
and show that our method is able to express higher predictive uncertainty on
out-of-distribution samples. Finally, we demonstrate how the proposed approach
could be used to obtain uncertainty estimates in facial verification
applications.
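Below is a minimal PyTorch sketch of the test-time procedure the abstract describes: sampling several stochastic-depth realizations of a residual network with shared weights and using the averaged softmax and its entropy as an uncertainty signal. The block structure, drop probability, sample count, and entropy score are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticDepthBlock(nn.Module):
    """Residual block whose residual branch is skipped with probability p_drop."""
    def __init__(self, channels, p_drop=0.2):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.p_drop = p_drop

    def forward(self, x):
        # Keep the drop stochastic at test time too, so repeated forward passes
        # sample residual networks of varying depth with shared weights.
        if torch.rand(()) < self.p_drop:
            return x
        return x + self.branch(x)

def predict_with_uncertainty(model, x, n_samples=20):
    """Average the softmax over sampled depths; predictive entropy is the uncertainty score."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy
```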
Related papers
- Cycle Consistency-based Uncertainty Quantification of Neural Networks in Inverse Imaging Problems [10.992084413881592]
Uncertainty estimation is critical for numerous applications of deep neural networks.
We show an uncertainty quantification approach for deep neural networks used in inverse problems based on cycle consistency.
arXiv Detail & Related papers (2023-05-22T09:23:18Z)
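A hedged sketch of the cycle-consistency idea summarized in the entry above: re-apply the physical forward model to the network's reconstruction, reconstruct again, and use the discrepancy between the two passes as an uncertainty proxy. Both `net` and `forward_op` are placeholders; the paper's actual estimator may be defined differently.

```python
import torch

def cycle_uncertainty(net, forward_op, y):
    """Per-sample uncertainty score from one forward-backward cycle."""
    with torch.no_grad():
        x_hat = net(y)               # reconstruction from the measurement y
        y_cycle = forward_op(x_hat)  # re-simulated measurement
        x_cycle = net(y_cycle)       # reconstruction of the cycled measurement
    # Inconsistency between the two reconstructions as an uncertainty proxy.
    return (x_hat - x_cycle).flatten(start_dim=1).pow(2).mean(dim=1)
```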
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of the evidence carried by each sample, according to which the objective's loss terms are dynamically reweighted so the network focuses on representation learning for uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Improved uncertainty quantification for neural networks with Bayesian last layer [0.0]
Uncertainty quantification is an important task in machine learning.
We present a reformulation of the log-marginal likelihood of a neural network with a Bayesian last layer (BLL) that allows for efficient training using backpropagation.
arXiv Detail & Related papers (2023-02-21T20:23:56Z)
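For context on the Bayesian-last-layer entry above, here is the textbook, differentiable form of the log-marginal likelihood for a Gaussian linear model on fixed network features; the paper's specific reformulation may differ, and the prior precision `alpha` and noise precision `beta` are fixed scalars only for illustration.

```python
import math
import torch

def bll_log_marginal_likelihood(features, targets, alpha=1.0, beta=25.0):
    """features: (N, D) network features, targets: (N,) regression targets."""
    n, d = features.shape
    A = alpha * torch.eye(d) + beta * features.T @ features    # posterior precision
    m = beta * torch.linalg.solve(A, features.T @ targets)     # posterior mean
    residual = targets - features @ m
    e_m = 0.5 * beta * residual.pow(2).sum() + 0.5 * alpha * m.pow(2).sum()
    log_det_A = torch.linalg.slogdet(A).logabsdet
    # Log evidence of the Bayesian last layer; gradients flow back into the
    # feature extractor that produced `features`.
    return (0.5 * d * math.log(alpha) + 0.5 * n * math.log(beta)
            - e_m - 0.5 * log_det_A - 0.5 * n * math.log(2 * math.pi))
```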
- Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called Variational Neural Networks (VNNs).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z)
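A minimal sketch of the kind of layer the VNN entry above describes: two learnable sub-layers map the input to the mean and log-variance of the layer's output distribution, and the activation is sampled from it. Layer names and the Gaussian choice are assumptions.

```python
import torch
import torch.nn as nn

class VariationalLayer(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.mean_layer = nn.Linear(in_features, out_features)
        self.logvar_layer = nn.Linear(in_features, out_features)

    def forward(self, x):
        mu = self.mean_layer(x)
        std = torch.exp(0.5 * self.logvar_layer(x))
        return mu + std * torch.randn_like(mu)   # one sample from N(mu, std^2)
```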
- Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with a proposed solution for extracting aleatoric and epistemic uncertainty from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
- Depth Uncertainty in Neural Networks [2.6763498831034043]
Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes.
By exploiting the sequential structure of feed-forward networks, we are able to both evaluate our training objective and make predictions with a single forward pass.
We validate our approach on real-world regression and image classification tasks.
arXiv Detail & Related papers (2020-06-15T14:33:40Z)
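To illustrate the single-forward-pass idea in the depth-uncertainty entry above, the following sketch attaches a shared output head to every block and marginalizes the per-depth predictions under a learnable categorical distribution over depth; the architecture and variational details are simplified assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthMarginalizedNet(nn.Module):
    def __init__(self, width, n_blocks, n_classes):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(width, width), nn.ReLU()) for _ in range(n_blocks)]
        )
        self.head = nn.Linear(width, n_classes)                  # shared output head
        self.depth_logits = nn.Parameter(torch.zeros(n_blocks))  # q(depth)

    def forward(self, x):
        per_depth, h = [], x
        for block in self.blocks:
            h = block(h)
            per_depth.append(F.softmax(self.head(h), dim=-1))
        q_depth = F.softmax(self.depth_logits, dim=0)
        # Marginal predictive distribution: sum_d q(d) p(y | x, depth=d),
        # obtained in a single forward pass through the network.
        return sum(q * p for q, p in zip(q_depth, per_depth))
```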
- Uncertainty-Aware CNNs for Depth Completion: Uncertainty from Beginning to End [18.49954482336334]
We focus on modeling the uncertainty of depth data in depth completion starting from the sparse noisy input all the way to the final prediction.
We propose a novel approach to identify disturbed measurements in the input by learning an input confidence estimator in a self-supervised manner based on normalized convolutional neural networks (NCNNs).
When we evaluate our approach on the KITTI dataset for depth completion, we outperform all existing Bayesian deep learning approaches in terms of prediction accuracy, quality of the uncertainty measure, and computational efficiency.
arXiv Detail & Related papers (2020-06-05T10:18:35Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
A novel loss function and centroid-updating scheme make training scalable while matching the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
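A rough sketch in the spirit of the single-pass deterministic entry above: certainty is the largest RBF kernel value between a per-class embedding of the features and that class's centroid, so inputs far from every centroid can be rejected. The centroid update (an exponential moving average in the original method), the dimensions, and the length scale here are all illustrative.

```python
import torch
import torch.nn as nn

class CentroidHead(nn.Module):
    def __init__(self, feat_dim, n_classes, embed_dim=64, sigma=0.1):
        super().__init__()
        # One linear map per class and one centroid per class (the centroids
        # would be EMA-updated during training; kept as a plain buffer here).
        self.W = nn.Parameter(torch.randn(n_classes, embed_dim, feat_dim) * 0.05)
        self.register_buffer("centroids", torch.zeros(n_classes, embed_dim))
        self.sigma = sigma

    def forward(self, feats):                              # feats: (B, feat_dim)
        z = torch.einsum("cef,bf->bce", self.W, feats)     # (B, n_classes, embed_dim)
        dist2 = (z - self.centroids.unsqueeze(0)).pow(2).mean(dim=-1)
        K = torch.exp(-dist2 / (2 * self.sigma ** 2))      # RBF kernel per class
        certainty, pred = K.max(dim=-1)                    # low certainty => reject
        return pred, certainty
```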
- Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for calibrated uncertainty on a ReLU network is "to be a bit Bayesian".
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
arXiv Detail & Related papers (2020-02-24T08:52:06Z)
- Variational Depth Search in ResNets [2.6763498831034043]
One-shot neural architecture search allows joint learning of weights and network architecture, reducing computational cost.
We limit our search space to the depth of residual networks and formulate an analytically tractable variational objective that allows for an unbiased approximate posterior over depths in one-shot.
We compare our proposed method against manual search over network depths on the MNIST, Fashion-MNIST, and SVHN datasets.
arXiv Detail & Related papers (2020-02-06T16:00:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.