Depth Uncertainty in Neural Networks
- URL: http://arxiv.org/abs/2006.08437v3
- Date: Mon, 7 Dec 2020 16:53:58 GMT
- Title: Depth Uncertainty in Neural Networks
- Authors: Javier Antorán, James Urquhart Allingham, José Miguel Hernández-Lobato
- Abstract summary: Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes.
By exploiting the sequential structure of feed-forward networks, we are able to both evaluate our training objective and make predictions with a single forward pass.
We validate our approach on real-world regression and image classification tasks.
- Score: 2.6763498831034043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing methods for estimating uncertainty in deep learning tend to require
multiple forward passes, making them unsuitable for applications where
computational resources are limited. To solve this, we perform probabilistic
reasoning over the depth of neural networks. Different depths correspond to
subnetworks which share weights and whose predictions are combined via
marginalisation, yielding model uncertainty. By exploiting the sequential
structure of feed-forward networks, we are able to both evaluate our training
objective and make predictions with a single forward pass. We validate our
approach on real-world regression and image classification tasks. Our approach
provides uncertainty calibration, robustness to dataset shift, and accuracies
competitive with more computationally expensive baselines.
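The mechanism in the abstract can be illustrated with a minimal NumPy sketch. All sizes, the ReLU MLP blocks, the shared softmax head, and the learned categorical posterior `q(d)` are illustrative assumptions, not the paper's exact architecture: a single forward pass visits every depth, the shared head emits a prediction at each depth, and the predictions are combined by marginalising over the depth posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical tiny MLP: D weight-sharing blocks plus one shared output head.
D, B, H, C = 5, 4, 16, 3            # depths, batch size, hidden dim, classes
W_in = rng.normal(size=(8, H))
W_blocks = rng.normal(size=(D, H, H)) * 0.1
W_head = rng.normal(size=(H, C))
depth_logits = np.zeros(D)          # parameters of the depth posterior q(d)

def predict(x):
    h = np.maximum(x @ W_in, 0.0)
    per_depth = []
    for d in range(D):              # one forward pass visits every depth
        h = np.maximum(h @ W_blocks[d], 0.0)
        per_depth.append(softmax(h @ W_head))     # prediction of subnetwork d
    q_d = softmax(depth_logits)
    # marginalise over depth: p(y|x) = sum_d q(d) p(y|x, d)
    return np.einsum("d,dbc->bc", q_d, np.stack(per_depth))

probs = predict(rng.normal(size=(B, 8)))
print(probs.shape)                  # (4, 3); each row sums to 1
```

Disagreement between the per-depth predictions is what yields the model uncertainty; in the paper, `q(d)` is fit variationally alongside the weights rather than fixed as here.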
Related papers
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z) - Fast Uncertainty Estimates in Deep Learning Interatomic Potentials [0.0]
We propose a method to estimate the predictive uncertainty based on a single neural network without the need for an ensemble.
We demonstrate that the quality of the uncertainty estimates matches those obtained from deep ensembles.
arXiv Detail & Related papers (2022-11-17T20:13:39Z) - Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z) - Variational Bayes Deep Operator Network: A data-driven Bayesian solver for parametric differential equations [0.0]
We propose Variational Bayes DeepONet (VB-DeepONet) for operator learning.
VB-DeepONet uses variational inference to account for high-dimensional posterior distributions.
arXiv Detail & Related papers (2022-06-12T04:20:11Z) - Mixtures of Laplace Approximations for Improved Post-Hoc Uncertainty in Deep Learning [24.3370326359959]
We propose to predict with a Gaussian mixture model posterior that consists of a weighted sum of Laplace approximations of independently trained deep neural networks.
We theoretically validate that our approach mitigates overconfidence "far away" from the training data and empirically compare against state-of-the-art baselines on standard uncertainty quantification benchmarks.
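The predictive described in this summary can be sketched as follows. The linear "networks", the isotropic posterior scale `sigma`, equal mixture weights, and the Monte Carlo sample count are all simplifying assumptions for illustration: each trained mode carries a Gaussian (Laplace) approximation, and the mixture's prediction averages over components and posterior samples.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical setup: K independently trained linear models; each Laplace
# approximation is a Gaussian N(w_k, sigma^2 I) around its mode.
K, F_, C, S = 3, 8, 4, 50           # components, features, classes, MC samples
modes = rng.normal(size=(K, F_, C))
sigma = 0.05                        # assumed isotropic posterior scale

def mixture_predict(x):
    """p(y|x) ~= (1/K) sum_k (1/S) sum_s softmax(x @ w_k^(s))."""
    comp_probs = []
    for k in range(K):
        ws = modes[k] + sigma * rng.normal(size=(S, F_, C))  # posterior draws
        comp_probs.append(softmax(np.einsum("f,sfc->sc", x, ws)).mean(axis=0))
    return np.mean(comp_probs, axis=0)   # equal mixture weights assumed

p = mixture_predict(rng.normal(size=F_))
print(p.shape)                      # (4,); sums to 1
```

Averaging over several modes is what spreads probability mass "far away" from the training data, mitigating the overconfidence of any single mode.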
arXiv Detail & Related papers (2021-11-05T15:52:48Z) - Interpretable Social Anchors for Human Trajectory Forecasting in Crowds [84.20437268671733]
We propose a neural network-based system to predict human trajectory in crowds.
We learn interpretable rule-based intents, and then utilise the expressiveness of neural networks to model scene-specific residuals.
Our architecture is tested on the interaction-centric benchmark TrajNet++.
arXiv Detail & Related papers (2021-05-07T09:22:34Z) - Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with a proposed solution for extracting aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z) - Uncertainty Quantification in Deep Residual Neural Networks [0.0]
Uncertainty quantification is an important and challenging problem in deep learning.
Previous methods rely on dropout layers, which are not present in many modern deep architectures, or on batch normalization, which is sensitive to batch size.
We show that training residual networks using depth can be interpreted as a variational approximation to the posterior weights in neural networks.
arXiv Detail & Related papers (2020-07-09T16:05:37Z) - Uncertainty-Aware CNNs for Depth Completion: Uncertainty from Beginning to End [18.49954482336334]
We focus on modeling the uncertainty of depth data in depth completion starting from the sparse noisy input all the way to the final prediction.
We propose a novel approach to identify disturbed measurements in the input by learning an input confidence estimator in a self-supervised manner based on normalized convolutional neural networks (NCNNs).
When we evaluate our approach on the KITTI dataset for depth completion, we outperform all the existing Bayesian Deep Learning approaches in terms of prediction accuracy, quality of the uncertainty measure, and the computational efficiency.
arXiv Detail & Related papers (2020-06-05T10:18:35Z) - Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid-updating scheme, matching the accuracy of softmax models.
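The single-pass find-and-reject behaviour described here can be sketched with a centroid-based score. The embedding dimension, the random centroids, the RBF `length_scale`, and the rejection `threshold` are all hypothetical choices for illustration: a deterministic model maps an input to an embedding, and the kernel distance to the nearest class centroid serves as the uncertainty signal.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical: a deterministic feature extractor yields embeddings; one
# learned centroid per class; distance to the nearest centroid flags OOD.
C, H = 3, 16                        # classes, embedding dim
centroids = rng.normal(size=(C, H))
length_scale = 1.0
threshold = 0.5                     # below this, the input is flagged OOD

def kernel_scores(z):
    """RBF kernel value between embedding z and each class centroid."""
    d2 = ((z[None, :] - centroids) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * length_scale ** 2))

def predict_or_reject(z):
    k = kernel_scores(z)
    if k.max() < threshold:
        return None                 # reject as out-of-distribution
    return int(k.argmax())          # otherwise predict the nearest class

in_dist = centroids[1] + 0.05 * rng.normal(size=H)   # close to class 1
far_away = 10.0 * np.ones(H)                         # far from every centroid
print(predict_or_reject(in_dist), predict_or_reject(far_away))
```

A single forward pass suffices because the uncertainty comes from the embedding geometry, not from sampling or multiple evaluations.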
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.