Can Bayesian Neural Networks Explicitly Model Input Uncertainty?
- URL: http://arxiv.org/abs/2501.08285v1
- Date: Tue, 14 Jan 2025 18:00:41 GMT
- Title: Can Bayesian Neural Networks Explicitly Model Input Uncertainty?
- Authors: Matias Valdenegro-Toro, Marco Zullich
- Abstract summary: We build a two-input Bayesian Neural Network (one input for the mean, one for the standard deviation) and evaluate its capabilities for input uncertainty estimation.
Our results indicate that only some uncertainty estimation methods for approximate Bayesian NNs can model input uncertainty, in particular Ensembles and Flipout.
- Score: 6.9060054915724
- Abstract: Inputs to machine learning models can have associated noise or uncertainties, but they are often ignored and not modelled. It is unknown whether Bayesian Neural Networks and their approximations are able to consider uncertainty in their inputs. In this paper, we build a two-input Bayesian Neural Network (mean and standard deviation) and evaluate its capabilities for input uncertainty estimation across different methods like Ensembles, MC-Dropout, and Flipout. Our results indicate that only some uncertainty estimation methods for approximate Bayesian NNs can model input uncertainty, in particular Ensembles and Flipout.
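The abstract describes a network with two inputs, one for the input mean and one for its standard deviation, evaluated under approximate Bayesian methods such as MC-Dropout, Ensembles, and Flipout. A minimal sketch of what such a two-input model could look like with MC-Dropout is given below; the architecture, layer sizes, dropout rate, and the concatenation of the two inputs are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class TwoInputMCDropoutNet(nn.Module):
    """Hypothetical two-input network: receives the input mean and the input
    standard deviation as separate features, concatenates them, and uses
    dropout at inference time (MC-Dropout) to obtain predictive uncertainty."""

    def __init__(self, in_dim: int, hidden: int = 64, out_dim: int = 1, p: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * in_dim, hidden),  # mean and std concatenated
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x_mean: torch.Tensor, x_std: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_mean, x_std], dim=-1))

def mc_dropout_predict(model, x_mean, x_std, n_samples: int = 50):
    """Keep dropout active and summarise the spread over stochastic forward passes."""
    model.train()  # dropout stays on at prediction time
    preds = torch.stack([model(x_mean, x_std) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)  # predictive mean and uncertainty
```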
Related papers
- Unified Uncertainties: Combining Input, Data and Model Uncertainty into a Single Formulation [6.144680854063938]
We propose a method for propagating uncertainty in the inputs through a Neural Network.
Our results show that this propagation of input uncertainty results in a more stable decision boundary.
We discuss and demonstrate that input uncertainty, when propagated through the model, results in model uncertainty at the outputs.
arXiv Detail & Related papers (2024-06-26T23:13:45Z)
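The entry above describes pushing input uncertainty through a network so that it shows up as model uncertainty at the output. The paper has its own formulation; as a minimal, generic sketch, input uncertainty can be propagated by Monte Carlo sampling (the function name and the Gaussian input-noise assumption below are illustrative, not the authors' method):

```python
import torch

def propagate_input_uncertainty(model, x_mean, x_std, n_samples: int = 100):
    """Sampling-based sketch: draw inputs from N(x_mean, x_std**2), run the
    model on each draw, and report the spread of the outputs as the output
    uncertainty induced by the uncertain input."""
    noise = torch.randn(n_samples, *x_mean.shape)
    samples = x_mean + x_std * noise          # sampled noisy inputs
    outputs = torch.stack([model(x) for x in samples])
    return outputs.mean(dim=0), outputs.std(dim=0)
```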
- Gaussian Mixture Models for Affordance Learning using Bayesian Networks [50.18477618198277]
Affordances are fundamental descriptors of relationships between actions, objects and effects.
This paper approaches the problem of an embodied agent exploring the world and learning these affordances autonomously from its sensory experiences.
arXiv Detail & Related papers (2024-02-08T22:05:45Z)
- Uncertainty in Graph Contrastive Learning with Bayesian Neural Networks [101.56637264703058]
We show that a variational Bayesian neural network approach can be used to improve uncertainty estimates.
We propose a new measure of uncertainty for contrastive learning that is based on the disagreement in likelihood due to different positive samples.
arXiv Detail & Related papers (2023-11-30T22:32:24Z)
- BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks [50.15201777970128]
We propose BayesCap, which learns a Bayesian identity mapping for the frozen model, allowing uncertainty estimation.
BayesCap is a memory-efficient method that can be trained on a small fraction of the original dataset.
We show the efficacy of our method on a wide variety of tasks with a diverse set of architectures.
arXiv Detail & Related papers (2022-07-14T12:50:09Z)
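The BayesCap entry above describes adding uncertainty to a frozen predictor by learning an identity-like mapping on top of it. A heavily simplified sketch of that idea follows; the original method uses a different output distribution, and the class name, architecture, and Gaussian likelihood here are assumptions for illustration:

```python
import torch
import torch.nn as nn

class GaussianCap(nn.Module):
    """Small 'cap' trained on top of a frozen model's outputs: it predicts a
    mean (pushed towards the identity) and a per-output variance, giving the
    frozen predictor uncertainty estimates without retraining it."""

    def __init__(self, out_dim: int, hidden: int = 32):
        super().__init__()
        self.mu_head = nn.Sequential(nn.Linear(out_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))
        self.log_var_head = nn.Sequential(nn.Linear(out_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

    def forward(self, y_frozen: torch.Tensor):
        return self.mu_head(y_frozen), self.log_var_head(y_frozen)

def cap_nll(mu, log_var, y_frozen):
    # Gaussian negative log-likelihood of the frozen prediction under the cap:
    # minimising it keeps mu close to the identity while learning the variance.
    return 0.5 * (((y_frozen - mu) ** 2) * torch.exp(-log_var) + log_var).mean()
```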
- Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called Variational Neural Networks (VNNs).
A VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z)
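As a rough illustration of the sub-layer idea summarised in the VNN entry above, a single such layer could look like the following; the Gaussian output distribution and reparameterised sampling are assumptions based on the summary, not the paper's exact construction:

```python
import torch
import torch.nn as nn

class VariationalLayer(nn.Module):
    """One layer in the spirit of the summary above: two learnable sub-layers
    map the layer's input to the mean and log-variance of a Gaussian over the
    layer's output, and the output is a sample from that distribution."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.mean_sublayer = nn.Linear(in_dim, out_dim)
        self.log_var_sublayer = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu = self.mean_sublayer(x)
        std = torch.exp(0.5 * self.log_var_sublayer(x))
        return mu + std * torch.randn_like(mu)  # reparameterised sample
```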
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
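The Nadaraya-Watson estimate mentioned in the NUQ entry above is a classical kernel-weighted estimate of p(y | x). A minimal sketch for a classifier is given below; the RBF kernel, the bandwidth, and how NUQ turns this estimate into its final uncertainty score are simplifications:

```python
import numpy as np

def nadaraya_watson_label_probs(x, X_train, y_train, n_classes, bandwidth=1.0):
    """Kernel-weighted estimate of the conditional label distribution p(y | x):
    training points near the query dominate the estimate. Its entropy, or the
    total kernel mass, can then serve as an uncertainty measure for a
    deterministic classifier."""
    sq_dists = np.sum((X_train - x) ** 2, axis=1)
    weights = np.exp(-sq_dists / (2.0 * bandwidth ** 2))  # RBF kernel
    class_mass = np.array([weights[y_train == c].sum() for c in range(n_classes)])
    total = class_mass.sum()
    return class_mass / total if total > 0 else np.full(n_classes, 1.0 / n_classes)
```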
- Robust uncertainty estimates with out-of-distribution pseudo-inputs training [0.0]
We propose to explicitly train the uncertainty predictor in regions where we are not given data, in order to make it reliable.
As one cannot train without data, we provide mechanisms for generating pseudo-inputs in informative low-density regions of the input space.
With a holistic evaluation, we demonstrate that this yields robust and interpretable predictions of uncertainty while retaining state-of-the-art performance on diverse tasks.
arXiv Detail & Related papers (2022-01-15T17:15:07Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
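Of the two families named in the entry above, the ensemble-based one has a particularly compact generic form: train several networks independently and read uncertainty off their disagreement. A minimal sketch, with the model list and reduction purely illustrative:

```python
import torch

def ensemble_predict(models, x):
    """Ensemble-based uncertainty: run several independently trained networks
    on the same input; the mean is the prediction and the spread across
    members is the uncertainty estimate."""
    with torch.no_grad():
        preds = torch.stack([model(x) for model in models])
    return preds.mean(dim=0), preds.std(dim=0)
```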
- Understanding Uncertainty in Bayesian Deep Learning [0.0]
We show that traditional training procedures for neural linear models (NLMs) can drastically underestimate uncertainty in data-scarce regions.
We propose a novel training method that can both capture useful predictive uncertainties and allow for the incorporation of domain knowledge.
arXiv Detail & Related papers (2021-05-21T19:22:17Z)
- Getting a CLUE: A Method for Explaining Uncertainty Estimates [30.367995696223726]
We propose a novel method for interpreting uncertainty estimates from differentiable probabilistic models.
Our method, Counterfactual Latent Uncertainty Explanations (CLUE), indicates how to change an input, while keeping it on the data manifold, so that the model becomes more certain in its prediction.
arXiv Detail & Related papers (2020-06-11T21:53:15Z)
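A rough sketch of a CLUE-style search, as described in the entry above, is shown below: the counterfactual is optimised in the latent space of an auxiliary generative model so that the decoded input stays near the data manifold while the predictor's uncertainty drops. The encoder, decoder, and uncertainty_fn callables and the exact objective are assumptions for illustration, not the paper's implementation:

```python
import torch

def clue_style_counterfactual(encoder, decoder, uncertainty_fn, x,
                              steps=200, lr=0.05, dist_weight=1.0):
    """Optimise a latent code so that the decoded point has low predictive
    uncertainty while staying close to the original input."""
    z = encoder(x).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_cf = decoder(z)
        loss = uncertainty_fn(x_cf) + dist_weight * torch.norm(x_cf - x)
        loss.backward()
        opt.step()
    return decoder(z).detach()
```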
This list is automatically generated from the titles and abstracts of the papers on this site.