Toward Robust Uncertainty Estimation with Random Activation Functions
- URL: http://arxiv.org/abs/2302.14552v1
- Date: Tue, 28 Feb 2023 13:17:56 GMT
- Title: Toward Robust Uncertainty Estimation with Random Activation Functions
- Authors: Yana Stoyanova, Soroush Ghandi, Maryam Tavakol
- Abstract summary: We propose a novel approach for uncertainty quantification via ensembles, called Random Activation Functions (RAFs) Ensemble.
RAFs Ensemble outperforms state-of-the-art ensemble uncertainty quantification methods on both synthetic and real-world datasets.
- Score: 3.0586855806896045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are in the limelight of machine learning with their
excellent performance in many data-driven applications. However, they can lead
to inaccurate predictions when queried on out-of-distribution data points,
which can have detrimental effects especially in sensitive domains, such as
healthcare and transportation, where erroneous predictions can be very costly
and/or dangerous. Consequently, quantifying the uncertainty of the output of a
neural network is often leveraged to evaluate the confidence of its
predictions, and ensemble models have proved to be effective in measuring the
uncertainty by utilizing the variance of predictions over a pool of models. In
this paper, we propose a novel approach for uncertainty quantification via
ensembles, called Random Activation Functions (RAFs) Ensemble, that aims at
improving the ensemble diversity toward a more robust estimation, by
equipping each neural network with a different (random) activation
function. An extensive empirical study demonstrates that RAFs Ensemble outperforms
state-of-the-art ensemble uncertainty quantification methods on both synthetic
and real-world datasets in a series of regression tasks.
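The core idea in the abstract (an ensemble whose members differ in their activation functions, with predictive variance as the uncertainty signal) can be illustrated with a small toy sketch. This is our own illustration, not the paper's training procedure: hidden weights are fixed at random and only the output layer is fit by least squares (an extreme-learning-machine-style shortcut), and the activation pool is an arbitrary choice.

```python
import numpy as np

# Pool of candidate activation functions; each ensemble member gets one.
ACTIVATIONS = [np.tanh,
               lambda z: np.maximum(z, 0.0),       # ReLU
               lambda z: 1.0 / (1.0 + np.exp(-z)), # sigmoid
               np.sin]

def fit_member(x, y, act, hidden=50, seed=None):
    """One-hidden-layer net: random fixed hidden weights, least-squares output."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(x.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = act(x @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta, act

def predict_member(model, x):
    W, b, beta, act = model
    return act(x @ W + b) @ beta

def ensemble_uncertainty(models, x):
    # Mean prediction and spread (std) across the member pool.
    preds = np.stack([predict_member(m, x) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x[:, 0]) + 0.1 * rng.normal(size=200)

models = [fit_member(x, y, ACTIVATIONS[i % len(ACTIVATIONS)], seed=i)
          for i in range(8)]

x_in = np.array([[0.0]])    # inside the training range
x_out = np.array([[10.0]])  # far outside the training range
_, s_in = ensemble_uncertainty(models, x_in)
_, s_out = ensemble_uncertainty(models, x_out)
print(float(s_in[0]), float(s_out[0]))  # spread should grow out-of-distribution
```

Far outside the training range the members' extrapolations disagree (saturating, linear, and oscillating activations behave very differently there), so the ensemble standard deviation grows, which is the kind of activation-induced diversity the paper exploits.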
Related papers
- Tractable Function-Space Variational Inference in Bayesian Neural
Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z) - Looking at the posterior: accuracy and uncertainty of neural-network
predictions [0.0]
We show that prediction accuracy depends on both epistemic and aleatoric uncertainty.
We introduce a novel acquisition function that outperforms common uncertainty-based methods.
arXiv Detail & Related papers (2022-11-26T16:13:32Z) - The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z) - Uncertainty Modeling for Out-of-Distribution Generalization [56.957731893992495]
We argue that the feature statistics can be properly manipulated to improve the generalization ability of deep learning models.
Common methods often consider the feature statistics as deterministic values measured from the learned features.
We improve the network generalization ability by modeling the uncertainty of domain shifts with synthesized feature statistics during training.
arXiv Detail & Related papers (2022-02-08T16:09:12Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
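The Nadaraya-Watson estimate mentioned in the NUQ summary above can be sketched in a few lines; the Gaussian kernel, bandwidth, and toy data below are illustrative assumptions on our part, not the paper's choices.

```python
import numpy as np

def nw_label_distribution(x_query, X, Y, n_classes, h=0.5):
    """Nadaraya-Watson estimate of the conditional label distribution p(y | x)."""
    # Gaussian kernel weight between the query and every training point.
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2 * h ** 2))
    p = np.array([w[Y == c].sum() for c in range(n_classes)])
    return p / w.sum()

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

# Two well-separated Gaussian clusters, one per class.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
Y = np.array([0] * 50 + [1] * 50)

p_near = nw_label_distribution(np.array([-2.0, -2.0]), X, Y, 2)  # at a cluster
p_mid = nw_label_distribution(np.array([0.0, 0.0]), X, Y, 2)     # between them
print(entropy(p_near), entropy(p_mid))
```

The entropy of the estimated p(y | x) then serves as an uncertainty score: it is near zero inside a cluster and rises between the classes, where the label is genuinely ambiguous.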
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Evaluating Predictive Distributions: Does Bayesian Deep Learning Work? [45.290773422944866]
Posterior predictive distributions quantify uncertainties ignored by point estimates.
This paper introduces The Neural Testbed, which provides tools for the systematic evaluation of agents that generate such predictions.
arXiv Detail & Related papers (2021-10-09T18:54:02Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
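A minimal sketch of the entropy-raising idea described above, assuming the simplest possible mechanism (linear interpolation toward the label prior); how the paper actually locates the unjustifiably overconfident regions is not reproduced here.

```python
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

def temper_toward_prior(pred, prior, lam):
    """Mix a predictive distribution with the label prior; lam in [0, 1]."""
    # lam = 0 keeps the prediction; lam = 1 returns the prior outright.
    return (1.0 - lam) * pred + lam * prior

pred = np.array([0.97, 0.02, 0.01])  # an overconfident prediction
prior = np.array([0.5, 0.3, 0.2])    # empirical label distribution
calibrated = temper_toward_prior(pred, prior, 0.5)
print(entropy(pred), entropy(calibrated))  # mixing raises the entropy here
```

Because entropy is concave, mixing a sharp prediction with a flatter prior raises its entropy, pushing the model's confidence in those regions back toward what the label distribution alone would justify.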
arXiv Detail & Related papers (2021-02-22T07:02:37Z) - Cross-Validation and Uncertainty Determination for Randomized Neural
Networks with Applications to Mobile Sensors [0.0]
Extreme learning machines provide an attractive and efficient method for supervised learning under limited computing resources and for green machine learning.
Results are discussed about supervised learning with such networks and regression methods in terms of consistency and bounds for the generalization and prediction error.
arXiv Detail & Related papers (2021-01-06T12:28:06Z) - Ramifications of Approximate Posterior Inference for Bayesian Deep
Learning in Adversarial and Out-of-Distribution Settings [7.476901945542385]
We show that Bayesian deep learning models, on certain occasions, marginally outperform conventional neural networks.
Preliminary investigations indicate the potential inherent role of bias due to choices of initialisation, architecture or activation functions.
arXiv Detail & Related papers (2020-09-03T16:58:15Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.