Multivariate Deep Evidential Regression
- URL: http://arxiv.org/abs/2104.06135v2
- Date: Thu, 15 Apr 2021 12:47:38 GMT
- Title: Multivariate Deep Evidential Regression
- Authors: Nis Meinert and Alexander Lavin
- Abstract summary: A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with a proposed solution to extract aleatoric and epistemic uncertainties from regression-based neural networks.
- Score: 77.34726150561087
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is significant need for principled uncertainty reasoning in machine
learning systems as they are increasingly deployed in safety-critical domains.
A new approach with uncertainty-aware neural networks shows promise over
traditional deterministic methods, yet several important gaps in the theory and
implementation of these networks remain. We discuss three issues with a
proposed solution to extract aleatoric and epistemic uncertainties from
regression-based neural networks. The aforementioned proposal derives a
technique by placing evidential priors over the original Gaussian likelihood
function and training the neural network to infer the hyperparameters of the
evidential distribution. Doing so allows for the simultaneous extraction of
both uncertainties without sampling or utilization of out-of-distribution data
for univariate regression tasks. We describe the outstanding issues in detail,
provide a possible solution, and generalize the technique for the multivariate
case.
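As background for the critique, the proposal under discussion (Deep Evidential Regression) trains the network to emit the four hyperparameters of a Normal-Inverse-Gamma (NIG) evidential prior, from which both uncertainties follow in closed form. A minimal univariate sketch using the standard NIG moment formulas; the activation choices and function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softplus(x):
    """Numerically stable softplus, used to keep parameters positive."""
    return np.logaddexp(0.0, x)

def nig_head(raw):
    """Map four raw network outputs to valid Normal-Inverse-Gamma parameters."""
    gamma = raw[0]                    # predicted mean (unconstrained)
    nu    = softplus(raw[1])          # virtual observations, nu > 0
    alpha = softplus(raw[2]) + 1.0    # alpha > 1 so the aleatoric moment exists
    beta  = softplus(raw[3])          # scale, beta > 0
    return gamma, nu, alpha, beta

def evidential_uncertainties(gamma, nu, alpha, beta):
    """Closed-form prediction and uncertainties under the NIG prior."""
    prediction = gamma                          # E[mu]
    aleatoric  = beta / (alpha - 1.0)           # E[sigma^2]
    epistemic  = beta / (nu * (alpha - 1.0))    # Var[mu]
    return prediction, aleatoric, epistemic

raw = np.array([0.3, -0.2, 1.5, 0.7])           # stand-in last-layer outputs
print(evidential_uncertainties(*nig_head(raw)))
```

Both uncertainties come from a single forward pass, which is what makes the approach sampling-free.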
Related papers
- An Analytic Solution to Covariance Propagation in Neural Networks [10.013553984400488]
This paper presents a sample-free moment propagation technique to accurately characterize the input-output distributions of neural networks.
A key enabler of our technique is an analytic solution for the covariance of random variables passed through nonlinear activation functions.
The wide applicability and merits of the proposed technique are shown in experiments analyzing the input-output distributions of trained neural networks and training Bayesian neural networks.
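An affine layer transforms a Gaussian exactly; the nontrivial step is the nonlinearity. A hedged sketch of the idea using the classical rectified-Gaussian moments for ReLU, propagating per-dimension means and variances (the paper's actual contribution is the full analytic cross-covariance, which this simplification omits):

```python
import numpy as np
from scipy.stats import norm

def affine_gaussian(W, b, mu, Sigma):
    """Exact moment propagation through y = W x + b for x ~ N(mu, Sigma)."""
    return W @ mu + b, W @ Sigma @ W.T

def relu_moments(mu, var):
    """Mean and variance of ReLU(x) for x ~ N(mu, var), elementwise."""
    sigma = np.sqrt(var)
    z = mu / sigma
    mean = mu * norm.cdf(z) + sigma * norm.pdf(z)
    second_moment = (mu**2 + var) * norm.cdf(z) + mu * sigma * norm.pdf(z)
    return mean, second_moment - mean**2
```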
arXiv Detail & Related papers (2024-03-24T14:08:24Z)
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
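The key quantity in function-space variational inference is a KL divergence between the variational posterior and the prior over function values, typically approximated at a finite set of measurement points where both distributions are (or are approximated as) Gaussian. A sketch of just that term, assuming the two Gaussians are given, e.g. from a linearized network and a GP prior; this is one ingredient, not the paper's full method:

```python
import numpy as np

def function_space_kl(m_q, S_q, m_p, S_p):
    """KL( N(m_q, S_q) || N(m_p, S_p) ) over function values at k measurement points."""
    k = m_q.shape[0]
    S_p_inv = np.linalg.inv(S_p)
    diff = m_p - m_q
    _, logdet_q = np.linalg.slogdet(S_q)
    _, logdet_p = np.linalg.slogdet(S_p)
    return 0.5 * (np.trace(S_p_inv @ S_q) + diff @ S_p_inv @ diff - k
                  + logdet_p - logdet_q)
```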
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Uncertainty Propagation through Trained Deep Neural Networks Using Factor Graphs [4.704825771757308]
Uncertainty propagation seeks to estimate aleatoric uncertainty by propagating input uncertainties to network predictions.
Motivated by the complex information flows within deep neural networks, we developed a novel approach by posing uncertainty propagation as a non-linear optimization problem using factor graphs.
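To make the "propagation as optimization" idea concrete, here is a toy two-variable factor graph: a Gaussian prior factor on the input and a soft factor tying the output to the network's mapping, solved by nonlinear least squares with a Laplace-style covariance at the optimum. All names and noise scales are illustrative; the paper's factor-graph construction over a deep network is considerably richer:

```python
import numpy as np
from scipy.optimize import least_squares

def net(x):
    """Stand-in for a trained network."""
    return np.tanh(2.0 * x)

mu_x, sigma_x = 0.4, 0.1     # Gaussian uncertainty on the input
sigma_f = 1e-3               # tight factor enforcing y = net(x)

def residuals(z):
    x, y = z
    return [(x - mu_x) / sigma_x,    # prior factor on the input
            (y - net(x)) / sigma_f]  # network factor

sol = least_squares(residuals, x0=[mu_x, net(mu_x)])
J = sol.jac                          # Jacobian at the MAP estimate
cov = np.linalg.inv(J.T @ J)         # Laplace approximation of the posterior
print("MAP output:", sol.x[1], "output variance:", cov[1, 1])
```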
arXiv Detail & Related papers (2023-12-10T17:26:27Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of the evidence carried by each sample, and use it to dynamically reweight the loss terms so that the network focuses on representation learning for uncertain classes.
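For intuition, the Fisher information of a Dirichlet distribution has a simple closed form in terms of trigamma functions, and its diagonal can weight per-class loss terms. A rough sketch of that idea only; the exact objective in $\mathcal{I}$-EDL differs:

```python
import numpy as np
from scipy.special import polygamma

def dirichlet_fim_diag(alpha):
    """Diagonal of the Fisher information of Dirichlet(alpha):
    I_ii = psi'(alpha_i) - psi'(sum(alpha)), with psi' the trigamma function."""
    return polygamma(1, alpha) - polygamma(1, alpha.sum())

def fim_weighted_loss(alpha, y_onehot):
    """Per-class squared error between the Dirichlet mean and the label,
    reweighted by the informativeness of each class's evidence (illustrative)."""
    p_hat = alpha / alpha.sum()
    return float(np.sum(dirichlet_fim_diag(alpha) * (y_onehot - p_hat) ** 2))

alpha = np.array([5.0, 1.2, 1.1])    # evidence + 1 per class
y = np.array([1.0, 0.0, 0.0])
print(fim_weighted_loss(alpha, y))
```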
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
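The estimator the method builds on is standard: a kernel-weighted (Nadaraya-Watson) estimate of the conditional label distribution, from which an uncertainty score can be read off. A minimal sketch with a Gaussian kernel; the bandwidth and the simple 1 − max-probability score are illustrative choices, not NUQ's full decomposition:

```python
import numpy as np

def nw_label_distribution(x, X_train, y_train, n_classes, h=0.5):
    """Nadaraya-Watson estimate of p(y | x) with a Gaussian kernel of bandwidth h."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * h**2))
    counts = np.array([w[y_train == c].sum() for c in range(n_classes)])
    return counts / (counts.sum() + 1e-12)

def uncertainty_score(probs):
    """Simple uncertainty proxy from the estimated label distribution."""
    return 1.0 - probs.max()
```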
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
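The sampling-free part amounts to propagating means and variances in closed form. A sketch under a diagonal (independent-activation) assumption, with multiplicative Gaussian noise eps ~ N(1, alpha) on the activations; the paper's parameterization of the posterior is more refined:

```python
import numpy as np

def noisy_activation_moments(m, v, alpha):
    """Moments of a * eps for a ~ (mean m, var v) and independent eps ~ N(1, alpha):
    E = m, Var = (1 + alpha) * v + alpha * m^2."""
    return m, (1.0 + alpha) * v + alpha * m**2

def affine_moments(W, b, m, v):
    """Moment propagation through an affine layer, treating inputs as independent."""
    return W @ m + b, (W**2) @ v
```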
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Efficient Variational Inference for Sparse Deep Learning with Theoretical Guarantee [20.294908538266867]
Sparse deep learning aims to address the challenge of huge storage consumption by deep neural networks.
In this paper, we train sparse deep neural networks with a fully Bayesian treatment under spike-and-slab priors.
We develop a set of computationally efficient variational inferences via continuous relaxation of Bernoulli distribution.
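A continuous relaxation of the Bernoulli inclusion variable is what makes the spike-and-slab posterior amenable to gradient-based variational inference. A sketch using the binary Concrete (Gumbel-Sigmoid) relaxation, which is one standard choice and assumed here rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def relaxed_bernoulli(logit_pi, tau=0.3):
    """Binary Concrete relaxation of Bernoulli(sigmoid(logit_pi)).
    Differentiable in logit_pi; approaches a hard 0/1 gate as tau -> 0."""
    u = rng.uniform(1e-6, 1.0 - 1e-6)
    logistic_noise = np.log(u) - np.log1p(-u)
    return 1.0 / (1.0 + np.exp(-(logit_pi + logistic_noise) / tau))

# A spike-and-slab weight: the gate (spike) times a continuous slab weight.
w_slab = 0.8
z = relaxed_bernoulli(logit_pi=2.0)
w = z * w_slab
print(z, w)
```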
arXiv Detail & Related papers (2020-11-15T03:27:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.