The Unreasonable Effectiveness of Deep Evidential Regression
- URL: http://arxiv.org/abs/2205.10060v3
- Date: Sun, 16 Jul 2023 14:22:21 GMT
- Title: The Unreasonable Effectiveness of Deep Evidential Regression
- Authors: Nis Meinert, Jakob Gawlikowski, Alexander Lavin
- Abstract summary: A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
- Score: 72.30888739450343
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is a significant need for principled uncertainty reasoning in machine
learning systems as they are increasingly deployed in safety-critical domains.
A new approach with uncertainty-aware regression-based neural networks (NNs),
based on learning evidential distributions for aleatoric and epistemic
uncertainties, shows promise over traditional deterministic methods and typical
Bayesian NNs, notably with the capabilities to disentangle aleatoric and
epistemic uncertainties. Despite some empirical success of Deep Evidential
Regression (DER), there are important gaps in the mathematical foundation that
raise the question of why the proposed technique seemingly works. We detail the
theoretical shortcomings and analyze the performance on synthetic and
real-world data sets, showing that Deep Evidential Regression is a heuristic
rather than an exact uncertainty quantification. We go on to discuss
corrections and redefinitions of how aleatoric and epistemic uncertainties
should be extracted from NNs.
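For context, DER commonly has the network output the four parameters (gamma, nu, alpha, beta) of a Normal-Inverse-Gamma prior and reads off aleatoric and epistemic uncertainty in closed form; the paper's argument is that these closed-form quantities behave as heuristics rather than exact posterior uncertainties. Below is a minimal Python sketch of that standard extraction (not the corrections proposed in the paper), using hypothetical parameter values:

    def der_uncertainties(gamma, nu, alpha, beta):
        """Standard Deep Evidential Regression read-out from a
        Normal-Inverse-Gamma (NIG) head with parameters (gamma, nu, alpha, beta).

        gamma        -- predicted mean E[mu]
        nu           -- virtual evidence (pseudo-count) for the mean
        alpha, beta  -- Inverse-Gamma parameters of the variance (requires alpha > 1)
        """
        prediction = gamma
        aleatoric = beta / (alpha - 1.0)          # E[sigma^2]: expected data noise
        epistemic = beta / (nu * (alpha - 1.0))   # Var[mu]: uncertainty in the mean
        return prediction, aleatoric, epistemic

    # Hypothetical NIG parameters as they might be emitted by a DER head
    pred, alea, epis = der_uncertainties(gamma=0.3, nu=2.0, alpha=3.0, beta=0.5)
    print(f"prediction={pred:.3f}  aleatoric={alea:.3f}  epistemic={epis:.3f}")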
Related papers
- Uncertainty Quantification for Forward and Inverse Problems of PDEs via Latent Global Evolution [110.99891169486366]
We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
arXiv Detail & Related papers (2024-02-13T11:22:59Z)
- Echoes of Socratic Doubt: Embracing Uncertainty in Calibrated Evidential Reinforcement Learning [1.7898305876314982]
The proposed algorithm combines deep evidential learning with quantile calibration based on principles of conformal inference.
It is tested on a suite of miniaturized Atari games (i.e., MinAtar).
arXiv Detail & Related papers (2024-02-11T05:17:56Z)
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Evidential Deep Learning: Enhancing Predictive Uncertainty Estimation for Earth System Science Applications [0.32302664881848275]
Evidential deep learning is a technique that extends parametric deep learning to higher-order distributions.
This study compares the uncertainty derived from evidential neural networks to those obtained from ensembles.
We show that evidential deep learning models attain predictive accuracy rivaling standard methods while robustly quantifying both sources of uncertainty.
arXiv Detail & Related papers (2023-09-22T23:04:51Z)
- Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification [0.0]
This paper presents preliminary results on uncertainty quantification for system identification with neural state-space models.
We frame the learning problem in a Bayesian probabilistic setting and obtain posterior distributions for the neural network's weights and outputs.
Based on the posterior, we construct credible intervals on the outputs and define a surprise index which can effectively diagnose usage of the model in a potentially dangerous out-of-distribution regime.
arXiv Detail & Related papers (2023-04-13T08:57:33Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of the evidence carried by each sample, and use it to dynamically reweight the objective loss terms so that the network focuses more on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with a proposed solution to extract aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z)
- Accurate and Reliable Forecasting using Stochastic Differential Equations [48.21369419647511]
It is critical yet challenging for deep learning models to properly characterize uncertainty that is pervasive in real-world environments.
This paper develops SDE-HNN to characterize the interaction between the predictive mean and variance of HNNs for accurate and reliable regression.
Experiments on the challenging datasets show that our method significantly outperforms the state-of-the-art baselines in terms of both predictive performance and uncertainty quantification.
arXiv Detail & Related papers (2021-03-28T04:18:11Z)
- Uncertainty Prediction for Deep Sequential Regression Using Meta Models [4.189643331553922]
This paper describes a flexible method that can generate symmetric and asymmetric uncertainty estimates.
It makes no assumptions about stationarity and outperforms competitive baselines in both drift and non-drift scenarios.
arXiv Detail & Related papers (2020-07-02T19:27:17Z)