Evidential Deep Learning: Enhancing Predictive Uncertainty Estimation
for Earth System Science Applications
- URL: http://arxiv.org/abs/2309.13207v2
- Date: Mon, 19 Feb 2024 22:01:12 GMT
- Title: Evidential Deep Learning: Enhancing Predictive Uncertainty Estimation
for Earth System Science Applications
- Authors: John S. Schreck, David John Gagne II, Charlie Becker, William E.
Chapman, Kim Elmore, Da Fan, Gabrielle Gantos, Eliot Kim, Dhamma Kimpara,
Thomas Martin, Maria J. Molina, Vanessa M. Pryzbylo, Jacob Radford, Belen
Saavedra, Justin Willson, Christopher Wirz
- Abstract summary: Evidential deep learning is a technique that extends parametric deep learning to higher-order distributions.
This study compares the uncertainty derived from evidential neural networks to that obtained from ensembles.
We show that evidential deep learning models attain predictive accuracy rivaling standard methods while robustly quantifying both sources of uncertainty.
- Score: 0.32302664881848275
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robust quantification of predictive uncertainty is critical for understanding
factors that drive weather and climate outcomes. Ensembles provide predictive
uncertainty estimates and can be decomposed physically, but both physics and
machine learning ensembles are computationally expensive. Parametric deep
learning can estimate uncertainty with one model by predicting the parameters
of a probability distribution, but it does not account for epistemic uncertainty.
Evidential deep learning, a technique that extends parametric deep learning to
higher-order distributions, can account for both aleatoric and epistemic
uncertainty with one model. This study compares the uncertainty derived from
evidential neural networks to that obtained from ensembles. Through
applications of classification of winter precipitation type and regression of
surface layer fluxes, we show that evidential deep learning models attain
predictive accuracy rivaling standard methods while robustly quantifying both
sources of uncertainty. We evaluate the uncertainty in terms of how well the
predictions are calibrated and how well the uncertainty correlates with
prediction error. Analyses of uncertainty in the context of the inputs reveal
sensitivities to underlying meteorological processes, facilitating
interpretation of the models. The conceptual simplicity, interpretability, and
computational efficiency of evidential neural networks make them highly
extensible, offering a promising approach for reliable and practical
uncertainty quantification in Earth system science modeling. In order to
encourage broader adoption of evidential deep learning in Earth System Science,
we have developed a new Python package, MILES-GUESS
(https://github.com/ai2es/miles-guess), that enables users to train and
evaluate both evidential and ensemble deep learning.
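The uncertainty decomposition described above can be illustrated with the standard evidential formulas. The sketch below is not the MILES-GUESS API; it only shows, under the usual Normal-Inverse-Gamma parameterization for regression (Amini et al.) and the Dirichlet parameterization for classification (Sensoy et al.), how aleatoric and epistemic uncertainty fall out of a single model's outputs:

```python
import numpy as np

def nig_uncertainties(gamma, nu, alpha, beta):
    """Decompose predictive uncertainty from Normal-Inverse-Gamma
    parameters (gamma, nu, alpha, beta) produced by an evidential
    regression head. Requires alpha > 1.

    aleatoric = E[sigma^2]  = beta / (alpha - 1)
    epistemic = Var[mu]     = beta / (nu * (alpha - 1))
    """
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return aleatoric, epistemic

def dirichlet_uncertainty(evidence):
    """Class probabilities and vacuity-style uncertainty for evidential
    classification. `evidence` is a non-negative length-K vector; the
    Dirichlet concentration is alpha_k = evidence_k + 1 and the
    uncertainty mass is u = K / sum(alpha)."""
    evidence = np.asarray(evidence, dtype=float)
    alpha = evidence + 1.0
    probs = alpha / alpha.sum()          # expected class probabilities
    u = evidence.size / alpha.sum()      # 1.0 when there is no evidence
    return probs, u
```

For example, with no evidence for any of four classes, `dirichlet_uncertainty([0, 0, 0, 0])` returns uniform probabilities and uncertainty 1.0, which is the "I don't know" case a softmax classifier cannot express.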
Related papers
- Deep Modeling of Non-Gaussian Aleatoric Uncertainty [4.969887562291159]
Deep learning offers promising new ways to accurately model aleatoric uncertainty in robotic estimation systems.
In this study, we formulate and evaluate three fundamental deep learning approaches for conditional probability density modeling.
Our results show that these deep learning methods can accurately capture complex uncertainty patterns, highlighting their potential for improving the reliability and robustness of estimation systems.
arXiv Detail & Related papers (2024-05-30T22:13:17Z)
- Uncertainty Quantification for Forward and Inverse Problems of PDEs via Latent Global Evolution [110.99891169486366]
We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
arXiv Detail & Related papers (2024-02-13T11:22:59Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification [0.0]
This paper presents preliminary results on uncertainty quantification for system identification with neural state-space models.
We frame the learning problem in a Bayesian probabilistic setting and obtain posterior distributions for the neural network's weights and outputs.
Based on the posterior, we construct credible intervals on the outputs and define a surprise index which can effectively diagnose usage of the model in a potentially dangerous out-of-distribution regime.
arXiv Detail & Related papers (2023-04-13T08:57:33Z)
- Fast Uncertainty Estimates in Deep Learning Interatomic Potentials [0.0]
We propose a method to estimate the predictive uncertainty based on a single neural network without the need for an ensemble.
We demonstrate that the quality of the uncertainty estimates matches those obtained from deep ensembles.
arXiv Detail & Related papers (2022-11-17T20:13:39Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions [121.10450359856242]
We develop a frequentist procedure that utilizes influence functions of a model's loss functional to construct a jackknife (or leave-one-out) estimator of predictive confidence intervals.
The discriminative jackknife (DJ) is applicable to a wide range of deep learning models, is easy to implement, and can be applied in a post-hoc fashion without interfering with model training or compromising its accuracy.
arXiv Detail & Related papers (2020-06-29T13:36:52Z)
- Deep Bayesian Gaussian Processes for Uncertainty Estimation in Electronic Health Records [30.65770563934045]
We merge features of the deep Bayesian learning framework with deep kernel learning to leverage the strengths of both methods for more comprehensive uncertainty estimation.
We show that our method is less susceptible to making overconfident predictions, especially for the minority class in imbalanced datasets.
arXiv Detail & Related papers (2020-03-23T10:36:52Z)
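Several entries above benchmark single-model uncertainty estimates against deep ensembles. For reference, a minimal ensemble baseline treats the spread of member predictions as an epistemic-uncertainty estimate; the toy members below (simple shifted functions, purely hypothetical) stand in for separately trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "ensemble": five members that would normally be
# independently trained networks; here each is a shifted sin curve.
members = [lambda x, b=b: np.sin(x) + b
           for b in rng.normal(0.0, 0.1, size=5)]

def ensemble_predict(x):
    """Return the ensemble mean (the prediction) and the standard
    deviation across members (the epistemic-uncertainty estimate that
    single-model evidential approaches try to match without running
    M forward passes)."""
    preds = np.array([m(x) for m in members])
    return preds.mean(axis=0), preds.std(axis=0)

mean, spread = ensemble_predict(np.array([0.0, 1.0]))
```

The trade-off motivating the evidential papers above is visible here: the ensemble needs one forward pass per member, whereas an evidential network produces both the prediction and the uncertainty in a single pass.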
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.