Towards Reliable Uncertainty Quantification via Deep Ensembles in
Multi-output Regression Task
- URL: http://arxiv.org/abs/2303.16210v4
- Date: Fri, 24 Nov 2023 03:56:54 GMT
- Title: Towards Reliable Uncertainty Quantification via Deep Ensembles in
Multi-output Regression Task
- Authors: Sunwoong Yang, Kwanjung Yee
- Abstract summary: This study aims to investigate the deep ensemble approach, an approximate Bayesian inference method, in the multi-output regression task.
A trend towards underestimation of uncertainty as the ensemble size increases is observed for the first time.
We propose a deep ensemble framework that applies a post-hoc calibration method to improve its uncertainty quantification performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study aims to comprehensively investigate the deep ensemble
approach, an approximate Bayesian inference method, in the multi-output
regression task of predicting the aerodynamic performance of a missile
configuration. To this end, the effect of the number of neural networks used in
the ensemble, a choice that previous studies have adopted blindly, is
scrutinized. As a result, an
obvious trend towards underestimation of uncertainty as it increases is
observed for the first time, and in this context, we propose the deep ensemble
framework that applies the post-hoc calibration method to improve its
uncertainty quantification performance. It is compared with Gaussian process
regression and is shown to have superior performance in terms of regression
accuracy ($\uparrow55\sim56\%$), reliability of estimated uncertainty
($\uparrow38\sim77\%$), and training efficiency ($\uparrow78\%$). Finally, the
potential impact of the suggested framework on Bayesian optimization is
briefly examined, indicating that a deep ensemble without calibration may lead to
unintended exploratory behavior. This UQ framework can be seamlessly applied
and extended to any regression task, as no special assumptions have been made
for the specific problem used in this study.
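The predictive distribution of a deep ensemble in regression is commonly taken as a uniform mixture of per-member Gaussians, and a post-hoc step can then rescale the predicted standard deviation. Below is a minimal sketch of that recipe: the mean/variance aggregation is the standard deep-ensemble one, while the scalar factor `s`, fitted on held-out data, is an illustrative stand-in for the paper's calibration method rather than its exact procedure.

```python
import numpy as np

def ensemble_predict(mus, sigmas):
    """Combine M Gaussian members N(mu_i, sigma_i^2) into a single
    mixture mean and standard deviation (standard deep-ensemble
    aggregation)."""
    mus = np.asarray(mus, dtype=float)         # member means, shape (M, ...)
    var = np.asarray(sigmas, dtype=float) ** 2  # member variances
    mu = mus.mean(axis=0)
    # total variance = aleatoric part (average member variance)
    #                + epistemic part (spread of member means)
    total_var = var.mean(axis=0) + mus.var(axis=0)
    return mu, np.sqrt(total_var)

def calibrate_sigma(sigma, s):
    """Post-hoc calibration via a scalar std multiplier s fitted on a
    held-out set (one simple common recipe, used here for illustration)."""
    return s * sigma
```

With three members predicting means 1.0, 1.2, 0.8 and stds 0.1, 0.2, 0.15, `ensemble_predict` returns the mixture mean 1.0 and a std that includes both the averaged member variance and the disagreement between members.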
Related papers
- Calibrated Probabilistic Forecasts for Arbitrary Sequences [58.54729945445505]
Real-world data streams can change unpredictably due to distribution shifts, feedback loops and adversarial actors.
We present a forecasting framework ensuring valid uncertainty estimates regardless of how data evolves.
arXiv Detail & Related papers (2024-09-27T21:46:42Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Discretization-Induced Dirichlet Posterior for Robust Uncertainty Quantification on Regression [17.49026509916207]
Uncertainty quantification is critical for deploying deep neural networks (DNNs) in real-world applications.
For vision regression tasks, current AuxUE designs are mainly adopted for aleatoric uncertainty estimates.
We propose a generalized AuxUE scheme for more robust uncertainty quantification on regression tasks.
arXiv Detail & Related papers (2023-08-17T15:54:11Z)
- Deep Anti-Regularized Ensembles provide reliable out-of-distribution uncertainty quantification [4.750521042508541]
Deep ensembles often return overconfident estimates outside the training domain.
We show that an ensemble of networks with large weights that still fit the training data is likely to meet both objectives: reliable in-domain predictions and increased out-of-distribution uncertainty.
We derive a theoretical framework for this approach and show that the proposed optimization can be seen as a "water-filling" problem.
arXiv Detail & Related papers (2023-04-08T15:25:12Z)
- Toward Robust Uncertainty Estimation with Random Activation Functions [3.0586855806896045]
We propose a novel approach for uncertainty quantification via ensembles, called Random Activation Functions (RAFs) Ensemble.
RAFs Ensemble outperforms state-of-the-art ensemble uncertainty quantification methods on both synthetic and real-world datasets.
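The core idea of a RAFs ensemble, assigning each member a randomly drawn activation function so that members disagree more and the ensemble spread is more informative, can be sketched as follows. The activation pool and helper function are illustrative assumptions, not the paper's implementation.

```python
import random
import numpy as np

# Candidate pool of activation functions. In a Random Activation
# Functions (RAFs) ensemble, each member samples its own activation,
# injecting structural diversity on top of random initialization.
ACTIVATIONS = {
    "relu": lambda x: np.maximum(0.0, x),
    "tanh": np.tanh,
    "sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
}

def assign_random_activations(n_members, seed=0):
    """Pick one activation name per ensemble member (illustrative)."""
    rng = random.Random(seed)
    return [rng.choice(sorted(ACTIVATIONS)) for _ in range(n_members)]
```

Each member would then be built and trained with its assigned activation, and predictions aggregated as in any deep ensemble.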
arXiv Detail & Related papers (2023-02-28T13:17:56Z)
- Model-Based Uncertainty in Value Functions [89.31922008981735]
We focus on characterizing the variance over values induced by a distribution over MDPs.
Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation.
We propose a new uncertainty Bellman equation whose solution converges to the true posterior variance over values.
arXiv Detail & Related papers (2023-02-24T09:18:27Z)
- Sample Efficient Deep Reinforcement Learning via Uncertainty Estimation [12.415463205960156]
In model-free deep reinforcement learning (RL) algorithms, using noisy value estimates to supervise policy evaluation and optimization is detrimental to the sample efficiency.
We provide a systematic analysis of the sources of uncertainty in the noisy supervision that occurs in RL.
We propose a method whereby two complementary uncertainty estimation methods account for both the Q-value and the environment stochasticity to better mitigate the negative impacts of noisy supervision.
arXiv Detail & Related papers (2022-01-05T15:46:06Z)
- Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent and provide an improved analysis under a more nuanced condition on the noise of gradients.
arXiv Detail & Related papers (2021-08-25T21:30:27Z)
- Uncertainty Prediction for Deep Sequential Regression Using Meta Models [4.189643331553922]
This paper describes a flexible method that can generate symmetric and asymmetric uncertainty estimates.
It makes no assumptions about stationarity and outperforms competitive baselines on both drift and non-drift scenarios.
arXiv Detail & Related papers (2020-07-02T19:27:17Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
- Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for calibrated uncertainty on a ReLU network is to be "a bit Bayesian."
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
arXiv Detail & Related papers (2020-02-24T08:52:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.