Unreliable Uncertainty Estimates with Monte Carlo Dropout
- URL: http://arxiv.org/abs/2512.14851v1
- Date: Tue, 16 Dec 2025 19:14:57 GMT
- Title: Unreliable Uncertainty Estimates with Monte Carlo Dropout
- Authors: Aslak Djupskås, Alexander Johannes Stasik, Signe Riemer-Sørensen
- Abstract summary: Monte Carlo dropout (MCD) was proposed as an efficient approximation to Bayesian inference in deep learning. We empirically investigate its ability to capture true uncertainty. We find that MCD struggles to accurately reflect the underlying true uncertainty.
- Score: 42.50242605797505
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Reliable uncertainty estimation is crucial for machine learning models, especially in safety-critical domains. While exact Bayesian inference offers a principled approach, it is often computationally infeasible for deep neural networks. Monte Carlo dropout (MCD) was proposed as an efficient approximation to Bayesian inference in deep learning by applying neuron dropout at inference time \citep{gal2016dropout}. Hence, the method generates multiple sub-models yielding a distribution of predictions to estimate uncertainty. We empirically investigate its ability to capture true uncertainty and compare it to Gaussian Processes (GP) and Bayesian Neural Networks (BNN). We find that MCD struggles to accurately reflect the underlying true uncertainty, particularly failing to capture increased uncertainty in extrapolation and interpolation regions as observed in Bayesian models. The findings suggest that uncertainty estimates from MCD, as implemented and evaluated in these experiments, are not as reliable as those from traditional Bayesian approaches for capturing epistemic and aleatoric uncertainty.
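The inference-time procedure the abstract describes can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the paper's implementation: the network architecture, dropout rate, and the helper `mc_dropout_predict` are assumptions chosen for illustration. The key step is keeping dropout layers stochastic at prediction time and aggregating repeated forward passes into a predictive mean and standard deviation.

```python
import torch
import torch.nn as nn

# Hypothetical regression network with dropout layers; the architecture
# and dropout rate are illustrative assumptions, not the paper's setup.
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=100):
    """Monte Carlo dropout: repeated stochastic forward passes.

    Each pass samples a different sub-model (different dropped neurons),
    so the spread of predictions serves as an uncertainty estimate.
    """
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

x = torch.linspace(-3.0, 3.0, 50).unsqueeze(1)
mean, std = mc_dropout_predict(model, x)
```

Note that `model.train()` is used purely to keep the `nn.Dropout` layers sampling; no gradients or parameter updates occur inside `torch.no_grad()`.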
Related papers
- Uncertainty Quantification for Deep Regression using Contextualised Normalizing Flows [1.8899300124593648]
We introduce MCNF, a novel uncertainty quantification method that produces both prediction intervals and the full conditioned predictive distribution. MCNF operates on top of the underlying trained predictive model; thus, no predictive model retraining is needed. We provide experimental evidence that the MCNF-based uncertainty estimate is well calibrated, is competitive with state-of-the-art uncertainty quantification methods, and provides richer information for downstream decision-making tasks.
arXiv Detail & Related papers (2025-11-30T11:08:40Z) - Cooperative Bayesian and variance networks disentangle aleatoric and epistemic uncertainties [0.0]
Real-world data contains aleatoric uncertainty - irreducible noise arising from imperfect measurements or from incomplete knowledge about the data generation process. Mean variance estimation (MVE) networks can learn this type of uncertainty but require ad-hoc regularization strategies to avoid overfitting. We propose to train a variance network with a Bayesian neural network and demonstrate that the resulting model disentangles aleatoric and epistemic uncertainties while improving the mean estimation.
arXiv Detail & Related papers (2025-05-05T15:50:52Z) - Bayesian meta learning for trustworthy uncertainty quantification [3.683202928838613]
We propose Trust-Bayes, a novel optimization framework for Bayesian meta learning.
We characterize the lower bounds of the probabilities of the ground truth being captured by the specified intervals.
We analyze the sample complexity with respect to the feasible probability for trustworthy uncertainty quantification.
arXiv Detail & Related papers (2024-07-27T15:56:12Z) - Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z) - One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z) - Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z) - Integrating Uncertainty into Neural Network-based Speech Enhancement [27.868722093985006]
Supervised masking approaches in the time-frequency domain aim to employ deep neural networks to estimate a multiplicative mask to extract clean speech.
This leads to a single estimate for each input without any guarantees or measures of reliability.
We study the benefits of modeling uncertainty in clean speech estimation.
arXiv Detail & Related papers (2023-05-15T15:55:12Z) - Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully/semi/weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z) - Deep Bayesian Gaussian Processes for Uncertainty Estimation in Electronic Health Records [30.65770563934045]
We merge features of the deep Bayesian learning framework with deep kernel learning to leverage the strengths of both methods for more comprehensive uncertainty estimation.
We show that our method is less susceptible to making overconfident predictions, especially for the minority class in imbalanced datasets.
arXiv Detail & Related papers (2020-03-23T10:36:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.