Multi-level Monte Carlo Dropout for Efficient Uncertainty Quantification
- URL: http://arxiv.org/abs/2601.13272v1
- Date: Mon, 19 Jan 2026 18:17:25 GMT
- Title: Multi-level Monte Carlo Dropout for Efficient Uncertainty Quantification
- Authors: Aaron Pim, Tristan Pryer
- Abstract summary: We develop a multilevel Monte Carlo (MLMC) framework for uncertainty quantification. Treating dropout masks as a source of epistemic randomness, we define a fidelity hierarchy by the number of forward passes used to estimate predictive moments. We derive explicit bias, variance and effective cost expressions, together with sample-allocation rules across levels.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop a multilevel Monte Carlo (MLMC) framework for uncertainty quantification with Monte Carlo dropout. Treating dropout masks as a source of epistemic randomness, we define a fidelity hierarchy by the number of stochastic forward passes used to estimate predictive moments. We construct coupled coarse--fine estimators by reusing dropout masks across fidelities, yielding telescoping MLMC estimators for both predictive means and predictive variances that remain unbiased for the corresponding dropout-induced quantities while reducing sampling variance at fixed evaluation budget. We derive explicit bias, variance and effective cost expressions, together with sample-allocation rules across levels. Numerical experiments on forward and inverse PINNs--Uzawa benchmarks confirm the predicted variance rates and demonstrate efficiency gains over single-level MC-dropout at matched cost.
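The abstract's telescoping construction can be illustrated with a minimal sketch (a toy stand-in, not the authors' code): the "model" here is a hypothetical fixed linear map with input dropout, level l averages M0*2**l stochastic forward passes, and each coarse estimate reuses the first half of the fine level's dropout passes, so the coupled level differences telescope to the high-fidelity predictive mean.

```python
import numpy as np


def mc_dropout_pass(x, rng, p=0.5):
    """One stochastic forward pass of a toy dropout 'network': a fixed
    linear map with a Bernoulli dropout mask on the input (illustrative
    stand-in for model(x) with dropout active)."""
    w = np.array([0.7, -1.2, 0.4])       # fixed weights, chosen arbitrarily
    mask = rng.random(x.shape) > p       # Bernoulli(1-p) dropout mask
    return w @ (x * mask) / (1 - p)      # inverted-dropout scaling


def mlmc_mean(x, L=4, M0=4, reps=200, rng=None):
    """Telescoping MLMC estimate of the dropout-predictive mean.
    Level l averages M0 * 2**l passes; the coarse estimate at each level
    reuses the first half of the fine level's passes (shared dropout
    masks), which couples the coarse--fine pair."""
    rng = rng or np.random.default_rng()
    est = 0.0
    for l in range(L + 1):
        Ml = M0 * 2**l
        diffs = []
        for _ in range(reps):
            passes = np.array([mc_dropout_pass(x, rng) for _ in range(Ml)])
            fine = passes.mean()
            if l == 0:
                diffs.append(fine)                   # base level: plain average
            else:
                coarse = passes[: Ml // 2].mean()    # reused masks: coupling
                diffs.append(fine - coarse)          # telescoping correction
        est += np.mean(diffs)
    return est
```

Because each level's correction is unbiased for the difference of predictive means at adjacent fidelities, the sum remains unbiased for the finest-level dropout mean while the corrections have rapidly shrinking variance.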
Related papers
- Don't Throw Away Your Beams: Improving Consistency-based Uncertainties in LLMs via Beam Search [111.6996614063716]
We introduce a new family of methods that employ beam search to generate candidates for consistency-based uncertainty estimates. We empirically evaluate our approach on six QA datasets and find that its consistent improvements over multinomial sampling lead to state-of-the-art UQ performance.
arXiv Detail & Related papers (2025-12-10T11:24:29Z) - Surrogate Modelling of Proton Dose with Monte Carlo Dropout Uncertainty Quantification [0.0]
We develop a neural surrogate that integrates Monte Carlo dropout to provide fast, differentiable dose predictions. The approach achieves significant speedups over MC while retaining uncertainty information. It is suitable for integration into robust planning, adaptive replanning and uncertainty-aware optimisation in proton therapy.
arXiv Detail & Related papers (2025-09-16T19:54:49Z) - Inference on covariance structure in high-dimensional multi-view data [8.549941732144035]
Posterior computation is conducted via expensive and brittle Markov chain Monte Carlo sampling or variational approximations. Our proposed methodology employs spectral decompositions to estimate and align latent factors that are active in at least one view. We show excellent performance in simulations, including accurate uncertainty, and apply the methodology to integrate four high-dimensional views from a multi-omics dataset of cancer cell samples.
arXiv Detail & Related papers (2025-09-02T19:20:42Z) - Conformal Sets in Multiple-Choice Question Answering under Black-Box Settings with Provable Coverage Guarantees [5.09580026885155]
We propose a frequency-based uncertainty quantification method under black-box settings. Our approach involves multiple independent samplings of the model's output distribution for each input. We show that frequency-based PE outperforms logit-based PE in distinguishing between correct and incorrect predictions.
arXiv Detail & Related papers (2025-08-07T16:22:49Z) - Simple Yet Effective: An Information-Theoretic Approach to Multi-LLM Uncertainty Quantification [9.397157329808254]
MUSE is a simple information-theoretic method to identify and aggregate well-calibrated subsets of large language models. Experiments on binary prediction tasks demonstrate improved calibration and predictive performance compared to single-model and naive ensemble baselines.
arXiv Detail & Related papers (2025-07-09T19:13:25Z) - Targeted tuning of random forests for quantile estimation and prediction intervals [0.0]
We present a novel tuning procedure for random forests (RFs) that improves the accuracy of estimated quantiles. We show that QCL tuning results in quantile estimates with more accurate coverage probabilities than those achieved using default parameter values.
arXiv Detail & Related papers (2025-07-02T07:32:59Z) - Prediction-Enhanced Monte Carlo: A Machine Learning View on Control Variate [8.80905230309764]
Prediction-Enhanced Monte Carlo (PEMC) is a framework that leverages modern ML models as predictors, using cheap and parallelizable simulation as features, to output unbiased evaluation with reduced variance and runtime. PEMC consistently reduces variance while preserving unbiasedness, highlighting its potential as a powerful enhancement to standard Monte Carlo baselines. We illustrate PEMC's broader efficacy and versatility through examples including equity derivatives, such as variance swaps under local volatility models, and interest-rate derivatives, such as swaption pricing under the Heath-Jarrow-Morton (HJM) interest-rate model.
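The control-variate idea behind PEMC can be sketched as follows (a toy stand-in, not the paper's models: `expensive_payoff` and `cheap_predictor` are hypothetical): an imperfect predictor g is subtracted from the expensive simulation output on a small paired sample, and E[g] is added back using a much larger sample of cheap features only, keeping the estimator unbiased while shrinking its variance.

```python
import numpy as np

rng = np.random.default_rng(1)


def expensive_payoff(z):
    """Stand-in for a costly simulation output Y = f(Z) (illustrative)."""
    return np.exp(0.2 * z) + 0.1 * z**2


def cheap_predictor(z):
    """Stand-in for a trained ML predictor g(Z) that approximates f(Z);
    here a deliberately imperfect quadratic fit (hypothetical model)."""
    return 1.0 + 0.2 * z + 0.12 * z**2


# n expensive runs with paired predictions; N >> n cheap feature-only draws
n, N = 500, 200_000
z_paired = rng.standard_normal(n)
z_cheap = rng.standard_normal(N)

# plain Monte Carlo: average the expensive output alone
plain_mc = expensive_payoff(z_paired).mean()

# PEMC-style control variate: residual mean + independent estimate of E[g];
# unbiased because the two samples are independent
pemc = (
    (expensive_payoff(z_paired) - cheap_predictor(z_paired)).mean()
    + cheap_predictor(z_cheap).mean()
)
```

The closer g tracks f, the smaller the residual variance, so most of the sampling error is shifted onto the cheap large-sample term.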
arXiv Detail & Related papers (2024-12-15T17:41:38Z) - Semiparametric conformal prediction [79.6147286161434]
We construct a conformal prediction set accounting for the joint correlation structure of the vector-valued non-conformity scores. We flexibly estimate the joint cumulative distribution function (CDF) of the scores. Our method yields desired coverage and competitive efficiency on a range of real-world regression problems.
arXiv Detail & Related papers (2024-11-04T14:29:02Z) - Unveiling the Statistical Foundations of Chain-of-Thought Prompting Methods [59.779795063072655]
Chain-of-Thought (CoT) prompting and its variants have gained popularity as effective methods for solving multi-step reasoning problems.
We analyze CoT prompting from a statistical estimation perspective, providing a comprehensive characterization of its sample complexity.
arXiv Detail & Related papers (2024-08-25T04:07:18Z) - RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out-of-Distribution Robustness [94.69774317059122]
We show that the effectiveness of the well celebrated Mixup can be further improved if instead of using it as the sole learning objective, it is utilized as an additional regularizer to the standard cross-entropy loss.
This simple change not only provides much improved accuracy but also significantly improves the quality of the predictive uncertainty estimation of Mixup.
arXiv Detail & Related papers (2022-06-29T09:44:33Z) - Batch Stationary Distribution Estimation [98.18201132095066]
We consider the problem of approximating the stationary distribution of an ergodic Markov chain given a set of sampled transitions.
We propose a consistent estimator that is based on recovering a correction ratio function over the given data.
arXiv Detail & Related papers (2020-03-02T09:10:01Z) - Estimating Gradients for Discrete Random Variables by Sampling without Replacement [93.09326095997336]
We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement.
We show that our estimator can be derived as the Rao-Blackwellization of three different estimators.
arXiv Detail & Related papers (2020-02-14T14:15:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.