When three experiments are better than two: Avoiding intractable correlated aleatoric uncertainty by leveraging a novel bias--variance tradeoff
- URL: http://arxiv.org/abs/2509.04363v1
- Date: Thu, 04 Sep 2025 16:23:54 GMT
- Title: When three experiments are better than two: Avoiding intractable correlated aleatoric uncertainty by leveraging a novel bias--variance tradeoff
- Authors: Paul Scherer, Andreas Kirsch, Jake P. Taylor-King
- Abstract summary: Real-world experimental scenarios are characterized by the presence of heteroskedastic aleatoric uncertainty. We propose novel active learning strategies that directly reduce the bias between experimental rounds.
- Score: 1.1609229408259252
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Real-world experimental scenarios are characterized by the presence of heteroskedastic aleatoric uncertainty, and this uncertainty can be correlated in batched settings. The bias--variance tradeoff can be used to write the expected mean squared error between a model distribution and a ground-truth random variable as the sum of an epistemic uncertainty term, the bias squared, and an aleatoric uncertainty term. We leverage this relationship to propose novel active learning strategies that directly reduce the bias between experimental rounds, considering model systems both with and without noise. Finally, we investigate methods to leverage historical data in a quadratic manner through the use of a novel cobias--covariance relationship, which naturally proposes a mechanism for batching through an eigendecomposition strategy. When our difference-based method leveraging the cobias--covariance relationship is utilized in a batched setting (with a quadratic estimator), we outperform a number of canonical methods including BALD and Least Confidence.
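The abstract's decomposition — expected mean squared error between a model distribution and a ground-truth random variable equals an epistemic variance term, plus the bias squared, plus an aleatoric variance term — can be checked numerically. The sketch below is a minimal Monte Carlo illustration, assuming a 1-D Gaussian model distribution and an independent Gaussian ground truth (the distributions and parameters are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: model predictive distribution vs. ground-truth variable.
mu_model, sigma_model = 1.2, 0.5   # model mean and epistemic spread
mu_true,  sigma_true  = 1.0, 0.3   # ground-truth mean and aleatoric noise

n = 2_000_000
y_hat = rng.normal(mu_model, sigma_model, n)   # draws from the model distribution
y     = rng.normal(mu_true,  sigma_true,  n)   # independent draws from the ground truth

mse = np.mean((y_hat - y) ** 2)                # E[(y_hat - y)^2]

epistemic = sigma_model ** 2                   # Var(y_hat)
bias_sq   = (mu_model - mu_true) ** 2          # (E[y_hat] - E[y])^2
aleatoric = sigma_true ** 2                    # Var(y)

# The three-term decomposition holds when y_hat and y are independent.
assert abs(mse - (epistemic + bias_sq + aleatoric)) < 1e-2
```

Here the exact sum is 0.25 + 0.04 + 0.09 = 0.38, and the sampled MSE agrees to within Monte Carlo error; in a correlated batched setting (the paper's focus), an extra covariance term between `y_hat` and `y` would appear.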
Related papers
- Uncertainty Estimation using Variance-Gated Distributions [0.6340400318304492]
We propose an intuitive framework for uncertainty estimation and decomposition based on the signal-to-noise ratio of class probability distributions. We introduce a variance-gated measure that scales predictions by a confidence factor derived from ensembles.
arXiv Detail & Related papers (2025-09-07T16:19:21Z)
- Cooperative Bayesian and variance networks disentangle aleatoric and epistemic uncertainties [0.0]
Real-world data contains aleatoric uncertainty - irreducible noise arising from imperfect measurements or from incomplete knowledge about the data generation process. Mean variance estimation (MVE) networks can learn this type of uncertainty but require ad-hoc regularization strategies to avoid overfitting. We propose to train a variance network with a Bayesian neural network and demonstrate that the resulting model disentangles aleatoric and epistemic uncertainties while improving the mean estimation.
arXiv Detail & Related papers (2025-05-05T15:50:52Z)
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over meta-analysis-based methods as heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed procedure accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- Learning Correspondence Uncertainty via Differentiable Nonlinear Least Squares [47.83169780113135]
We propose a differentiable nonlinear least squares framework to account for uncertainty in relative pose estimation from feature correspondences.
We evaluate our approach on synthetic, as well as the KITTI and EuRoC real-world datasets.
arXiv Detail & Related papers (2023-05-16T15:21:09Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- A One-step Approach to Covariate Shift Adaptation [82.01909503235385]
A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution.
We propose a novel one-step approach that jointly learns the predictive model and the associated weights in one optimization.
arXiv Detail & Related papers (2020-07-08T11:35:47Z)
- On lower bounds for the bias-variance trade-off [0.0]
It is a common phenomenon that for high-dimensional statistical models, rate-optimal estimators balance squared bias and variance.
We propose a general strategy to obtain lower bounds on the variance of any estimator with bias smaller than a prespecified bound.
This shows to what extent the bias-variance trade-off is unavoidable and allows one to quantify the loss of performance for methods that do not obey it.
arXiv Detail & Related papers (2020-05-30T14:07:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.