Reducing Estimation Uncertainty Using Normalizing Flows and Stratification
- URL: http://arxiv.org/abs/2602.10706v3
- Date: Mon, 16 Feb 2026 21:31:37 GMT
- Title: Reducing Estimation Uncertainty Using Normalizing Flows and Stratification
- Authors: Paweł Lorek, Rafał Nowak, Rafał Topolnicki, Tomasz Trzciński, Maciej Zięba, Aleksandra Krystecka
- Abstract summary: Current methodologies typically assume (semi-)parametric distributions such as Gaussian or mixed Gaussian, leading to significant estimation uncertainty if these assumptions do not hold. We propose a flow-based model, integrated with stratified sampling, that leverages a parametrized neural network to offer greater flexibility in modeling unknown data distributions. Our model shows a marked reduction in estimation uncertainty across multiple datasets, including high-dimensional (30 and 128) ones.
- Score: 35.18016233072556
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating the expectation of a real-valued function of a random variable from sample data is a critical aspect of statistical analysis, with far-reaching implications in various applications. Current methodologies typically assume (semi-)parametric distributions such as Gaussian or mixed Gaussian, leading to significant estimation uncertainty if these assumptions do not hold. We propose a flow-based model, integrated with stratified sampling, that leverages a parametrized neural network to offer greater flexibility in modeling unknown data distributions, thereby mitigating this limitation. Our model shows a marked reduction in estimation uncertainty across multiple datasets, including high-dimensional (30 and 128) ones, outperforming crude Monte Carlo estimators and Gaussian mixture models. Reproducible code is available at https://github.com/rnoxy/flowstrat.
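As a minimal illustration of the variance-reduction idea, the sketch below compares a crude Monte Carlo estimate of E[f(X)] with a stratified one. In the paper a normalizing flow learns the transport map from a latent space to the data distribution; here the Gaussian quantile function stands in for that map and `f` is a toy integrand, so this is a 1-D sketch of the principle, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
f = lambda x: x**2          # toy integrand; the paper targets general E[f(X)]
n = 10_000

# Crude Monte Carlo: sample X ~ N(0, 1) directly.
x = rng.normal(size=n)
crude = f(x).mean()

# Stratified sampling: split [0, 1] into k equal-probability strata,
# sample uniformly inside each stratum, and push the points through
# the inverse CDF. A normalizing flow plays this transport role in
# high dimensions; norm.ppf is the 1-D stand-in here.
k, m = 100, n // 100        # strata count and samples per stratum
u = (np.arange(k)[:, None] + rng.uniform(size=(k, m))) / k
stratified = f(norm.ppf(u)).mean()

print(crude, stratified)    # both unbiased; the stratified estimate varies far less across seeds
```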
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate. We stress that our estimators do not involve nonparametric function estimators and in particular do not rely on sample-size dependent smoothing parameters.
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
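A toy rendering of the self-consuming loop discussed above, assuming a 1-D kernel density estimator retrained on its own samples each generation; the paper's analysis is more general, so treat this as an illustration only.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
data = rng.normal(size=500)            # "real" data for generation 0

# Self-consuming loop: each generation fits a KDE to samples drawn
# from the previous generation's model, so estimation error compounds.
for gen in range(5):
    kde = gaussian_kde(data)
    data = kde.resample(500, seed=gen).ravel()
    print(gen, round(data.std(), 3))   # bandwidth noise inflates the spread over generations
```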
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
- Partially factorized variational inference for high-dimensional mixed models [0.0]
Variational inference is a popular way to perform such computations, especially in the Bayesian context. We show that standard mean-field variational inference dramatically underestimates posterior uncertainty in high dimensions. We then show how appropriately relaxing the mean-field assumption leads to methods whose uncertainty quantification does not deteriorate in high dimensions.
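The underestimation has a closed-form illustration for a Gaussian target: the optimal mean-field Gaussian factor has precision equal to the corresponding diagonal entry of the posterior precision matrix (a standard variational-inference result), which understates the true marginal variance whenever the posterior is correlated. A quick check on an assumed bivariate toy posterior:

```python
import numpy as np

rho = 0.9
Sigma = np.array([[1.0, rho], [rho, 1.0]])   # correlated Gaussian "posterior"
Lambda = np.linalg.inv(Sigma)                # precision matrix

# Optimal mean-field Gaussian factors: precision = diagonal of Lambda,
# so the approximating variance is 1 / Lambda_ii = 1 - rho**2 here.
mf_var = 1.0 / np.diag(Lambda)
true_var = np.diag(Sigma)

print(true_var, mf_var)   # [1. 1.] vs [0.19 0.19]: uncertainty badly underestimated
```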
arXiv Detail & Related papers (2023-12-20T16:12:37Z)
- Uncertainty quantification and out-of-distribution detection using surjective normalizing flows [46.51077762143714]
We propose a simple approach using surjective normalizing flows to identify out-of-distribution data sets in deep neural network models.
We show that our method can reliably discern out-of-distribution data from in-distribution data.
arXiv Detail & Related papers (2023-11-01T09:08:35Z)
- Deep Evidential Learning for Bayesian Quantile Regression [3.6294895527930504]
It is desirable to have accurate uncertainty estimation from a single deterministic forward-pass model.
This paper proposes a deep Bayesian quantile regression model that can estimate the quantiles of a continuous target distribution without the Gaussian assumption.
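Quantile regression of this kind is usually trained with the pinball (quantile) loss, which needs no Gaussian assumption; the paper's full evidential objective is richer, so the snippet below shows only the standard loss.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss: its expectation is minimized when
    y_pred is the tau-quantile of y_true's distribution."""
    err = y_true - y_pred
    return np.mean(np.maximum(tau * err, (tau - 1.0) * err))

# At tau = 0.9, under-predicting costs nine times more than over-predicting.
y = np.array([1.0, 2.0, 3.0])
print(pinball_loss(y, y_pred=2.0, tau=0.9))
```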
arXiv Detail & Related papers (2023-08-21T11:42:16Z)
- Estimating Regression Predictive Distributions with Sample Networks [17.935136717050543]
A common approach to model uncertainty is to choose a parametric distribution and fit the data to it using maximum likelihood estimation.
The chosen parametric form can be a poor fit to the data-generating distribution, resulting in unreliable uncertainty estimates.
We propose SampleNet, a flexible and scalable architecture for modeling uncertainty that avoids specifying a parametric form on the output distribution.
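One standard way to train a model that emits samples instead of distribution parameters is a proper scoring rule such as the energy score; whether SampleNet uses exactly this loss is not stated in the summary, so the sketch below is an assumed stand-in.

```python
import numpy as np

def energy_score(samples, y):
    """Energy score of predictive samples against observation y:
    E|X - y| - 0.5 * E|X - X'|, a proper scoring rule that
    requires no parametric density."""
    s = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(s - y))
    term2 = 0.5 * np.mean(np.abs(s[:, None] - s[None, :]))
    return term1 - term2

samples = np.random.default_rng(0).normal(1.0, 0.5, size=64)
print(energy_score(samples, y=1.2))
```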
arXiv Detail & Related papers (2022-11-24T17:23:29Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
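The Nadaraya-Watson estimate referred to above is the classical kernel-weighted label frequency; a minimal version for a classifier, with the Gaussian kernel and bandwidth as assumed choices:

```python
import numpy as np

def nw_label_probs(x_query, X, y, num_classes, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x): kernel-weighted
    frequencies of training labels around x_query. Flat, low-weight
    neighborhoods signal high predictive uncertainty."""
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth**2)          # Gaussian kernel weights
    probs = np.array([w[y == c].sum() for c in range(num_classes)])
    return probs / probs.sum()

X = np.random.default_rng(0).normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
print(nw_label_probs(np.array([2.0, 0.0]), X, y, num_classes=2))
```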
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
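A hypothetical rendering of the bias-constraint idea: penalize the squared empirical bias of a batch of estimates of the same unknown alongside the usual MSE, so that averaging several such estimates cancels error instead of compounding it. The function name, grouping, and weight `lam` are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def bias_constrained_loss(preds, theta, lam=1.0):
    # MSE term plus a penalty on the squared mean error (empirical
    # bias) over a batch of estimates of the same underlying theta.
    err = preds - theta
    return np.mean(err**2) + lam * np.mean(err, axis=0)**2

rng = np.random.default_rng(0)
preds = 3.0 + rng.normal(0.5, 1.0, size=256)   # biased estimates of theta = 3
print(bias_constrained_loss(preds, theta=3.0))
```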
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Uncertainty Intervals for Graph-based Spatio-Temporal Traffic Prediction [0.0]
We propose a Spatio-Temporal neural network that is trained to estimate a density given the measurements of previous timesteps, conditioned on a quantile.
Our method of density estimation is fully parameterised by our neural network and does not use a likelihood approximation internally.
This approach produces uncertainty estimates without the need to sample during inference, such as in Monte Carlo Dropout.
arXiv Detail & Related papers (2020-12-09T18:02:26Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that yields statistical accuracy guarantees based on the interplay between the deterministic convergence rate of the algorithm at the population level and its degree of (in)stability when applied to an empirical object based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z)
- Maximum likelihood estimation and uncertainty quantification for Gaussian process approximation of deterministic functions [10.319367855067476]
This article provides one of the first theoretical analyses in the context of Gaussian process regression with a noiseless dataset.
We show that the maximum likelihood estimation of the scale parameter alone provides significant adaptation against misspecification of the Gaussian process model.
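With the correlation structure held fixed, the scale parameter's MLE mentioned above has a closed form under the standard zero-mean GP model y ~ N(0, sigma^2 R): maximizing the Gaussian log-likelihood in sigma^2 gives sigma^2_hat = y^T R^{-1} y / n. A sketch with an assumed exponential kernel:

```python
import numpy as np

def scale_mle(y, R):
    """Closed-form MLE of sigma^2 in y ~ N(0, sigma^2 R) with R fixed:
    sigma^2_hat = y^T R^{-1} y / n."""
    return y @ np.linalg.solve(R, y) / len(y)

# Toy check: data generated with sigma^2 = 4 recovers a scale near 4.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
R = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # exponential correlation
L = np.linalg.cholesky(R + 1e-10 * np.eye(50))       # jitter for stability
y = 2.0 * (L @ rng.normal(size=50))
print(scale_mle(y, R))
```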
arXiv Detail & Related papers (2020-01-29T17:20:21Z)