Chilled Sampling for Uncertainty Quantification: A Motivation From A
Meteorological Inverse Problem
- URL: http://arxiv.org/abs/2207.03182v3
- Date: Wed, 25 Oct 2023 08:05:48 GMT
- Title: Chilled Sampling for Uncertainty Quantification: A Motivation From A
Meteorological Inverse Problem
- Authors: Patrick Héas and Frédéric Cérou and Mathias Rousset
- Abstract summary: We study the evaluation of the (expected) estimation errors using gradient-based Markov Chain Monte Carlo algorithms.
The main contribution is to propose a general strategy, called here chilling, which amounts to sampling a local approximation of the posterior distribution in the neighborhood of a point estimate.
- Score: 1.688134675717698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Atmospheric motion vectors (AMVs) extracted from satellite imagery are the
only wind observations with good global coverage. They are important features
for feeding numerical weather prediction (NWP) models. Several Bayesian models
have been proposed to estimate AMVs. Although a thorough characterization of
the estimation errors is critical for correct assimilation into NWP models,
very few methods provide one. The difficulty of estimating errors stems from
the specificity of the posterior distribution, which is both very
high-dimensional and highly ill-conditioned due to a singular likelihood.
Motivated by this
difficult inverse problem, this work studies the evaluation of the (expected)
estimation errors using gradient-based Markov Chain Monte Carlo (MCMC)
algorithms. The main contribution is to propose a general strategy, called here
chilling, which amounts to sampling a local approximation of the posterior
distribution in the neighborhood of a point estimate. From a theoretical point
of view, we show that, under regularity assumptions, the family of chilled
posterior distributions converges in distribution, as the temperature
decreases, to an optimal Gaussian approximation centered at the Maximum A
Posteriori (MAP) point estimate, namely the Laplace approximation. Chilled
sampling therefore provides access to this approximation, which is generally
out of reach in such high-dimensional nonlinear contexts. From an empirical
perspective, we evaluate the proposed approach against quantitative Bayesian
criteria. Our
numerical simulations are performed on synthetic and real meteorological data.
They reveal that the proposed chilling not only yields a significant gain in
the accuracy of the point estimates and of their associated expected errors,
but also substantially accelerates the convergence of the MCMC algorithms.
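To make the idea concrete, here is a minimal sketch of chilled sampling on a toy two-dimensional potential, using a Metropolis-adjusted Langevin (MALA) chain as the gradient-based MCMC algorithm. The potential U, the temperature schedule, and the step-size rule are illustrative assumptions, not the paper's meteorological model; the point is only to show the tempered target exp(-U(x)/T) and its Laplace limit.

```python
import numpy as np

def U(x):
    """Toy negative log-posterior (illustrative stand-in for the AMV model)."""
    return 0.5 * x[0] ** 2 + 2.0 * (x[1] - x[0] ** 2) ** 2

def grad_U(x):
    return np.array([
        x[0] - 8.0 * x[0] * (x[1] - x[0] ** 2),
        4.0 * (x[1] - x[0] ** 2),
    ])

def mala_chilled(T, n_steps=20_000, step=None, seed=0):
    """MALA chain targeting the chilled posterior pi_T(x) ~ exp(-U(x)/T)."""
    rng = np.random.default_rng(seed)
    step = 0.05 * T if step is None else step   # shrink the step with temperature
    x = np.zeros(2)
    samples = np.empty((n_steps, 2))
    for i in range(n_steps):
        mean_fwd = x - step * grad_U(x) / T     # Langevin drift of the tempered target
        prop = mean_fwd + np.sqrt(2.0 * step) * rng.standard_normal(2)
        mean_bwd = prop - step * grad_U(prop) / T
        log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (4.0 * step)
        log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (4.0 * step)
        log_alpha = (U(x) - U(prop)) / T + log_q_bwd - log_q_fwd
        if np.log(rng.random()) < log_alpha:    # Metropolis correction
            x = prop
        samples[i] = x
    return samples

# As T decreases, the chain concentrates at the MAP (here the origin) and the
# covariance of the fluctuations, rescaled by 1/T, approaches the Laplace
# approximation H(x_MAP)^-1 = diag(1, 1/4) for this toy potential.
for T in (1.0, 0.1, 0.01):
    s = mala_chilled(T)[5_000:]                 # discard burn-in
    print(f"T={T}: mean={s.mean(axis=0)}, cov/T=\n{np.cov(s.T) / T}")
```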
Related papers
- In-Context Parametric Inference: Point or Distribution Estimators? [66.22308335324239]
Our experiments indicate that amortized point estimators generally outperform posterior inference, though the latter remain competitive in some low-dimensional problems.
arXiv Detail & Related papers (2025-02-17T10:00:24Z)
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
arXiv Detail & Related papers (2024-06-19T01:57:21Z)
- ExtremeCast: Boosting Extreme Value Prediction for Global Weather Forecast [57.6987191099507]
We introduce Exloss, a novel loss function that performs asymmetric optimization and highlights extreme values to obtain accurate extreme weather forecasts.
We also introduce ExBooster, which captures the uncertainty in prediction outcomes by employing multiple random samples.
Our solution can achieve state-of-the-art performance in extreme weather prediction, while maintaining the overall forecast accuracy comparable to the top medium-range forecast models.
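The summary does not give the form of Exloss; the sketch below is a generic asymmetric, extreme-weighted squared error in the same spirit, with the quantile threshold and the under-prediction weight being hypothetical choices, not the paper's.

```python
import numpy as np

def extreme_weighted_loss(pred, target, tau=0.9, under_weight=4.0):
    """Illustrative asymmetric loss (not the paper's exact Exloss): squared
    error in which under-predictions of extreme targets are penalized more."""
    err = pred - target
    extreme = target > np.quantile(target, tau)      # flag the extreme targets
    w = np.where(extreme & (err < 0.0), under_weight, 1.0)
    return float(np.mean(w * err ** 2))

rng = np.random.default_rng(0)
y = rng.gumbel(size=1000)                  # heavy-tailed "weather" targets
print(extreme_weighted_loss(0.8 * y, y))   # under-predicting extremes costs more
print(extreme_weighted_loss(1.2 * y, y))   # over-predicting them costs less
```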
arXiv Detail & Related papers (2024-02-02T10:34:13Z)
- Learned harmonic mean estimation of the marginal likelihood with normalizing flows [6.219412541001482]
We introduce the use of normalizing flows to represent the importance sampling target distribution.
The code implementing the learned harmonic mean, which is publicly available, has been updated to now support normalizing flows.
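For background, the learned harmonic mean rests on the identity that, for any normalized density phi, the posterior expectation of phi(theta) / (likelihood * prior) equals 1/z, where z is the marginal likelihood; the normalizing flow supplies phi. A minimal sketch on a conjugate Gaussian toy problem, where a narrowed Gaussian stands in for the trained flow:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Conjugate toy model: prior N(0,1), likelihood N(theta,1), one datum y.
y = 1.5
post_mean, post_var = y / 2.0, 0.5                  # exact posterior N(y/2, 1/2)
z_true = norm.pdf(y, loc=0.0, scale=np.sqrt(2.0))   # exact evidence: y ~ N(0, 2)

theta = rng.normal(post_mean, np.sqrt(post_var), 100_000)  # posterior samples

# Harmonic mean identity: E_post[ phi / (L * prior) ] = 1 / z for normalized phi.
# A slightly narrowed Gaussian plays the role of the trained flow, which keeps
# the estimator's variance finite (the point of learning the target).
phi = norm.pdf(theta, loc=post_mean, scale=0.9 * np.sqrt(post_var))
L = norm.pdf(y, loc=theta, scale=1.0)
prior = norm.pdf(theta, loc=0.0, scale=1.0)

z_hat = 1.0 / np.mean(phi / (L * prior))
print(z_true, z_hat)   # the two should agree closely
```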
arXiv Detail & Related papers (2023-06-30T18:00:02Z)
- Scalable method for Bayesian experimental design without integrating over posterior distribution [0.0]
We address the computational efficiency of solving A-optimal Bayesian experimental design problems.
A-optimality is a widely used and easy-to-interpret criterion for Bayesian experimental design.
This study presents a novel likelihood-free approach to the A-optimal experimental design.
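For intuition only: in the linear-Gaussian special case, the A-optimality criterion (the trace of the posterior covariance) is available in closed form, which the sketch below evaluates for a few hypothetical candidate designs. The paper's likelihood-free approach targets exactly the settings where such a closed form is unavailable.

```python
import numpy as np

def a_criterion(A, noise_var=0.1, prior_var=1.0):
    """A-optimality for the linear-Gaussian model y = A x + noise:
    the trace of the posterior covariance (smaller is better)."""
    d = A.shape[1]
    post_precision = A.T @ A / noise_var + np.eye(d) / prior_var
    return float(np.trace(np.linalg.inv(post_precision)))

rng = np.random.default_rng(0)
candidates = [rng.standard_normal((5, 3)) for _ in range(4)]  # 4 candidate designs
scores = [a_criterion(A) for A in candidates]
print(scores, "-> pick design", int(np.argmin(scores)))
```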
arXiv Detail & Related papers (2023-06-30T12:40:43Z)
- Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z)
- Probabilistic Mass Mapping with Neural Score Estimation [4.079848600120986]
We introduce a novel methodology for efficient sampling of the high-dimensional Bayesian posterior of the weak lensing mass-mapping problem.
We demonstrate the accuracy of the method on simulations and then apply it to the mass reconstruction of the HST/ACS COSMOS field.
arXiv Detail & Related papers (2022-01-14T17:07:48Z)
- Residual Overfit Method of Exploration [78.07532520582313]
We propose an approximate exploration methodology based on fitting only two point estimates, one tuned and one overfit.
The approach drives exploration towards actions where the overfit model exhibits the most overfitting compared to the tuned model.
We compare ROME against a set of established contextual bandit methods on three datasets and find it to be among the best-performing.
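One plausible way to instantiate this rule (the additive bonus below is an assumption, not necessarily the paper's exact estimator) is to score each action by the tuned estimate plus the magnitude of the tuned/overfit residual:

```python
import numpy as np

def rome_action(tuned_rewards, overfit_rewards, bonus_scale=1.0):
    """Pick an action with a residual-overfit exploration bonus
    (illustrative form; the paper's exact estimator may differ)."""
    residual = np.abs(overfit_rewards - tuned_rewards)  # proxy for epistemic uncertainty
    return int(np.argmax(tuned_rewards + bonus_scale * residual))

tuned = np.array([0.4, 0.5, 0.45])
overfit = np.array([0.4, 0.5, 0.9])   # the models disagree most on action 2
print(rome_action(tuned, overfit))    # -> 2: explore where overfitting is largest
```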
arXiv Detail & Related papers (2021-10-06T17:05:33Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for resolving such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
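For reference, the CNML distribution that ACNML amortizes scores each candidate label by the best likelihood attainable when that label is appended to the training data, renormalized over labels; this is the standard CNML definition (the exact per-label maximization is what makes it expensive, and what ACNML approximates):

```latex
% CNML: each candidate label y for a new input x_{n+1} is scored by refitting
% theta with (x_{n+1}, y) added to the training data D, then renormalizing.
\hat\theta_y = \arg\max_\theta \; p(y \mid x_{n+1}, \theta)
               \prod_{i=1}^{n} p(y_i \mid x_i, \theta),
\qquad
p_{\mathrm{CNML}}(y \mid x_{n+1}) =
  \frac{p(y \mid x_{n+1}, \hat\theta_y)}
       {\sum_{y'} p(y' \mid x_{n+1}, \hat\theta_{y'})}.
```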
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Estimating Basis Functions in Massive Fields under the Spatial Mixed Effects Model [8.528384027684194]
For massive datasets, fixed rank kriging using the Expectation-Maximization (EM) algorithm for estimation has been proposed as an alternative to the usual but computationally prohibitive kriging method.
We develop an alternative method that utilizes the Spatial Mixed Effects (SME) model, but allows for additional flexibility by estimating the range of the spatial dependence between the observations and the knots via an Alternating Expectation Conditional Maximization (AECM) algorithm.
Experiments show that our methodology improves estimation without sacrificing prediction accuracy while also minimizing the additional computational burden of extra parameter estimation.
arXiv Detail & Related papers (2020-03-12T19:36:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.