Learned harmonic mean estimation of the marginal likelihood with
normalizing flows
- URL: http://arxiv.org/abs/2307.00048v3
- Date: Fri, 19 Jan 2024 12:09:12 GMT
- Title: Learned harmonic mean estimation of the marginal likelihood with
normalizing flows
- Authors: Alicja Polanska, Matthew A. Price, Alessio Spurio Mancini, and Jason
D. McEwen
- Abstract summary: We introduce the use of normalizing flows to represent the importance sampling target distribution.
The code implementing the learned harmonic mean, which is publicly available, has been updated to support normalizing flows.
- Score: 6.219412541001482
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computing the marginal likelihood (also called the Bayesian model evidence)
is an important task in Bayesian model selection, providing a principled
quantitative way to compare models. The learned harmonic mean estimator solves
the exploding variance problem of the original harmonic mean estimator of the
marginal likelihood. The learned harmonic mean estimator learns an importance
sampling target distribution that approximates the optimal distribution. While
the approximation need not be highly accurate, it is critical that the
probability mass of the learned distribution is contained within the posterior
in order to avoid the exploding variance problem. In previous work, a bespoke
optimization problem was introduced when training models to ensure this
property is satisfied. In the current article we introduce the use of
normalizing flows to represent the importance sampling target distribution. A
flow-based model is trained on samples from the posterior by maximum likelihood
estimation. Then, the probability density of the flow is concentrated by
lowering the variance of the base distribution, i.e. by lowering its
"temperature", ensuring its probability mass is contained within the posterior.
This approach avoids the need for a bespoke optimization problem and careful
fine-tuning of parameters, resulting in a more robust method. Moreover, the use
of normalizing flows has the potential to scale to high dimensional settings.
We present preliminary experiments demonstrating the effectiveness of flows
for the learned harmonic mean estimator. The harmonic code implementing the
learned harmonic mean, which is publicly available, has been updated to
support normalizing flows.
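
For intuition, the estimator described above can be written compactly. The following is a sketch following the learned harmonic mean literature (symbols not defined in the abstract: $\theta_i$ are posterior samples, $\mathcal{L}$ the likelihood, $\pi$ the prior, $z$ the marginal likelihood, and $\varphi$ the learned, normalized target density, e.g. the temperature-lowered flow):

```latex
% Learned harmonic mean estimator of the reciprocal evidence rho = 1/z.
% It is unbiased when varphi is normalized with support inside the posterior;
% its variance stays finite only if the mass of varphi is contained there.
\hat{\rho} = \frac{1}{N} \sum_{i=1}^{N}
    \frac{\varphi(\theta_i)}{\mathcal{L}(\theta_i)\,\pi(\theta_i)},
\qquad \theta_i \sim p(\theta \mid d),
\qquad \mathbb{E}[\hat{\rho}] = \frac{1}{z},
\quad z = \int \mathcal{L}(\theta)\,\pi(\theta)\,\mathrm{d}\theta .
```

Taking $\varphi$ equal to the posterior itself gives the optimal (zero-variance) target, since the summand becomes the constant $1/z$; mass of $\varphi$ falling outside the posterior is what makes the ratio, and hence the variance, explode.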
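A minimal end-to-end sketch of the temperature trick follows, under stated assumptions: this is not the harmonic package API, the toy model is Gaussian so the evidence is known in closed form, and the trained flow is stood in for by an affine (Gaussian) maximum-likelihood fit, for which lowering the base temperature T < 1 reduces to shrinking the fitted covariance.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Toy illustration of the learned harmonic mean with a temperature-
# lowered density. Hypothetical, simplified setup: the normalizing
# flow is replaced by an affine (Gaussian) fit to posterior samples,
# so "lowering the base temperature" is a covariance shrinkage T < 1.

rng = np.random.default_rng(0)
d = 2

# Gaussian likelihood x Gaussian prior, so the evidence z is analytic.
prior_mean, prior_cov = np.zeros(d), 10.0 * np.eye(d)
like_mean, like_cov = np.ones(d), np.eye(d)
prior = multivariate_normal(prior_mean, prior_cov)
like = multivariate_normal(like_mean, like_cov)

# Conjugate Gaussian posterior (precisions add).
post_cov = np.linalg.inv(np.linalg.inv(prior_cov) + np.linalg.inv(like_cov))
post_mean = post_cov @ np.linalg.solve(like_cov, like_mean)
posterior = multivariate_normal(post_mean, post_cov)

# "Train the flow" by maximum likelihood on posterior samples: for an
# affine flow this is just the sample mean and covariance.
theta = posterior.rvs(size=20_000, random_state=rng)
mu_hat, cov_hat = theta.mean(axis=0), np.cov(theta.T)

# Concentrate the learned density inside the posterior by lowering
# the base temperature T (T = 1 recovers the plain fit).
T = 0.8
varphi = multivariate_normal(mu_hat, T * cov_hat)

# Learned harmonic mean estimate of rho = 1/z:
# rho_hat = mean_i[ varphi(theta_i) / (L(theta_i) * pi(theta_i)) ].
log_ratio = varphi.logpdf(theta) - like.logpdf(theta) - prior.logpdf(theta)
rho_hat = np.mean(np.exp(log_ratio))
print(f"estimated evidence: {1.0 / rho_hat:.6f}")

# Closed-form evidence for the Gaussian-Gaussian model:
# z = N(like_mean; prior_mean, like_cov + prior_cov).
z_true = multivariate_normal(prior_mean, like_cov + prior_cov).pdf(like_mean)
print(f"true evidence:      {z_true:.6f}")
```

Raising T back to 1 recovers the plain fit; values just below 1 trade a little extra variance for keeping the learned density's mass inside the posterior.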
Related papers
- Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based Models [0.0]
We show how diffusion-based models can be repurposed for performing principled, identifiable Bayesian inference.
We show how the maps needed for this inference can be learned via standard DBM training using a novel noise schedule.
The result is a class of highly expressive generative models, uniquely defined on a low-dimensional latent space.
arXiv Detail & Related papers (2024-07-11T19:58:19Z)
- Deep Evidential Learning for Bayesian Quantile Regression [3.6294895527930504]
It is desirable to have accurate uncertainty estimation from a single deterministic forward-pass model.
This paper proposes a deep Bayesian quantile regression model that can estimate the quantiles of a continuous target distribution without the Gaussian assumption.
arXiv Detail & Related papers (2023-08-21T11:42:16Z)
- Sobolev Space Regularised Pre Density Models [51.558848491038916]
We propose a new approach to non-parametric density estimation that is based on regularizing a Sobolev norm of the density.
This method is statistically consistent, and makes the inductive bias of the model clear and interpretable.
arXiv Detail & Related papers (2023-07-25T18:47:53Z)
- Building Normalizing Flows with Stochastic Interpolants [11.22149158986164]
A simple generative model based on a continuous-time normalizing flow between any pair of base and target distributions is proposed.
The velocity field of this flow is inferred from the probability current of a time-dependent distribution that interpolates between the base and the target in finite time.
arXiv Detail & Related papers (2022-09-30T16:30:31Z)
- Normalizing Flows for Interventional Density Estimation [18.640006398066188]
We propose a novel, fully-parametric deep learning method called Interventional Normalizing Flows.
We combine two normalizing flows, namely (i) a nuisance flow for estimating nuisance parameters and (ii) a target flow for parametric estimation of the density of potential outcomes.
arXiv Detail & Related papers (2022-09-13T17:56:13Z)
- Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Composing Normalizing Flows for Inverse Problems [89.06155049265641]
We propose a framework for approximate inference that estimates the target conditional as a composition of two flow models.
Our method is evaluated on a variety of inverse problems and is shown to produce high-quality samples with uncertainty.
arXiv Detail & Related papers (2020-02-26T19:01:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.