Optimally-Weighted Estimators of the Maximum Mean Discrepancy for
Likelihood-Free Inference
- URL: http://arxiv.org/abs/2301.11674v4
- Date: Wed, 10 May 2023 14:19:41 GMT
- Title: Optimally-Weighted Estimators of the Maximum Mean Discrepancy for
Likelihood-Free Inference
- Authors: Ayush Bharti, Masha Naslidnyk, Oscar Key, Samuel Kaski,
  François-Xavier Briol
- Abstract summary: Likelihood-free inference methods typically make use of a distance between simulated and real data.
The maximum mean discrepancy (MMD) is commonly estimated at a root-$m$ rate, where $m$ is the number of simulated samples.
We propose a novel estimator for the MMD with significantly improved sample complexity.
- Score: 12.157511906467146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Likelihood-free inference methods typically make use of a distance between
simulated and real data. A common example is the maximum mean discrepancy
(MMD), which has previously been used for approximate Bayesian computation,
minimum distance estimation, generalised Bayesian inference, and within the
nonparametric learning framework. The MMD is commonly estimated at a root-$m$
rate, where $m$ is the number of simulated samples. This can lead to
significant computational challenges since a large $m$ is required to obtain an
accurate estimate, which is crucial for parameter estimation. In this paper, we
propose a novel estimator for the MMD with significantly improved sample
complexity. The estimator is particularly well suited for computationally
expensive smooth simulators with low- to mid-dimensional inputs. This claim is
supported through both theoretical results and an extensive simulation study on
benchmark simulators.
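The root-$m$ rate discussed in the abstract refers to the standard unbiased (U-statistic) estimator of the squared MMD. As a point of reference, here is a minimal NumPy sketch of that baseline estimator with a Gaussian kernel; this is the conventional estimator the paper improves upon, not the paper's optimally-weighted estimator, and the function names are illustrative only.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # squared-exponential kernel: k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2_unbiased(x, y, bandwidth=1.0):
    """Standard unbiased U-statistic estimator of MMD^2 between samples x and y.

    This estimator converges at the root-m rate; the paper's contribution is a
    weighted estimator with better sample complexity for smooth simulators.
    """
    m, n = len(x), len(y)
    kxx = gaussian_kernel(x, x, bandwidth)
    kyy = gaussian_kernel(y, y, bandwidth)
    kxy = gaussian_kernel(x, y, bandwidth)
    # drop diagonal terms k(x_i, x_i) and k(y_j, y_j) to remove bias
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * kxy.mean()
```

For identically distributed samples the estimate fluctuates around zero, while well-separated distributions give a clearly positive value.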
Related papers
- Cosmological Analysis with Calibrated Neural Quantile Estimation and Approximate Simulators [0.0]
We introduce a new Simulation-Based Inference (SBI) method that leverages a large number of approximate simulations for training and a small number of high-fidelity simulations for calibration.
As a proof of concept, we demonstrate that cosmological parameters can be inferred at field level from projected 2-dimensional dark matter density maps up to $k_{\rm max} \sim 1.5\,h$/Mpc at $z=0$.
The calibrated posteriors closely match those obtained by directly training on $\sim 10^4$ expensive Particle-Particle (PP) simulations, but at a fraction of the computational cost.
arXiv Detail & Related papers (2024-11-22T05:53:46Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Amortized Bayesian Decision Making for simulation-based models [11.375835331641548]
We address the question of how to perform Bayesian decision making on simulators.
Our method trains a neural network on simulated data and can predict the expected cost.
We then apply the method to infer optimal actions in a real-world simulator in the medical neurosciences.
arXiv Detail & Related papers (2023-12-05T11:29:54Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Robust Bayesian Inference for Simulator-based Models via the MMD Posterior Bootstrap [13.448658162594604]
We propose a novel algorithm based on the posterior bootstrap and maximum mean discrepancy estimators.
This leads to a highly-parallelisable Bayesian inference algorithm with strong properties.
The approach is then assessed on a range of examples including a g-and-k distribution and a toggle-switch model.
arXiv Detail & Related papers (2022-02-09T22:12:19Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for BCE is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- $\gamma$-ABC: Outlier-Robust Approximate Bayesian Computation Based on a Robust Divergence Estimator [95.71091446753414]
We propose to use a nearest-neighbor-based $\gamma$-divergence estimator as a data discrepancy measure.
Our method achieves significantly higher robustness than existing discrepancy measures.
arXiv Detail & Related papers (2020-06-13T06:09:27Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
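The cross-fitting idea behind these estimators is simple to sketch: nuisance models are fit on out-of-fold data only, and the doubly-robust (AIPW) score is averaged across folds. The following is a hedged, minimal NumPy illustration, not the paper's simulation design; it assumes a randomised design with known propensity `e`, uses plain OLS for the outcome regressions, and all function names are hypothetical.

```python
import numpy as np

def fit_ols(X, y):
    # ordinary least squares with an intercept column
    Xb = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return beta

def predict_ols(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

def crossfit_aipw(X, a, y, e=0.5, n_folds=2, seed=0):
    """Cross-fit AIPW (doubly-robust) estimate of the ACE.

    Assumes a randomised design with known, constant propensity e; outcome
    regressions are fit on out-of-fold data and evaluated on the held-out fold.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    psi = np.empty(len(y))
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # nuisance outcome models use only out-of-fold data
        b1 = fit_ols(X[train][a[train] == 1], y[train][a[train] == 1])
        b0 = fit_ols(X[train][a[train] == 0], y[train][a[train] == 0])
        mu1, mu0 = predict_ols(b1, X[test]), predict_ols(b0, X[test])
        # AIPW influence-function contributions on the held-out fold
        psi[test] = (mu1 - mu0
                     + a[test] * (y[test] - mu1) / e
                     - (1 - a[test]) * (y[test] - mu0) / (1 - e))
    return psi.mean()
```

On synthetic data with a known treatment effect, the cross-fit estimate recovers the true ACE up to Monte Carlo error.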
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
- A Deep Learning Algorithm for High-Dimensional Exploratory Item Factor Analysis [0.0]
We investigate a deep learning-based VI algorithm for exploratory item factor analysis (IFA) that is computationally fast even in large data sets with many latent factors.
The proposed approach applies a deep artificial neural network model called an importance-weighted autoencoder (IWAE) for exploratory IFA.
We show that the IWAE yields more accurate estimates as either the sample size or the number of IW samples increases.
arXiv Detail & Related papers (2020-01-22T03:02:34Z)
- Efficient Debiased Evidence Estimation by Multilevel Monte Carlo Sampling [0.0]
We propose a new optimization algorithm for Bayesian inference based on multilevel Monte Carlo (MLMC) methods.
Our numerical results confirm considerable computational savings compared to the conventional estimators.
arXiv Detail & Related papers (2020-01-14T09:14:24Z)
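The computational savings of MLMC referenced in the last entry rest on the telescoping identity $\mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{l=1}^{L} \mathbb{E}[P_l - P_{l-1}]$: coarse, cheap levels absorb most of the sampling effort, while the fine-level corrections need only a few samples. As a hedged toy sketch (not the paper's algorithm), here is an MLMC estimate of $\mathbb{E}[\exp(X)]$ for $X \sim \mathrm{Uniform}(0,1)$, where level $l$ is a truncated Taylor approximation; all names and level choices are illustrative.

```python
import math
import numpy as np

def f_level(x, l):
    # level-l approximation of exp(x): Taylor expansion with 2**l + 1 terms
    return sum(x ** k / math.factorial(k) for k in range(2 ** l + 1))

def mlmc_estimate(max_level=4, n0=100_000, seed=0):
    """Telescoping MLMC estimate of E[exp(X)], X ~ Uniform(0, 1).

    Finer levels are more accurate but get geometrically fewer samples,
    since the corrections E[f_l - f_{l-1}] shrink rapidly with l.
    """
    rng = np.random.default_rng(seed)
    # coarsest level: plain Monte Carlo on f_0
    x = rng.uniform(0, 1, n0)
    estimate = f_level(x, 0).mean()
    for l in range(1, max_level + 1):
        n_l = max(n0 // 4 ** l, 100)  # geometrically decreasing sample counts
        x = rng.uniform(0, 1, n_l)
        estimate += (f_level(x, l) - f_level(x, l - 1)).mean()
    return estimate
```

The result approaches the exact value $e - 1 \approx 1.718$ while spending most samples on the cheapest level.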
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.