Truncated Marginal Neural Ratio Estimation
- URL: http://arxiv.org/abs/2107.01214v1
- Date: Fri, 2 Jul 2021 18:00:03 GMT
- Title: Truncated Marginal Neural Ratio Estimation
- Authors: Benjamin Kurt Miller, Alex Cole, Patrick Forré, Gilles Louppe, Christoph Weniger
- Abstract summary: We present a neural simulator-based inference algorithm which simultaneously offers simulation efficiency and fast empirical posterior testability.
Our approach is simulation efficient by simultaneously estimating low-dimensional marginal posteriors instead of the joint posterior.
By estimating a locally amortized posterior our algorithm enables efficient empirical tests of the robustness of the inference results.
- Score: 5.438798591410838
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Parametric stochastic simulators are ubiquitous in science, often featuring
high-dimensional input parameters and/or an intractable likelihood. Performing
Bayesian parameter inference in this context can be challenging. We present a
neural simulator-based inference algorithm which simultaneously offers
simulation efficiency and fast empirical posterior testability, which is unique
among modern algorithms. Our approach is simulation efficient by simultaneously
estimating low-dimensional marginal posteriors instead of the joint posterior
and by proposing simulations targeted to an observation of interest via a prior
suitably truncated by an indicator function. Furthermore, by estimating a
locally amortized posterior our algorithm enables efficient empirical tests of
the robustness of the inference results. Such tests are important for
sanity-checking inference in real-world applications, which do not feature a
known ground truth. We perform experiments on a marginalized version of the
simulation-based inference benchmark and two complex and narrow posteriors,
highlighting the simulator efficiency of our algorithm as well as the quality
of the estimated marginal posteriors. An implementation is available on GitHub.
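The truncation scheme described in the abstract can be illustrated with a small sketch. This is not the authors' implementation (they provide their own package); it is a minimal NumPy example with a hypothetical uniform prior and an arbitrarily chosen truncation box, showing how an indicator function restricts the prior to a region of interest and how samples are drawn from the truncated prior by rejection.

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_prior_sample(n, low, high, box_low, box_high):
    """Draw n samples from a uniform prior truncated by an indicator function.

    The indicator keeps only parameters inside [box_low, box_high], standing in
    for a region previously identified as containing most posterior mass.
    Sampling uses simple rejection against the original prior.
    """
    samples = []
    while len(samples) < n:
        # Propose from the untruncated uniform prior on [low, high].
        theta = rng.uniform(low, high, size=(n, len(low)))
        # Indicator function: keep only proposals inside the truncation box.
        inside = np.all((theta >= box_low) & (theta <= box_high), axis=1)
        samples.extend(theta[inside])
    return np.array(samples[:n])

# Illustrative two-dimensional prior and truncation region (not from the paper).
low, high = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
box_low, box_high = np.array([0.0, -0.5]), np.array([0.5, 0.5])
theta = truncated_prior_sample(1000, low, high, box_low, box_high)
```

Simulations run at these truncated parameter values concentrate effort near the observation of interest, which is the source of the simulation efficiency claimed above.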
Related papers
- Parallel simulation for sampling under isoperimetry and score-based diffusion models [56.39904484784127]
As data size grows, reducing the iteration cost becomes an important goal.
Inspired by the success of the parallel simulation of the initial value problem in scientific computation, we propose parallel Picard methods for sampling tasks.
Our work highlights the potential advantages of simulation methods in scientific computation for dynamics-based sampling and diffusion models.
arXiv Detail & Related papers (2024-12-10T11:50:46Z)
- Active Sequential Posterior Estimation for Sample-Efficient Simulation-Based Inference [12.019504660711231]
We introduce active sequential neural posterior estimation (ASNPE).
ASNPE brings an active learning scheme into the inference loop to estimate the utility of simulation parameter candidates to the underlying probabilistic model.
Our method outperforms well-tuned benchmarks and state-of-the-art posterior estimation methods on a large-scale real-world traffic network.
arXiv Detail & Related papers (2024-12-07T08:57:26Z)
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
- Eliminating Ratio Bias for Gradient-based Simulated Parameter Estimation [0.7673339435080445]
This article addresses the challenge of parameter calibration in models where the likelihood function is not analytically available.
We propose a gradient-based simulated parameter estimation framework, leveraging a multi-time-scale approach that tackles the issue of ratio bias in both maximum likelihood estimation and posterior density estimation problems.
arXiv Detail & Related papers (2024-11-20T02:46:15Z)
- A variational neural Bayes framework for inference on intractable posterior distributions [1.0801976288811024]
Posterior distributions of model parameters are efficiently obtained by feeding observed data into a trained neural network.
We show theoretically that our posteriors converge to the true posteriors in Kullback-Leibler divergence.
arXiv Detail & Related papers (2024-04-16T20:40:15Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) can approximate the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) formulates its objective as the logistic loss of distinguishing real data from artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
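The logistic-loss formulation this entry refers to can be sketched concretely. The example below is a hypothetical illustration, not code from the paper: it uses NCE to recover the log normalizing constant of an unnormalized Gaussian, with the noise distribution, sample sizes, and learning rate all chosen arbitrarily for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Unnormalized model: p~(x) = exp(-x^2 / 2); the true log Z is 0.5 * log(2*pi).
log_p_tilde = lambda x: -0.5 * x**2

# Noise distribution with a tractable log-density: N(0, sigma_n^2).
sigma_n = 2.0
log_p_noise = lambda x: -0.5 * (x / sigma_n) ** 2 - np.log(sigma_n * np.sqrt(2 * np.pi))

n = 20000
x_data = rng.normal(0.0, 1.0, n)       # samples from the (normalized) model
x_noise = rng.normal(0.0, sigma_n, n)  # artificial noise samples

# NCE treats log Z as a free parameter c, fit by the logistic loss that
# classifies data (label 1) against noise (label 0) with score
# G(x) = log p~(x) - c - log p_noise(x). Gradient descent on c:
c = 0.0
lr = 0.5
for _ in range(300):
    g_data = log_p_tilde(x_data) - c - log_p_noise(x_data)
    g_noise = log_p_tilde(x_noise) - c - log_p_noise(x_noise)
    grad = np.mean(1.0 - sigmoid(g_data)) - np.mean(sigmoid(g_noise))
    c -= lr * grad

print(c)  # approximately 0.5 * log(2 * pi) ≈ 0.92
```

Because the logistic loss is convex in c here, plain gradient descent suffices; the same loss drives the classifier-based ratio estimators used throughout the papers on this page.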
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Neural Posterior Estimation with Differentiable Simulators [58.720142291102135]
We present a new method to perform Neural Posterior Estimation (NPE) with a differentiable simulator.
We demonstrate how gradient information helps constrain the shape of the posterior and improves sample-efficiency.
arXiv Detail & Related papers (2022-07-12T16:08:04Z)
- Simulation-efficient marginal posterior estimation with swyft: stop wasting your precious time [5.533353383316288]
We present algorithms for nested neural likelihood-to-evidence ratio estimation and simulation reuse.
Together, these algorithms enable automatic and extremely simulator efficient estimation of marginal and joint posteriors.
arXiv Detail & Related papers (2020-11-27T19:00:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.