Neural Posterior Estimation with Differentiable Simulators
- URL: http://arxiv.org/abs/2207.05636v1
- Date: Tue, 12 Jul 2022 16:08:04 GMT
- Title: Neural Posterior Estimation with Differentiable Simulators
- Authors: Justine Zeghal, François Lanusse, Alexandre Boucaud, Benjamin Remy, Eric Aubourg
- Abstract summary: We present a new method to perform Neural Posterior Estimation (NPE) with a differentiable simulator.
We demonstrate how gradient information helps constrain the shape of the posterior and improves sample-efficiency.
- Score: 58.720142291102135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simulation-Based Inference (SBI) is a promising Bayesian inference framework
that alleviates the need for analytic likelihoods to estimate posterior
distributions. Recent advances using neural density estimators in SBI
algorithms have demonstrated the ability to achieve high-fidelity posteriors,
at the expense of a large number of simulations, which makes their application
potentially very time-consuming when using complex physical simulations. In
this work we focus on boosting the sample-efficiency of posterior density
estimation using the gradients of the simulator. We present a new method to
perform Neural Posterior Estimation (NPE) with a differentiable simulator. We
demonstrate how gradient information helps constrain the shape of the posterior
and improves sample-efficiency.
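As a rough illustration of the idea, the sketch below trains a toy conditional-Gaussian posterior estimator in JAX with the standard NPE objective plus a term that matches its gradient to the joint score obtained by differentiating through a toy simulator. Everything here (the simulator, the Gaussian estimator standing in for the paper's normalizing flows, and the loss weight `lam`) is an illustrative assumption, not the authors' implementation.

```python
import jax
import jax.numpy as jnp

# Toy differentiable simulator: x = theta**2 + Gaussian noise.
# (A stand-in for a complex physical simulator; purely illustrative.)
NOISE_STD = 0.1
PRIOR_STD = 1.0

def simulator(theta, key):
    return theta ** 2 + NOISE_STD * jax.random.normal(key, theta.shape)

def init_params(key, dim=1, hidden=64):
    k1, k2 = jax.random.split(key)
    return {
        "w1": 0.1 * jax.random.normal(k1, (dim, hidden)),
        "b1": jnp.zeros(hidden),
        "w2": 0.1 * jax.random.normal(k2, (hidden, 2 * dim)),
        "b2": jnp.zeros(2 * dim),
    }

def log_q(params, theta, x):
    # Conditional Gaussian q_phi(theta | x) parameterised by a small MLP
    # (the paper uses normalizing flows; a Gaussian keeps the sketch short).
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    mean, log_std = jnp.split(h @ params["w2"] + params["b2"], 2, axis=-1)
    return jnp.sum(-0.5 * ((theta - mean) / jnp.exp(log_std)) ** 2
                   - log_std - 0.5 * jnp.log(2.0 * jnp.pi))

def loss_fn(params, theta, x, score, lam=1.0):
    # Standard NPE term: maximise log q(theta | x) on simulated (theta, x) pairs.
    nll = -log_q(params, theta, x)
    # Gradient term: push d/dtheta log q(theta | x) towards the joint score
    # d/dtheta [log p(x | theta) + log p(theta)], which equals the posterior
    # score. The weight `lam` is an illustrative choice.
    grad_log_q = jax.grad(log_q, argnums=1)(params, theta, x)
    return nll + lam * jnp.sum((grad_log_q - score) ** 2)

@jax.jit
def train_step(params, key, lr=1e-3):
    k_prior, k_sim = jax.random.split(key)
    theta = PRIOR_STD * jax.random.normal(k_prior, (1,))
    x = simulator(theta, k_sim)
    # Score of the toy joint log-density with respect to theta, obtained by
    # automatic differentiation through the simulator's noise model and prior.
    joint_log_prob = lambda t: jnp.sum(
        -0.5 * ((x - t ** 2) / NOISE_STD) ** 2 - 0.5 * (t / PRIOR_STD) ** 2)
    score = jax.grad(joint_log_prob)(theta)
    grads = jax.grad(loss_fn)(params, theta, x, score)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = init_params(key)
for _ in range(2000):
    key, sub = jax.random.split(key)
    params = train_step(params, sub)
```

The score term is what carries the extra information: each simulation also provides local slope information about the posterior, which is why fewer simulations suffice to constrain its shape.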
Related papers
- Compositional simulation-based inference for time series [21.9975782468709]
Simulators frequently emulate real-world dynamics through thousands of single-state transitions over time.
We propose an SBI framework that can exploit such Markovian simulators by locally identifying parameters consistent with individual state transitions.
We then compose these local results to obtain a posterior over parameters that align with the entire time series observation.
arXiv Detail & Related papers (2024-11-05T01:55:07Z)
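Schematically, such a composition rests on the Markov factorisation of the likelihood. One common way to write the resulting posterior, assuming the local posteriors are learned against the prior p(θ) and absorbing the initial-state term, is sketched below; the paper's exact construction may differ.

```latex
p(x_{0:T} \mid \theta) = p(x_0 \mid \theta) \prod_{t=1}^{T} p(x_t \mid x_{t-1}, \theta)
\quad\Longrightarrow\quad
p(\theta \mid x_{0:T}) \;\propto\; p(\theta) \prod_{t=1}^{T} \frac{q(\theta \mid x_{t-1}, x_t)}{p(\theta)},
\qquad q(\theta \mid x_{t-1}, x_t) \propto p(x_t \mid x_{t-1}, \theta)\, p(\theta).
```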
- Preconditioned Neural Posterior Estimation for Likelihood-free Inference
We show that neural posterior estimation (NPE) methods are not guaranteed to be highly accurate, even on low-dimensional problems.
We propose preconditioned NPE and its sequential version (PSNPE), which use a short run of approximate Bayesian computation (ABC) to effectively eliminate regions of parameter space that produce a large discrepancy between simulations and data.
We present comprehensive empirical evidence that this melding of neural and statistical SBI methods improves performance over a range of examples.
arXiv Detail & Related papers (2024-04-21T07:05:38Z)
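For orientation, a generic rejection-ABC preconditioning step of the kind described above might look like the sketch below. The simulator, prior, and threshold are placeholders, and the actual PNPE/PSNPE algorithms couple this stage with (sequential) neural posterior estimation rather than stopping at rejection sampling.

```python
import jax
import jax.numpy as jnp

# Hypothetical toy setup: a 2-parameter simulator, a standard-normal prior,
# and an observed summary x_obs. Threshold `eps` and sample sizes are illustrative.
def simulator(theta, key):
    return theta + 0.1 * jax.random.normal(key, theta.shape)

def abc_precondition(key, x_obs, n_draws=10_000, eps=0.5):
    k_prior, k_sim = jax.random.split(key)
    theta = jax.random.normal(k_prior, (n_draws, 2))    # prior draws
    x = simulator(theta, k_sim)                         # one simulation each
    dist = jnp.linalg.norm(x - x_obs, axis=-1)          # discrepancy to data
    keep = dist < eps                                   # ABC rejection step
    # The accepted (theta, x) pairs concentrate in the region of parameter
    # space consistent with the data; a subsequent NPE stage would be trained
    # on fresh simulations drawn from (a density fit to) the accepted thetas.
    return theta[keep], x[keep]

key = jax.random.PRNGKey(0)
x_obs = jnp.array([0.3, -0.7])
theta_kept, x_kept = abc_precondition(key, x_obs)
```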
- A variational neural Bayes framework for inference on intractable posterior distributions
Posterior distributions of model parameters are efficiently obtained by feeding observed data into a trained neural network.
We show theoretically that our posteriors converge to the true posteriors in Kullback-Leibler divergence.
arXiv Detail & Related papers (2024-04-16T20:40:15Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
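For context, when the n observations are assumed conditionally independent given the parameters, the tall-data posterior and its score decompose as below. This is a generic identity that such methods build on, not a statement of this paper's specific sampler.

```latex
p(\theta \mid x_1, \dots, x_n) \;\propto\; p(\theta) \prod_{i=1}^{n} p(x_i \mid \theta)
\;\propto\; p(\theta)^{\,1-n} \prod_{i=1}^{n} p(\theta \mid x_i),
\qquad
\nabla_\theta \log p(\theta \mid x_{1:n}) = (1-n)\,\nabla_\theta \log p(\theta)
  + \sum_{i=1}^{n} \nabla_\theta \log p(\theta \mid x_i).
```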
- Simulation-Based Inference with Quantile Regression [0.0]
We present Neural Quantile Estimation (NQE), a novel Simulation-Based Inference (SBI) method based on conditional quantile regression.
NQE autoregressively learns individual one-dimensional quantiles for each posterior dimension, conditioned on the data and previous posterior dimensions.
We demonstrate that NQE achieves state-of-the-art performance on a variety of benchmark problems.
arXiv Detail & Related papers (2024-01-04T18:53:50Z)
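Conditional quantile regressors of this kind are typically fit with the pinball (quantile) loss; a generic form is sketched below, where q_tau^phi(x, theta_<d) denotes the predicted tau-quantile of posterior dimension theta_d given the data and the previously modelled dimensions. The notation is illustrative, not taken from the paper.

```latex
\rho_\tau(u) = u\,(\tau - \mathbf{1}\{u < 0\}),
\qquad
\mathcal{L}(\phi) = \mathbb{E}_{(\theta, x)}\Big[\sum_{d}\sum_{\tau}
  \rho_\tau\big(\theta_d - q_{\tau}^{\phi}(x, \theta_{<d})\big)\Big].
```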
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Likelihood-Free Inference in State-Space Models with Unknown Dynamics [71.94716503075645]
We introduce a method for inferring and predicting latent states in state-space models where observations can only be simulated, and transition dynamics are unknown.
We propose a way of doing likelihood-free inference (LFI) of states and state prediction with a limited number of simulations.
arXiv Detail & Related papers (2021-11-02T12:33:42Z)
- Truncated Marginal Neural Ratio Estimation [5.438798591410838]
We present a neural simulator-based inference algorithm which simultaneously offers simulation efficiency and fast empirical posterior testability.
Our approach is simulation-efficient because it simultaneously estimates low-dimensional marginal posteriors instead of the joint posterior.
By estimating a locally amortized posterior, our algorithm enables efficient empirical tests of the robustness of the inference results.
arXiv Detail & Related papers (2021-07-02T18:00:03Z)
- Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep Bayesian active learning framework for stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP-based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves state-of-the-art performance for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
- Prediction of progressive lens performance from neural network simulations [62.997667081978825]
The purpose of this study is to present a framework to predict visual acuity (VA) based on a convolutional neural network (CNN).
The proposed holistic simulation tool was shown to act as an accurate model for subjective visual performance.
arXiv Detail & Related papers (2021-03-19T14:51:02Z)