Misspecification-robust Sequential Neural Likelihood for
Simulation-based Inference
- URL: http://arxiv.org/abs/2301.13368v2
- Date: Thu, 7 Mar 2024 11:31:17 GMT
- Title: Misspecification-robust Sequential Neural Likelihood for
Simulation-based Inference
- Authors: Ryan P. Kelly and David J. Nott and David T. Frazier and David J.
Warne and Chris Drovandi
- Abstract summary: We propose a novel SNL method, which through the incorporation of additional adjustment parameters, is robust to model misspecification.
We demonstrate the efficacy of our approach through several illustrative examples.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simulation-based inference techniques are indispensable for parameter
estimation of mechanistic and simulable models with intractable likelihoods.
While traditional statistical approaches like approximate Bayesian computation
and Bayesian synthetic likelihood have been studied under well-specified and
misspecified settings, they often suffer from inefficiencies due to wasted
model simulations. Neural approaches, such as sequential neural likelihood
(SNL), avoid this wastage by utilising all model simulations to train a neural
surrogate for the likelihood function. However, the performance of SNL under
model misspecification is unreliable and can result in overconfident posteriors
centred around an inaccurate parameter estimate. In this paper, we propose a
novel SNL method, which through the incorporation of additional adjustment
parameters, is robust to model misspecification and capable of identifying
features of the data that the model is not able to recover. We demonstrate the
efficacy of our approach through several illustrative examples, where our
method gives more accurate point estimates and uncertainty quantification than
SNL.
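The abstract describes the SNL loop that the proposed method builds on: all simulations from every round are pooled to train a surrogate likelihood, which is combined with the prior to target the posterior. The sketch below illustrates that loop on a toy Gaussian model. The linear least-squares surrogate and the conjugate-Gaussian posterior update are illustrative assumptions standing in for the paper's neural density estimator and MCMC step, and the paper's robustifying adjustment parameters are not shown.

```python
# Minimal SNL-style loop on a toy model x ~ N(theta, 1).
# All names and the simple surrogate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=1):
    # Stand-in for a mechanistic model with intractable likelihood.
    return rng.normal(theta, 1.0, size=n)

theta_true = 2.0
x_obs = simulator(theta_true)[0]

thetas, xs = [], []
proposal_mean, proposal_std = 0.0, 3.0  # start from a broad N(0, 3^2) prior
for round_ in range(4):
    # Propose parameters from the current posterior approximation, simulate.
    new_thetas = rng.normal(proposal_mean, proposal_std, size=200)
    new_xs = np.array([simulator(t)[0] for t in new_thetas])
    thetas.extend(new_thetas)
    xs.extend(new_xs)

    # Refit the surrogate likelihood x | theta ~ N(a*theta + b, s^2) on ALL
    # simulations accumulated so far -- SNL's key efficiency over ABC.
    A = np.vstack([thetas, np.ones(len(thetas))]).T
    (a, b), *_ = np.linalg.lstsq(A, np.array(xs), rcond=None)
    resid = np.array(xs) - (a * np.array(thetas) + b)
    s = resid.std() + 1e-6

    # Surrogate posterior for theta given x_obs under the fixed N(0, 3^2)
    # prior (conjugate update); this becomes the next round's proposal.
    prior_var = 3.0 ** 2
    like_var = (s / a) ** 2
    post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
    proposal_mean = post_var * ((x_obs - b) / a) / like_var
    proposal_std = np.sqrt(post_var)

# proposal_mean should end up near theta_true = 2.0
```

Each round narrows the proposal toward the observed data while the surrogate is trained on the full simulation archive; a real SNL implementation replaces the linear fit with a conditional normalizing flow and the conjugate update with MCMC over the surrogate posterior.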
Related papers
- Calibrating Neural Simulation-Based Inference with Differentiable
Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Simulation-based inference using surjective sequential neural likelihood estimation [50.24983453990065]
Surjective Sequential Neural Likelihood estimation is a novel method for simulation-based inference.
By embedding the data in a low-dimensional space, SSNL solves several issues previous likelihood-based methods had when applied to high-dimensional data sets.
arXiv Detail & Related papers (2023-08-02T10:02:38Z)
- Learning Robust Statistics for Simulation-based Inference under Model Misspecification [23.331522354991527]
We propose the first general approach to handle model misspecification that works across different classes of simulation-based inference methods.
We show that our method yields robust inference in misspecified scenarios, whilst still being accurate when the model is well-specified.
arXiv Detail & Related papers (2023-05-25T09:06:26Z)
- Robust Neural Posterior Estimation and Statistical Model Criticism [1.5749416770494706]
We argue that modellers must treat simulators as idealistic representations of the true data generating process.
In this work we revisit neural posterior estimation (NPE), a class of algorithms that enable black-box parameter inference in simulation models.
We find that the presence of misspecification leads to unreliable inference when NPE is used naively.
arXiv Detail & Related papers (2022-10-12T20:06:55Z)
- Investigating the Impact of Model Misspecification in Neural Simulation-based Inference [1.933681537640272]
We study the behaviour of neural SBI algorithms in the presence of various forms of model misspecification.
We find that misspecification can have a profoundly deleterious effect on performance.
We conclude that new approaches are required to address model misspecification if neural SBI algorithms are to be relied upon to derive accurate conclusions.
arXiv Detail & Related papers (2022-09-05T09:08:16Z)
- A Statistical Decision-Theoretical Perspective on the Two-Stage Approach to Parameter Estimation [7.599399338954307]
The Two-Stage (TS) approach can be applied to obtain reliable parametric estimates.
We show how to apply the TS approach on models for independent and identically distributed samples.
arXiv Detail & Related papers (2022-03-31T18:19:47Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Likelihood-Free Inference in State-Space Models with Unknown Dynamics [71.94716503075645]
We introduce a method for inferring and predicting latent states in state-space models where observations can only be simulated, and transition dynamics are unknown.
We propose a way of doing likelihood-free inference (LFI) of states and state prediction with a limited number of simulations.
arXiv Detail & Related papers (2021-11-02T12:33:42Z)
- Neural Networks for Parameter Estimation in Intractable Models [0.0]
We show how to estimate parameters from max-stable processes, where inference is exceptionally challenging.
We use data from model simulations as input and train deep neural networks to learn statistical parameters.
arXiv Detail & Related papers (2021-07-29T21:59:48Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
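As a hedged illustration of the likelihood-to-evidence ratio idea this summary refers to (not the paper's mutual-information objective itself): a classifier trained to separate jointly simulated pairs (theta, x) from shuffled marginal pairs has a logit that approximates the log likelihood-to-evidence ratio. The toy simulator, the hand-rolled logistic regression, and all names below are assumptions.

```python
# Ratio estimation sketch: classify joint vs. marginal (theta, x) pairs;
# the classifier logit approximates log p(x | theta) / p(x).
import numpy as np

rng = np.random.default_rng(1)
n = 4000
theta = rng.normal(0.0, 1.0, n)        # prior draws
x = rng.normal(theta, 0.5)             # toy simulator: x ~ N(theta, 0.5^2)
theta_marg = rng.permutation(theta)    # shuffle to break pairing -> marginal class

def feats(t, xv):
    # Quadratic features; a stand-in for a neural network. This family can
    # represent the true log-ratio exactly for the Gaussian toy model.
    t = np.atleast_1d(np.asarray(t, dtype=float))
    xv = np.atleast_1d(np.asarray(xv, dtype=float))
    return np.stack([t, xv, t * xv, t ** 2, xv ** 2, np.ones_like(t)], axis=1)

X = np.vstack([feats(theta, x), feats(theta_marg, x)])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = joint, 0 = marginal

# Plain gradient-descent logistic regression (no external dependencies).
w = np.zeros(X.shape[1])
for _ in range(2000):
    z = np.clip(X @ w, -30.0, 30.0)    # clip to avoid overflow in exp
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.1 * X.T @ (p - y) / len(y)

def log_ratio(t, x_obs):
    # Classifier logit = estimated log likelihood-to-evidence ratio.
    return float(feats(t, x_obs) @ w)

# For this model the ratio should peak near theta = x_obs.
```

Amortization here means `log_ratio` can be evaluated for any (theta, x_obs) pair without further simulation, which is what makes posterior inference with such estimators cheap after training.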
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Sinkhorn Natural Gradient for Generative Models [125.89871274202439]
We propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence.
We show that the Sinkhorn information matrix (SIM), a key component of SiNG, has an explicit expression and can be evaluated accurately in complexity that scales logarithmically.
In our experiments, we quantitatively compare SiNG with state-of-the-art SGD-type solvers on generative tasks to demonstrate the efficiency and efficacy of our method.
arXiv Detail & Related papers (2020-11-09T02:51:17Z)
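Hedging on notation not given in the summary (the objective F and step size eta are assumptions), the natural-gradient-style update that SiNG performs can be sketched in LaTeX as

```latex
\theta_{k+1} = \theta_k - \eta \,\mathrm{SIM}(\theta_k)^{-1} \nabla_{\theta} F(\theta_k)
```

where SIM(theta_k) is the Sinkhorn information matrix, playing the role that the Fisher information matrix plays in classical natural gradient descent.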
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.