SBI -- A toolkit for simulation-based inference
- URL: http://arxiv.org/abs/2007.09114v2
- Date: Wed, 22 Jul 2020 15:43:36 GMT
- Title: SBI -- A toolkit for simulation-based inference
- Authors: Alvaro Tejero-Cantero (1), Jan Boelts (1), Michael Deistler (1),
Jan-Matthis Lueckmann (1), Conor Durkan (2), Pedro J. Gonçalves (1, 3),
David S. Greenberg (1, 4) and Jakob H. Macke (1, 5, 6) ((1) Computational
Neuroengineering, Department of Electrical and Computer Engineering,
Technical University of Munich, (2) School of Informatics, University of
Edinburgh, (3) Neural Systems Analysis, Center of Advanced European Studies
and Research (caesar), Bonn, (4) Model-Driven Machine Learning, Centre for
Materials and Coastal Research, Helmholtz-Zentrum Geesthacht, (5) Machine
Learning in Science, University of Tübingen, (6) Empirical Inference, Max
Planck Institute for Intelligent Systems, Tübingen)
- Abstract summary: Simulation-based inference (SBI) seeks to identify parameter sets that a) are compatible with prior knowledge and b) match empirical observations.
We present $\texttt{sbi}$, a PyTorch-based package that implements SBI algorithms based on neural networks.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Scientists and engineers employ stochastic numerical simulators to model
empirically observed phenomena. In contrast to purely statistical models,
simulators express scientific principles that provide powerful inductive
biases, improve generalization to new data or scenarios and allow for fewer,
more interpretable and domain-relevant parameters. Despite these advantages,
tuning a simulator's parameters so that its outputs match data is challenging.
Simulation-based inference (SBI) seeks to identify parameter sets that a) are
compatible with prior knowledge and b) match empirical observations.
Importantly, SBI does not seek to recover a single 'best' data-compatible
parameter set, but rather to identify all high probability regions of parameter
space that explain observed data, and thereby to quantify parameter
uncertainty. In Bayesian terminology, SBI aims to retrieve the posterior
distribution over the parameters of interest. In contrast to conventional
Bayesian inference, SBI is also applicable when one can run model simulations,
but no formula or algorithm exists for evaluating the probability of data given
parameters, i.e. the likelihood. We present $\texttt{sbi}$, a PyTorch-based
package that implements SBI algorithms based on neural networks. $\texttt{sbi}$
facilitates inference on black-box simulators for practising scientists and
engineers by providing a unified interface to state-of-the-art algorithms
together with documentation and tutorials.
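As a concrete illustration of the workflow described above, the following minimal sketch shows how $\texttt{sbi}$ is typically driven: define a prior and a black-box simulator, train a neural posterior estimator on simulated (parameter, data) pairs, then sample the posterior for an observation. Class and method names follow recent releases of the package and may differ from the version current at the time of the paper; the toy simulator is made up for the example.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Prior over 3 parameters: uniform on [0, 1]^3.
prior = BoxUniform(low=torch.zeros(3), high=torch.ones(3))

# A black-box simulator: any function mapping parameters to data.
def simulator(theta: torch.Tensor) -> torch.Tensor:
    return theta + 0.1 * torch.randn_like(theta)

# Simulate a training set of (parameter, data) pairs.
theta = prior.sample((1000,))
x = simulator(theta)

# Train a neural density estimator of p(theta | x) with SNPE,
# then wrap it as a posterior object.
inference = SNPE(prior=prior)
inference.append_simulations(theta, x)
density_estimator = inference.train()
posterior = inference.build_posterior(density_estimator)

# Draw posterior samples for an actual observation x_o.
x_o = torch.full((3,), 0.5)
samples = posterior.sample((1000,), x=x_o)
```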
Related papers
- Embed and Emulate: Contrastive representations for simulation-based inference [11.543221890134399]
This paper introduces Embed and Emulate (E&E), a new simulation-based inference (SBI) method based on contrastive learning.
E&E learns a low-dimensional latent embedding of the data and a corresponding fast emulator in the latent space.
We demonstrate superior performance over existing methods in a realistic, non-identifiable parameter estimation task.
arXiv Detail & Related papers (2024-09-27T02:37:01Z)
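The contrastive idea behind E&E can be illustrated with a generic InfoNCE-style objective that aligns embeddings of matched (data, parameter) pairs; this is a sketch of the general technique, not the authors' code, and all network shapes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Two encoders: one for simulated data x, one for parameters theta.
# Input and embedding dimensions are placeholders for illustration.
data_encoder = nn.Sequential(nn.Linear(100, 128), nn.ReLU(), nn.Linear(128, 16))
param_encoder = nn.Sequential(nn.Linear(5, 128), nn.ReLU(), nn.Linear(128, 16))

def info_nce(z_x: torch.Tensor, z_theta: torch.Tensor, temperature: float = 0.1):
    """InfoNCE loss: matched (x, theta) pairs are positives,
    all other pairings in the batch are negatives."""
    z_x = F.normalize(z_x, dim=-1)
    z_theta = F.normalize(z_theta, dim=-1)
    logits = z_x @ z_theta.T / temperature  # (batch, batch) similarities
    labels = torch.arange(z_x.shape[0])     # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# One training step on a batch of simulated (data, parameter) pairs.
x, theta = torch.randn(64, 100), torch.randn(64, 5)
loss = info_nce(data_encoder(x), param_encoder(theta))
loss.backward()
```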
- Preconditioned Neural Posterior Estimation for Likelihood-free Inference [5.651060979874024]
We show in this paper that neural posterior estimation (NPE) methods are not guaranteed to be highly accurate, even on low-dimensional problems.
We propose preconditioned NPE and its sequential version (PSNPE), which use a short run of approximate Bayesian computation (ABC) to effectively eliminate regions of parameter space that produce large discrepancy between simulations and data.
We present comprehensive empirical evidence that this melding of neural and statistical SBI methods improves performance over a range of examples.
arXiv Detail & Related papers (2024-04-21T07:05:38Z)
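The preconditioning step can be pictured as a short rejection-ABC pass: simulate from the prior, keep only the parameters whose outputs fall within a small quantile of discrepancy to the observation, and train NPE on the survivors. A minimal sketch under those assumptions (the toy simulator and the Euclidean discrepancy are illustrative choices, not the paper's):

```python
import torch

def abc_precondition(simulator, prior, x_o, num_sims=10_000, keep_frac=0.1):
    """Short rejection-ABC run: discard parameter regions whose
    simulations are far from the observation x_o."""
    theta = prior.sample((num_sims,))
    x = simulator(theta)
    dist = torch.linalg.vector_norm(x - x_o, dim=-1)  # discrepancy per simulation
    cutoff = torch.quantile(dist, keep_frac)          # keep the closest fraction
    accepted = theta[dist <= cutoff]
    return accepted  # train NPE only on this surviving region

# Example with a toy simulator.
prior = torch.distributions.Uniform(-2 * torch.ones(2), 2 * torch.ones(2))
simulator = lambda th: th + 0.1 * torch.randn_like(th)
x_o = torch.tensor([0.3, -0.7])
theta_kept = abc_precondition(simulator, prior, x_o)
```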
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- On Least Square Estimation in Softmax Gating Mixture of Experts [78.3687645289918]
We investigate the performance of the least squares estimators (LSE) under a deterministic MoE model.
We establish a condition called strong identifiability to characterize the convergence behavior of various types of expert functions.
Our findings have important practical implications for expert selection.
arXiv Detail & Related papers (2024-02-05T12:31:18Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error, we enable end-to-end backpropagation.
The method is directly applicable to existing computational pipelines, allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Generalized Bayesian Inference for Scientific Simulators via Amortized Cost Estimation [11.375835331641548]
We train a neural network to approximate the cost function, defined as the expected distance between simulations produced by a parameter and the observed data; this approach is the amortized cost estimation (ACE) of the title.
We show that, on several benchmark tasks, ACE accurately predicts cost and provides predictive simulations that are closer to synthetic observations than other SBI methods.
arXiv Detail & Related papers (2023-05-24T14:45:03Z)
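The amortized-cost idea can be sketched as a regression problem: a network maps parameters to a predicted simulation-to-data distance, and MSE regression against single noisy simulations approximates the expected distance. A generic illustration (the network size, distance metric, and toy simulator are placeholders, not the paper's choices):

```python
import torch
import torch.nn as nn

# Regress the cost C(theta) = E[ d(simulate(theta), x_o) ] from parameters.
cost_net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(cost_net.parameters(), lr=1e-3)

def train_step(simulator, prior, x_o, batch_size=256):
    theta = prior.sample((batch_size,))
    x = simulator(theta)
    # Single-sample distance per theta; MSE regression to this noisy
    # target converges to its conditional mean, i.e. the expected cost.
    d = torch.linalg.vector_norm(x - x_o, dim=-1, keepdim=True)
    loss = ((cost_net(theta) - d) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with a toy simulator.
prior = torch.distributions.Uniform(-2 * torch.ones(2), 2 * torch.ones(2))
simulator = lambda th: th + 0.1 * torch.randn_like(th)
x_o = torch.tensor([0.3, -0.7])
loss = train_step(simulator, prior, x_o)
```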
- Validation Diagnostics for SBI algorithms based on Normalizing Flows [55.41644538483948]
This work proposes easy-to-interpret validation diagnostics for multi-dimensional conditional (posterior) density estimators based on normalizing flows (NF).
It also offers theoretical guarantees based on results of local consistency.
This work should help the design of better specified models or drive the development of novel SBI-algorithms.
arXiv Detail & Related papers (2022-11-17T15:48:06Z)
- Efficient identification of informative features in simulation-based inference [5.538076164981993]
We show that one can marginalize the trained surrogate likelihood post-hoc before inferring the posterior to assess the contribution of a feature.
We demonstrate the usefulness of our method by identifying the most important features for inferring parameters of an example Hodgkin-Huxley (HH) neuron model.
arXiv Detail & Related papers (2022-10-21T12:35:46Z)
- Learning Summary Statistics for Bayesian Inference with Autoencoders [58.720142291102135]
We use the bottleneck (inner dimension) of deep neural network-based autoencoders as summary statistics.
To create an incentive for the encoder to encode all the parameter-related information but not the noise, we give the decoder access to explicit or implicit information that was used to generate the training data.
arXiv Detail & Related papers (2022-01-28T12:00:31Z)
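Generically, the construction looks as follows: an encoder compresses raw simulated data through a low-dimensional bottleneck, a decoder reconstructs the data, and after training the bottleneck activations serve as summary statistics for SBI. The sketch below omits the paper's extra step of feeding generation-side information to the decoder; all dimensions are made up for the example.

```python
import torch
import torch.nn as nn

class SummaryAutoencoder(nn.Module):
    """Autoencoder whose bottleneck serves as a learned summary statistic."""
    def __init__(self, data_dim=100, summary_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, summary_dim))
        self.decoder = nn.Sequential(
            nn.Linear(summary_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))

    def forward(self, x):
        s = self.encoder(x)             # low-dimensional summary statistic
        return self.decoder(s), s

model = SummaryAutoencoder()
x = torch.randn(32, 100)                # a batch of simulated data
x_hat, summary = model(x)
loss = ((x_hat - x) ** 2).mean()        # reconstruction loss
loss.backward()
# After training, model.encoder compresses raw data into summary
# statistics that downstream SBI algorithms condition on.
```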
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in diffusion MRI (dMRI).
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.