Simulation-based inference with the Python Package sbijax
- URL: http://arxiv.org/abs/2409.19435v1
- Date: Sat, 28 Sep 2024 18:47:13 GMT
- Title: Simulation-based inference with the Python Package sbijax
- Authors: Simon Dirmeier, Simone Ulzega, Antonietta Mira, Carlo Albert
- Abstract summary: sbijax is a Python package that implements a wide variety of state-of-the-art methods in neural simulation-based inference.
The package also provides functionality for approximate Bayesian computation, for computing model diagnostics, and for automatically estimating summary statistics.
- Score: 0.7499722271664147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural simulation-based inference (SBI) describes an emerging family of methods for Bayesian inference with intractable likelihood functions that use neural networks as surrogate models. Here we introduce sbijax, a Python package that implements a wide variety of state-of-the-art methods in neural simulation-based inference using a user-friendly programming interface. sbijax offers high-level functionality to quickly construct SBI estimators and to compute and visualize posterior distributions with only a few lines of code. In addition, the package provides functionality for conventional approximate Bayesian computation, for computing model diagnostics, and for automatically estimating summary statistics. By virtue of being entirely written in JAX, sbijax is extremely computationally efficient, allowing rapid training of neural networks and automatic parallel execution of code on both CPU and GPU.
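As a concrete illustration of the advertised "few lines of code" workflow, below is a minimal sketch of neural posterior estimation (NPE) on a toy Gaussian model. The class and helper names (NPE, make_maf, simulate_data, fit, sample_posterior) follow the usage pattern in the package's documentation and may differ across versions; the toy model itself is purely illustrative.

```python
from jax import numpy as jnp, random as jr
from tensorflow_probability.substrates.jax import distributions as tfd

from sbijax import NPE
from sbijax.nn import make_maf

# Toy model: theta ~ N(0, I), y ~ N(theta, 0.1^2 I); purely illustrative.
def prior_fn():
    return tfd.JointDistributionNamed(
        dict(theta=tfd.Normal(jnp.zeros(2), jnp.ones(2))), batch_ndims=0
    )

def simulator_fn(seed, theta):
    return theta["theta"] + tfd.Normal(
        jnp.zeros_like(theta["theta"]), 0.1
    ).sample(seed=seed)

# Construct an NPE estimator with a masked autoregressive flow as the
# conditional density estimator of the two-dimensional parameter.
model = NPE((prior_fn, simulator_fn), make_maf(2))

# Simulate training data, fit the flow, and draw posterior samples
# conditional on an observation.
y_observed = jnp.array([-1.0, 1.0])
data, _ = model.simulate_data(jr.PRNGKey(1))
params, _ = model.fit(jr.PRNGKey(2), data=data)
posterior, _ = model.sample_posterior(jr.PRNGKey(3), params, y_observed)
```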
Related papers
- A variational neural Bayes framework for inference on intractable posterior distributions [1.0801976288811024]
Posterior distributions of model parameters are efficiently obtained by feeding observed data into a trained neural network.
We show theoretically that our posteriors converge to the true posteriors in Kullback-Leibler divergence.
arXiv Detail & Related papers (2024-04-16T20:40:15Z)
- Fast, accurate and lightweight sequential simulation-based inference using Gaussian locally linear mappings [0.820217860574125]
We propose an alternative to neural-network-based "simulation-based inference" (SBI) methods that provides approximations to both the likelihood and the posterior distribution.
Our approach produces accurate posterior inference when compared to state-of-the-art NN-based SBI methods, even for multimodal posteriors.
We illustrate our results on several benchmark models from the SBI literature and on a biological model of the translation kinetics after mRNA transfection.
arXiv Detail & Related papers (2024-03-12T09:48:17Z)
- Pseudo-Likelihood Inference [16.934708242852558]
Pseudo-Likelihood Inference (PLI) is a new method that brings neural approximation into ABC, making it competitive on challenging Bayesian system identification tasks.
PLI allows for optimizing neural posteriors via gradient descent, does not rely on summary statistics, and enables multiple observations as input.
The effectiveness of PLI is evaluated on four classical SBI benchmark tasks and on a highly dynamic physical system.
arXiv Detail & Related papers (2023-11-28T10:17:52Z)
- BayesFlow: Amortized Bayesian Workflows With Neural Networks [0.0]
This manuscript introduces the Python library BayesFlow for simulation-based training of established neural network architectures for amortized data compression and inference.
Amortized Bayesian inference, as implemented in BayesFlow, enables users to train custom neural networks on model simulations and re-use these networks for any subsequent application of the models.
arXiv Detail & Related papers (2023-06-28T08:41:49Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Only minimal prior assumptions on the parameters are needed, through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise Markov random field (MRF).
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- Predictive Coding Approximates Backprop along Arbitrary Computation Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
arXiv Detail & Related papers (2020-06-07T15:35:47Z)
- Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits [99.59941892183454]
We propose Einsum Networks (EiNets), a novel implementation design for probabilistic circuits (PCs).
At their core, EiNets combine a large number of arithmetic operations in a single monolithic einsum operation (see the sketch after this entry).
We show that the implementation of Expectation-Maximization (EM) can be simplified for PCs, by leveraging automatic differentiation.
arXiv Detail & Related papers (2020-04-13T23:09:15Z)
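As referenced in the entry above, here is a minimal JAX sketch of the einsum-fusion idea: a Python loop of per-node matrix products is replaced by one monolithic einsum call. Shapes and names are hypothetical and only illustrate the principle, not the actual EiNets implementation.

```python
import jax.numpy as jnp
import jax.random as jr

# Hypothetical setup: K mixture nodes of a probabilistic circuit, each
# mixing C_in child densities into C_out outputs, over a batch of N samples.
K, C_in, C_out, N = 32, 8, 4, 128
key_c, key_w = jr.split(jr.PRNGKey(0))
child_densities = jr.uniform(key_c, (K, C_in, N))  # child values per node
weights = jr.uniform(key_w, (K, C_out, C_in))      # mixture weights per node

# Naive version: one matrix product per node in a Python loop.
naive = jnp.stack([weights[k] @ child_densities[k] for k in range(K)])

# EiNet-style version: all K products fused into one monolithic einsum,
# executed as a single vectorized operation on CPU or GPU.
fused = jnp.einsum("koi,kin->kon", weights, child_densities)

assert jnp.allclose(naive, fused)
```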
- BayesFlow: Learning complex stochastic models with invertible neural networks [3.1498833540989413]
We propose a novel method for globally amortized Bayesian inference based on invertible neural networks.
BayesFlow incorporates a summary network trained to embed the observed data into maximally informative summary statistics (see the sketch after this entry).
We demonstrate the utility of BayesFlow on challenging intractable models from population dynamics, epidemiology, cognitive science and ecology.
arXiv Detail & Related papers (2020-03-13T13:39:31Z)
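The summary-network idea above can be made concrete with a few lines of JAX: embed each observation with a small MLP and mean-pool the embeddings, so that an i.i.d. data set of any size maps to a fixed-length, permutation-invariant summary vector. This is a minimal, hypothetical illustration of the concept, not BayesFlow's actual architecture.

```python
import jax
import jax.numpy as jnp

def init_params(key, data_dim=2, hidden=64, summary_dim=8):
    # Small two-layer MLP; all sizes here are arbitrary illustration choices.
    k1, k2 = jax.random.split(key)
    return {
        "W1": jax.random.normal(k1, (data_dim, hidden)) / jnp.sqrt(data_dim),
        "b1": jnp.zeros(hidden),
        "W2": jax.random.normal(k2, (hidden, summary_dim)) / jnp.sqrt(hidden),
        "b2": jnp.zeros(summary_dim),
    }

def summary_network(params, x):
    # x: (n_observations, data_dim) -> fixed-length summary of shape (summary_dim,)
    h = jnp.tanh(x @ params["W1"] + params["b1"])  # embed each observation
    h = h @ params["W2"] + params["b2"]
    return h.mean(axis=0)  # mean pooling makes the summary permutation-invariant

params = init_params(jax.random.PRNGKey(0))
x = jax.random.normal(jax.random.PRNGKey(1), (100, 2))  # a simulated data set
print(summary_network(params, x).shape)  # (8,)
```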
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.