Learning Summary Statistics for Bayesian Inference with Autoencoders
- URL: http://arxiv.org/abs/2201.12059v1
- Date: Fri, 28 Jan 2022 12:00:31 GMT
- Title: Learning Summary Statistics for Bayesian Inference with Autoencoders
- Authors: Carlo Albert, Simone Ulzega, Firat Ozdemir, Fernando Perez-Cruz,
Antonietta Mira
- Abstract summary: We use the inner dimension of deep neural network based Autoencoders as summary statistics.
To create an incentive for the encoder to encode all the parameter-related information but not the noise, we give the decoder access to explicit or implicit information on the noise that has been used to generate the training data.
- Score: 58.720142291102135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For stochastic models with intractable likelihood functions, approximate
Bayesian computation offers a way of approximating the true posterior through
repeated comparisons of observations with simulated model outputs in terms of a
small set of summary statistics. These statistics need to retain the
information that is relevant for constraining the parameters but cancel out the
noise. They can thus be seen as thermodynamic state variables for general
stochastic models. For many scientific applications, we need strictly more
summary statistics than model parameters to reach a satisfactory approximation
of the posterior. Therefore, we propose to use the inner dimension of deep
neural network based Autoencoders as summary statistics. To create an incentive
for the encoder to encode all the parameter-related information but not the
noise, we give the decoder access to explicit or implicit information on the
noise that has been used to generate the training data. We validate the
approach empirically on two types of stochastic models.
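The following is a minimal, hypothetical sketch of this idea in PyTorch: an autoencoder whose bottleneck activations serve as the summary statistics, with the decoder conditioned on the noise realization used to simulate each training batch. The toy simulator, layer sizes, and training loop are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def simulate(batch, data_dim=100):
    # Toy stochastic model (illustrative only): two parameters plus additive
    # noise; the simulator returns both the output and the noise realization
    # so the decoder can be conditioned on it, as the abstract describes.
    t = torch.linspace(0.0, 6.28, data_dim)
    theta = torch.rand(batch, 2) * 2.0 - 1.0
    noise = torch.randn(batch, data_dim)
    x = theta[:, :1] * torch.sin(t) + theta[:, 1:] + 0.3 * noise
    return x, noise

class SummaryAutoencoder(nn.Module):
    def __init__(self, data_dim=100, noise_dim=100, n_summaries=4):
        super().__init__()
        # Encoder: simulated output -> low-dimensional summary statistics.
        # n_summaries (4) exceeds the number of parameters (2), echoing the
        # abstract's point that strictly more statistics are often needed.
        self.encoder = nn.Sequential(
            nn.Linear(data_dim, 128), nn.ReLU(),
            nn.Linear(128, n_summaries),
        )
        # Decoder sees the summaries *and* the noise, so reconstruction does
        # not pressure the encoder to spend bottleneck capacity on noise.
        self.decoder = nn.Sequential(
            nn.Linear(n_summaries + noise_dim, 128), nn.ReLU(),
            nn.Linear(128, data_dim),
        )

    def forward(self, x, noise):
        s = self.encoder(x)
        x_hat = self.decoder(torch.cat([s, noise], dim=-1))
        return x_hat, s

model = SummaryAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    x, noise = simulate(128)
    x_hat, _ = model(x, noise)
    loss = nn.functional.mse_loss(x_hat, x)
    opt.zero_grad(); loss.backward(); opt.step()
```

Once trained, the encoder output s(x) can serve as the distance basis in standard rejection ABC: accept a simulated θ whenever ||s(x_sim) - s(x_obs)|| falls below a tolerance.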
Related papers
- Fusion of Gaussian Processes Predictions with Monte Carlo Sampling [61.31380086717422]
In science and engineering, we often work with models designed for accurate prediction of variables of interest.
Recognizing that these models are approximations of reality, it becomes desirable to apply multiple models to the same data and integrate their outcomes.
arXiv Detail & Related papers (2024-03-03T04:21:21Z)
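One plausible reading of such a fusion, sketched here under strong simplifying assumptions (Gaussian predictive densities with hand-picked moments and weights; not the paper's algorithm): draw Monte Carlo samples from the weighted mixture of the individual GP predictives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Predictive means/stddevs of three pre-fitted models at one test point,
# plus mixture weights (e.g. from validation likelihoods); all placeholders.
means = np.array([1.0, 1.2, 0.8])
stds = np.array([0.2, 0.4, 0.3])
weights = np.array([0.5, 0.3, 0.2])

# Fused samples: pick a model per draw, then sample its Gaussian predictive.
idx = rng.choice(len(means), size=10_000, p=weights)
samples = rng.normal(means[idx], stds[idx])

print(samples.mean(), samples.std())  # moments of the fused predictive
```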
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Neural Spline Search for Quantile Probabilistic Modeling [35.914279831992964]
We propose a non-parametric and data-driven approach, Neural Spline Search (NSS), to represent the observed data distribution without parametric assumptions.
We demonstrate that NSS outperforms previous methods on synthetic, real-world regression and time-series forecasting tasks.
arXiv Detail & Related papers (2023-01-12T07:45:28Z)
- Learning from aggregated data with a maximum entropy model [73.63512438583375]
We show how a new model, similar to a logistic regression, may be learned from aggregated data only by approximating the unobserved feature distribution with a maximum entropy hypothesis.
We present empirical evidence on several public datasets that the model learned this way can achieve performances comparable to those of a logistic model trained with the full unaggregated data.
arXiv Detail & Related papers (2022-10-05T09:17:27Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in diffusion MRI (dMRI).
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
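A hedged sketch of the amortized likelihood-to-evidence ratio idea (often called neural ratio estimation; the toy simulator and tiny network are illustrative, not the MINIMALIST architecture): a classifier learns to separate dependent pairs (θ, x) from shuffled pairs, and its logit then approximates log p(x|θ)/p(x).

```python
import torch
import torch.nn as nn

# Classifier over concatenated (theta, x) pairs; logit ~ log-ratio.
net = nn.Sequential(nn.Linear(1 + 1, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def simulate(n):
    theta = torch.rand(n, 1) * 2 - 1         # prior: Uniform(-1, 1)
    x = theta + 0.1 * torch.randn(n, 1)      # toy simulator
    return theta, x

for _ in range(2000):
    theta, x = simulate(256)
    joint = torch.cat([theta, x], dim=1)                     # label 1
    marg = torch.cat([theta[torch.randperm(256)], x], dim=1) # label 0
    logits = net(torch.cat([joint, marg])).squeeze(1)
    labels = torch.cat([torch.ones(256), torch.zeros(256)])
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# net's logit at (theta, x_obs) now scores the log-ratio up to classifier
# error; combined with the prior it yields an amortized unnormalized posterior.
```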
- Neural Approximate Sufficient Statistics for Implicit Models [34.44047460667847]
We frame the task of constructing sufficient statistics as learning mutual information maximizing representations of the data with the help of deep neural networks.
We apply our approach to both traditional approximate Bayesian computation and recent neural likelihood methods, boosting their performance on a range of tasks.
arXiv Detail & Related papers (2020-10-20T07:11:40Z)
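A hedged sketch of one common instantiation of this idea: maximize an InfoNCE lower bound on I(θ; s(x)) so that the learned statistic s(x) retains parameter information. The simulator, critic, and sizes are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn as nn

# Statistic network s(x) and a parameter embedding used as an InfoNCE critic.
stat_net = nn.Sequential(nn.Linear(50, 64), nn.ReLU(), nn.Linear(64, 2))
theta_net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(
    list(stat_net.parameters()) + list(theta_net.parameters()), lr=1e-3)

def simulate(n):
    # Toy model: each parameter repeated 25 times, plus unit noise.
    theta = torch.randn(n, 2)
    x = theta.repeat_interleave(25, dim=1) + torch.randn(n, 50)
    return theta, x

for _ in range(1000):
    theta, x = simulate(128)
    s, z = stat_net(x), theta_net(theta)
    scores = s @ z.T                    # pairwise critic scores
    labels = torch.arange(128)          # matched pairs sit on the diagonal
    loss = nn.functional.cross_entropy(scores, labels)  # InfoNCE bound
    opt.zero_grad(); loss.backward(); opt.step()

# s(x) can then replace hand-crafted summary statistics in ABC or
# neural likelihood methods.
```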
- Generalized Multi-Output Gaussian Process Censored Regression [7.111443975103331]
We introduce a heteroscedastic multi-output Gaussian process model which combines the non-parametric flexibility of GPs with the ability to leverage information from correlated outputs under input-dependent noise conditions.
Results show how the added flexibility allows our model to better estimate the underlying non-censored (i.e. true) process under potentially complex censoring dynamics.
arXiv Detail & Related papers (2020-09-10T12:46:29Z)
- BayesFlow: Learning complex stochastic models with invertible neural networks [3.1498833540989413]
We propose a novel method for globally amortized Bayesian inference based on invertible neural networks.
BayesFlow incorporates a summary network trained to embed the observed data into maximally informative summary statistics.
We demonstrate the utility of BayesFlow on challenging intractable models from population dynamics, epidemiology, cognitive science and ecology.
arXiv Detail & Related papers (2020-03-13T13:39:31Z)
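A hedged, heavily reduced sketch of the BayesFlow idea (a single affine coupling block instead of a full invertible network; all sizes and the simulator-free training signature are illustrative): a summary network embeds the data, and a conditional flow over parameters is trained by maximum likelihood given those summaries.

```python
import torch
import torch.nn as nn

class CouplingBlock(nn.Module):
    # One affine coupling layer conditioned on summaries (assumes even
    # theta_dim); real invertible networks stack many such blocks.
    def __init__(self, theta_dim=2, summary_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(theta_dim // 2 + summary_dim, 64), nn.ReLU(),
            nn.Linear(64, theta_dim),   # predicts log-scale and shift
        )

    def forward(self, theta, s):
        t1, t2 = theta.chunk(2, dim=1)
        log_scale, shift = self.net(torch.cat([t1, s], dim=1)).chunk(2, dim=1)
        z2 = t2 * torch.exp(log_scale) + shift    # transform one half
        log_det = log_scale.sum(dim=1)            # exact Jacobian log-det
        return torch.cat([t1, z2], dim=1), log_det

# Summary network: raw data (dim 100) -> maximally informative statistics.
summary_net = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 4))
flow = CouplingBlock()

def nll(theta, x):
    # -log q(theta | x) under a standard-normal base, up to a constant.
    z, log_det = flow(theta, summary_net(x))
    return (0.5 * (z ** 2).sum(dim=1) - log_det).mean()

# Training sketch: for simulated pairs (theta, x), minimize nll(theta, x);
# at inference time, invert the flow to sample theta ~ q(theta | x_obs).
```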
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.