Extending the statistical software package Engine for Likelihood-Free
Inference
- URL: http://arxiv.org/abs/2011.03977v1
- Date: Sun, 8 Nov 2020 13:22:37 GMT
- Title: Extending the statistical software package Engine for Likelihood-Free
Inference
- Authors: Vasileios Gkolemis, Michael Gutmann
- Abstract summary: This dissertation focuses on the implementation of the Robust Optimisation Monte Carlo (ROMC) method in the software package Engine for Likelihood-Free Inference (ELFI).
Our implementation provides a robust and efficient solution to a practitioner who wants to perform inference on a simulator-based model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian inference is a principled framework for dealing with uncertainty.
The practitioner can make an initial assumption about the physical phenomenon
they want to model (prior belief), collect some data, and then adjust that
assumption in the light of the new evidence (posterior belief).
Approximate Bayesian Computation (ABC) methods, also known as likelihood-free
inference techniques, are a class of methods for performing inference when
the likelihood is intractable. The only requirement of these methods is a
black-box sampling machine (a simulator). Because of the modelling freedom they
provide, these approaches are particularly appealing. Robust Optimisation Monte
Carlo (ROMC) is one of the most recent techniques in this domain. It approximates
the posterior distribution by solving independent optimisation problems. This
dissertation focuses on the implementation of the ROMC method in the software
package Engine for Likelihood-Free Inference (ELFI). In the first chapters, we
provide the mathematical formulation and the algorithmic description of the
ROMC approach. In the following chapters, we describe our implementation: (a)
we present all the functionalities provided to the user, and (b) we demonstrate
how to perform inference on some real examples. Our implementation provides a
robust and efficient solution for practitioners who want to perform inference
on a simulator-based model. Furthermore, it exploits parallel processing to
accelerate inference wherever possible. Finally, it has been designed for
extensibility: the user can easily replace specific subparts of the method
without significant development overhead. It can therefore also be used by
researchers for further experimentation.
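The core ROMC idea summarised above, approximating the posterior by solving independent optimisation problems with the simulator's randomness frozen per problem, can be illustrated with a minimal, self-contained sketch. The toy simulator, quadratic distance, and acceptance threshold below are illustrative assumptions for a one-parameter model; this is not ELFI's actual API:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y_obs = 1.0  # observed summary statistic (toy value)

def simulator(theta, noise):
    # toy black-box simulator: output = parameter + frozen noise draw
    return theta + noise

# ROMC-style scheme: each optimisation problem fixes one realisation of
# the simulator's randomness, then minimises the distance to the data.
n_problems, eps = 100, 0.2
accepted = []
for _ in range(n_problems):
    noise = rng.normal(0.0, 0.5)           # freeze the nuisance variables
    dist = lambda t: (simulator(t[0], noise) - y_obs) ** 2
    res = minimize(dist, x0=[0.0])          # solve one deterministic problem
    if res.fun < eps ** 2:                  # keep solutions matching the data
        accepted.append(res.x[0])

# accepted solutions form (unweighted, in this sketch) posterior samples
posterior_mean = np.mean(accepted)
```

Because each optimisation problem is deterministic and independent, the loop is trivially parallelisable, which is the property the ELFI implementation exploits. The full ROMC method additionally weights the accepted regions by the prior, which this sketch omits.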
Related papers
- Source-Free Unsupervised Domain Adaptation with Hypothesis Consolidation
of Prediction Rationale [53.152460508207184]
Source-Free Unsupervised Domain Adaptation (SFUDA) is a challenging task where a model needs to be adapted to a new domain without access to target domain labels or source domain data.
This paper proposes a novel approach that considers multiple prediction hypotheses for each sample and investigates the rationale behind each hypothesis.
To achieve the optimal performance, we propose a three-step adaptation process: model pre-adaptation, hypothesis consolidation, and semi-supervised learning.
arXiv Detail & Related papers (2024-02-02T05:53:22Z) - Calibrating Neural Simulation-Based Inference with Differentiable
Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z) - Gaussian Process Probes (GPP) for Uncertainty-Aware Probing [61.91898698128994]
We introduce a unified and simple framework for probing and measuring uncertainty about concepts represented by models.
Our experiments show it can (1) probe a model's representations of concepts even with a very small number of examples, (2) accurately measure both epistemic uncertainty (how confident the probe is) and aleatory uncertainty (how fuzzy the concepts are to the model), and (3) detect out of distribution data using those uncertainty measures as well as classic methods do.
arXiv Detail & Related papers (2023-05-29T17:00:16Z) - Exact Bayesian Inference on Discrete Models via Probability Generating
Functions: A Probabilistic Programming Approach [7.059472280274009]
We present an exact Bayesian inference method for discrete statistical models.
We use a probabilistic programming language that supports discrete and continuous sampling, discrete observations, affine functions, (stochastic) branching, and conditioning on discrete events.
Our inference method is provably correct and fully automated.
arXiv Detail & Related papers (2023-05-26T16:09:59Z) - Canary in a Coalmine: Better Membership Inference with Ensembled
Adversarial Queries [53.222218035435006]
We use adversarial tools to optimize for queries that are discriminative and diverse.
Our improvements achieve significantly more accurate membership inference than existing methods.
arXiv Detail & Related papers (2022-10-19T17:46:50Z) - MACE: An Efficient Model-Agnostic Framework for Counterfactual
Explanation [132.77005365032468]
We propose a novel framework for Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z) - Approximate Bayesian inference from noisy likelihoods with Gaussian
process emulated MCMC [0.24275655667345403]
We model the log-likelihood function using a Gaussian process (GP)
The main methodological innovation is to apply this model to emulate the progression that an exact Metropolis-Hastings (MH) sampler would take.
The resulting approximate sampler is conceptually simple and sample-efficient.
arXiv Detail & Related papers (2021-04-08T17:38:02Z) - Bayesian Inference Forgetting [82.6681466124663]
The right to be forgotten has been legislated in many countries, but enforcing it in machine learning would incur unbearable costs.
This paper proposes a Bayesian inference forgetting (BIF) framework to realize the right to be forgotten in Bayesian inference.
arXiv Detail & Related papers (2021-01-16T09:52:51Z) - The FMRIB Variational Bayesian Inference Tutorial II: Stochastic
Variational Bayes [1.827510863075184]
This tutorial revisits the original FMRIB Variational Bayes tutorial.
This new approach bears a lot of similarity to, and has benefited from, computational methods applied to machine learning algorithms.
arXiv Detail & Related papers (2020-07-03T11:31:52Z) - A Practical Introduction to Bayesian Estimation of Causal Effects:
Parametric and Nonparametric Approaches [0.0]
We provide an introduction to Bayesian inference for causal effects for practicing statisticians.
We demonstrate how priors can induce shrinkage and sparsity on parametric models.
We consider inference in both the point-treatment and time-varying treatment settings.
arXiv Detail & Related papers (2020-04-15T22:32:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.