Extending the statistical software package Engine for Likelihood-Free
Inference
- URL: http://arxiv.org/abs/2011.03977v1
- Date: Sun, 8 Nov 2020 13:22:37 GMT
- Authors: Vasileios Gkolemis, Michael Gutmann
- Abstract summary: This dissertation focuses on the implementation of the Robust optimisation Monte Carlo (ROMC) method in the software package Engine for Likelihood-Free Inference (ELFI)
Our implementation provides a robust and efficient solution to a practitioner who wants to perform inference on a simulator-based model.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian inference is a principled framework for dealing with uncertainty.
The practitioner states an initial assumption about the physical phenomenon
they want to model (the prior belief), collects some data, and then adjusts the
initial assumption in light of the new evidence (the posterior belief).
Approximate Bayesian Computation (ABC) methods, also known as likelihood-free
inference techniques, are a class of methods for performing inference when the
likelihood is intractable. Their only requirement is a black-box sampling
machine, and the modelling freedom this provides makes these approaches
particularly appealing. Robust Optimisation Monte Carlo (ROMC) is one of the
most recent techniques in this domain. It approximates the posterior
distribution by solving independent optimisation problems. This dissertation
focuses on the implementation of the ROMC method in the software package Engine
for Likelihood-Free Inference (ELFI). The first chapters provide the
mathematical formulation and the algorithmic description of the ROMC approach.
The following chapters describe our implementation: (a) we present all the
functionalities provided to the user, and (b) we demonstrate how to perform
inference on some real examples. Our implementation provides a robust and
efficient solution for practitioners who want to perform inference on a
simulator-based model. Furthermore, it exploits parallel processing to
accelerate inference wherever possible. Finally, it has been designed for
extensibility: the user can easily replace specific subparts of the method
without significant development overhead, so researchers can also use it for
further experimentation.
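The core ROMC idea described above (fix the simulator's random seed to obtain a deterministic function of the parameters, solve one optimisation problem per seed to locate a region of high posterior mass, then sample within the resulting acceptance regions and weight by the prior) can be sketched in plain Python. This is a conceptual toy with a made-up one-parameter simulator, threshold, and weighting scheme, not the ELFI API or the dissertation's actual implementation:

```python
# Conceptual sketch of Robust Optimisation Monte Carlo (ROMC).
# The simulator, prior, epsilon, and proposal below are illustrative
# assumptions, not the ELFI implementation of the method.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

y_obs = 1.0          # observed summary statistic
eps = 0.2            # acceptance threshold around each optimum
prior = norm(0, 2)   # prior over the scalar parameter theta

samples, weights = [], []
for seed in range(200):
    # Freezing the simulator noise for this seed turns the black-box
    # sampling machine into a deterministic function f_i(theta).
    u = np.random.default_rng(seed).normal(0.0, 0.5)
    f_i = lambda th: th + u

    # Independent optimisation problem i: minimise the (squared)
    # distance between the deterministic simulator output and y_obs.
    res = minimize(lambda th: (f_i(th[0]) - y_obs) ** 2, x0=[0.0])
    theta_star = res.x[0]
    if abs(f_i(theta_star) - y_obs) > eps:
        continue  # this optimisation problem found no acceptance region

    # Propose a point inside the acceptance region around theta_star
    # and keep it (prior-weighted) if it still matches the data.
    th = theta_star + np.random.default_rng(1000 + seed).uniform(-eps, eps)
    if abs(f_i(th) - y_obs) <= eps:
        samples.append(th)
        weights.append(prior.pdf(th))

weights = np.array(weights) / np.sum(weights)
post_mean = float(np.dot(weights, samples))  # weighted posterior estimate
```

The weighted samples approximate the ABC posterior; here, with a simulator `theta + noise` and `y_obs = 1.0`, the posterior mean lands near 1. In the real method the optimisation problems are independent, which is what makes the parallel processing mentioned above possible.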
Related papers
- Model-free Methods for Event History Analysis and Efficient Adjustment (PhD Thesis) [55.2480439325792]
This thesis is a series of independent contributions to statistics unified by a model-free perspective.
The first chapter elaborates on how a model-free perspective can be used to formulate flexible methods that leverage prediction techniques from machine learning.
The second chapter studies the concept of local independence, which describes whether the evolution of one process is directly influenced by another.
arXiv Detail & Related papers (2025-02-11T19:24:09Z) - Information-theoretic Bayesian Optimization: Survey and Tutorial [2.3931689873603603]
This paper surveys information-theoretic acquisition functions, which typically outperform other acquisition functions.
We also cover how information-theoretic acquisition functions can be adapted to complex optimization scenarios such as multi-objective, constrained, non-myopic, multi-fidelity, parallel, and asynchronous settings.
arXiv Detail & Related papers (2025-01-22T10:54:15Z) - Inference-Time Alignment in Diffusion Models with Reward-Guided Generation: Tutorial and Review [59.856222854472605]
This tutorial provides an in-depth guide on inference-time guidance and alignment methods for optimizing downstream reward functions in diffusion models.
Practical applications in fields such as biology often require sample generation that maximizes specific metrics.
We discuss (1) fine-tuning methods combined with inference-time techniques, (2) inference-time algorithms based on search algorithms such as Monte Carlo tree search, and (3) connections between inference-time algorithms in language models and diffusion models.
arXiv Detail & Related papers (2025-01-16T17:37:35Z) - Calibrating Neural Simulation-Based Inference with Differentiable
Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z) - Gaussian Process Probes (GPP) for Uncertainty-Aware Probing [61.91898698128994]
We introduce a unified and simple framework for probing and measuring uncertainty about concepts represented by models.
Our experiments show it can (1) probe a model's representations of concepts even with a very small number of examples, (2) accurately measure both epistemic uncertainty (how confident the probe is) and aleatory uncertainty (how fuzzy the concepts are to the model), and (3) detect out of distribution data using those uncertainty measures as well as classic methods do.
arXiv Detail & Related papers (2023-05-29T17:00:16Z) - Exact Bayesian Inference on Discrete Models via Probability Generating
Functions: A Probabilistic Programming Approach [7.059472280274009]
We present an exact Bayesian inference method for discrete statistical models.
We use a probabilistic programming language that supports discrete and continuous sampling, discrete observations, affine functions, (stochastic) branching, and conditioning on discrete events.
Our inference method is provably correct and fully automated.
arXiv Detail & Related papers (2023-05-26T16:09:59Z) - MACE: An Efficient Model-Agnostic Framework for Counterfactual
Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE)
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z) - Bayesian Inference Forgetting [82.6681466124663]
The right to be forgotten has been legislated in many countries but the enforcement in machine learning would cause unbearable costs.
This paper proposes a Bayesian inference forgetting (BIF) framework to realize the right to be forgotten in Bayesian inference.
arXiv Detail & Related papers (2021-01-16T09:52:51Z) - The FMRIB Variational Bayesian Inference Tutorial II: Stochastic
Variational Bayes [1.827510863075184]
This tutorial revisits the original FMRIB Variational Bayes tutorial.
This new approach bears a lot of similarity to, and has benefited from, computational methods applied to machine learning algorithms.
arXiv Detail & Related papers (2020-07-03T11:31:52Z) - Elements of Sequential Monte Carlo [21.1067925312595]
A core problem in statistics and machine learning is to compute probability distributions and expectations.
A key challenge is to approximate these intractable expectations.
Sequential Monte Carlo (SMC) is a random-sampling-based class of methods for approximate inference.
arXiv Detail & Related papers (2019-03-12T09:28:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.