Approximate inference of marginals using the IBIA framework
- URL: http://arxiv.org/abs/2306.00335v2
- Date: Sat, 28 Oct 2023 11:36:41 GMT
- Title: Approximate inference of marginals using the IBIA framework
- Authors: Shivani Bathla, Vinita Vasudevan
- Abstract summary: Exact inference of marginals in probabilistic graphical models (PGM) is known to be intractable.
We propose a new algorithm for marginal inference that is based on the incremental build-infer-approximate (IBIA) paradigm.
Our method gives accuracy that is better than or comparable to that of existing variational and sampling-based methods, with smaller runtimes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Exact inference of marginals in probabilistic graphical models (PGM) is known
to be intractable, necessitating the use of approximate methods. Most of the
existing variational techniques perform iterative message passing in loopy
graphs, which is slow to converge for many benchmarks. In this paper, we propose
a new algorithm for marginal inference that is based on the incremental
build-infer-approximate (IBIA) paradigm. Our algorithm converts the PGM into a
sequence of linked clique tree forests (SLCTF) with bounded clique sizes, and
then uses a heuristic belief update algorithm to infer the marginals. For the
special case of Bayesian networks, we show that if the incremental build step
in IBIA uses the topological order of variables then (a) the prior marginals
are consistent in all CTFs in the SLCTF and (b) the posterior marginals are
consistent once all evidence variables are added to the SLCTF. In our approach,
the belief propagation step is non-iterative and the accuracy-complexity
trade-off is controlled using user-defined clique size bounds. Results for
several benchmark sets from recent UAI competitions show that our method gives
accuracy that is better than or comparable to that of existing variational and
sampling-based methods, with smaller runtimes.
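To make the control flow concrete, the following is a minimal runnable Python toy in the spirit of the incremental build step (my own simplification, not the authors' implementation): factors of a small chain Bayesian network are absorbed in topological order, and whenever the working scope exceeds a user-defined bound, a variable is summed out so construction can continue. The paper's actual data structure is a sequence of linked clique tree forests with calibrated beliefs, not a single joint table.

```python
# Minimal sketch: incremental absorption of CPDs with a bounded working scope.
# Factors are dicts mapping assignment tuples (over a sorted variable scope)
# to probabilities; all variables are binary.
import itertools

def product(f1, scope1, f2, scope2):
    """Pointwise product of two factors over the union of their scopes."""
    scope = tuple(sorted(set(scope1) | set(scope2)))
    out = {}
    for assign in itertools.product([0, 1], repeat=len(scope)):
        a = dict(zip(scope, assign))
        out[assign] = (f1[tuple(a[x] for x in scope1)]
                       * f2[tuple(a[x] for x in scope2)])
    return out, scope

def marginalize(f, scope, var):
    """Sum one variable out of a factor."""
    new_scope = tuple(x for x in scope if x != var)
    out = {}
    for assign, v in f.items():
        key = tuple(val for x, val in zip(scope, assign) if x != var)
        out[key] = out.get(key, 0.0) + v
    return out, new_scope

# CPDs of the chain A -> B -> C in topological order: P(A), P(B|A), P(C|B).
cpds = [
    ({(0,): 0.6, (1,): 0.4}, ('A',)),
    ({(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}, ('A', 'B')),
    ({(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.5, (1, 1): 0.5}, ('B', 'C')),
]

MAX_SCOPE = 2   # stand-in for IBIA's user-defined clique size bound
belief, scope = cpds[0]
for cpd, cpd_scope in cpds[1:]:
    belief, scope = product(belief, scope, cpd, cpd_scope)   # incremental build
    while len(scope) > MAX_SCOPE:
        # "Approximate" step, reduced here to exact summation of a variable
        # no later CPD needs; IBIA instead shrinks calibrated clique trees.
        belief, scope = marginalize(belief, scope, scope[0])

while len(scope) > 1:                  # read off the marginal of C
    belief, scope = marginalize(belief, scope, scope[0])
print("P(C):", belief)                 # {(0,): 0.624, (1,): 0.376}
```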
Related papers
- Batch, match, and patch: low-rank approximations for score-based variational inference [8.840147522046651]
Black-box variational inference scales poorly to high-dimensional problems.
We extend the batch-and-match framework for score-based BBVI.
We evaluate this approach on a variety of synthetic target distributions and real-world problems in high-dimensional inference.
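As a rough illustration of the score-based idea (my own toy, not the paper's batch-and-match updates or their low-rank extension), the sketch below fits a Gaussian approximation by regressing the target's score onto a linear function of x; when the target is itself Gaussian, this recovers the mean and precision exactly.

```python
# Fit q(x) = N(mu, Sigma) by matching scores: for a Gaussian q, the score is
# grad log q(x) = -P x + c with P = Sigma^{-1} and c = P mu, so a linear
# least-squares fit to target scores at probe points identifies (P, c).
import numpy as np

rng = np.random.default_rng(0)
d = 2
A = np.array([[2.0, 0.6], [0.6, 1.0]])       # target precision matrix
m = np.array([1.0, -1.0])                    # target mean
target_score = lambda X: -(X - m) @ A.T      # score of N(m, A^{-1})

X = rng.normal(0.0, 2.0, size=(500, d))      # probe points
S = target_score(X)

design = np.hstack([-X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(design, S, rcond=None)
P, c = coef[:d].T, coef[d]
print("recovered precision:\n", P)                # ~= A
print("recovered mean:", np.linalg.solve(P, c))   # ~= m
```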
arXiv Detail & Related papers (2024-10-29T17:42:56Z)
- Maximum a Posteriori Inference for Factor Graphs via Benders' Decomposition [0.38233569758620056]
We present a method for maximum a posteriori (MAP) inference in general Bayesian factor models.
We derive MAP estimation algorithms for the Bayesian Gaussian mixture model and latent Dirichlet allocation.
arXiv Detail & Related papers (2024-10-24T19:57:56Z)
- IBIA: An Incremental Build-Infer-Approximate Framework for Approximate Inference of Partition Function [0.0]
Exact computation of the partition function is known to be intractable.
We propose a novel incremental build-infer-approximate framework for approximate inference.
We show that the framework can be used to efficiently compute the partition function.
arXiv Detail & Related papers (2023-04-13T09:40:23Z)
- Bayesian Pseudo-Coresets via Contrastive Divergence [5.479797073162603]
We introduce a novel approach for constructing pseudo-coresets by utilizing contrastive divergence.
It eliminates the need for approximations in the pseudo-coreset construction process.
We conduct extensive experiments on multiple datasets, demonstrating its superiority over existing Bayesian pseudo-coreset (BPC) techniques.
arXiv Detail & Related papers (2023-03-20T17:13:50Z)
- Sharp Variance-Dependent Bounds in Reinforcement Learning: Best of Both Worlds in Stochastic and Deterministic Environments [48.96971760679639]
We study variance-dependent regret bounds for Markov decision processes (MDPs).
We propose two new environment norms to characterize the fine-grained variance properties of the environment.
For model-based methods, we design a variant of the MVP algorithm.
In particular, this bound is simultaneously minimax optimal for both stochastic and deterministic MDPs.
arXiv Detail & Related papers (2023-01-31T06:54:06Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
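For context, a minimal runnable sketch of vanilla AIS with a fixed geometric bridging path is shown below (my own toy; the paper's learned parametric bridging distributions are not shown). It estimates the log ratio of normalizers between a unit Gaussian and a narrower shifted Gaussian, whose true value is log 0.5 ≈ -0.69.

```python
# Vanilla annealed importance sampling on a 1-D toy: anneal from an
# unnormalized N(0, 1) to an unnormalized N(2, 0.25) along a geometric path.
import numpy as np

rng = np.random.default_rng(0)
log_f0 = lambda x: -0.5 * x**2                     # unnormalized N(0, 1)
log_f1 = lambda x: -0.5 * ((x - 2.0) / 0.5)**2     # unnormalized N(2, 0.25)
log_fb = lambda x, b: (1 - b) * log_f0(x) + b * log_f1(x)

n_chains, n_steps = 2000, 50
betas = np.linspace(0.0, 1.0, n_steps + 1)
x = rng.normal(0.0, 1.0, size=n_chains)            # exact samples from the start
log_w = np.zeros(n_chains)
for b0, b1 in zip(betas[:-1], betas[1:]):
    log_w += log_fb(x, b1) - log_fb(x, b0)         # accumulate importance weights
    prop = x + 0.5 * rng.normal(size=n_chains)     # one Metropolis step per bridge
    accept = np.log(rng.random(n_chains)) < log_fb(prop, b1) - log_fb(x, b1)
    x = np.where(accept, prop, x)

# Estimate of log(Z1 / Z0); true value is log 0.5 ~= -0.693.
print("log Z ratio estimate:", np.log(np.mean(np.exp(log_w))))
```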
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are made through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T \sqrt{T})$.
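To illustrate the variant described (assumptions mine: a bare-bones RBF-kernel GP and a toy 1-D objective), the sketch below computes EI against an incumbent defined as the best GP predictive mean on a candidate grid, rather than the best observed value.

```python
# Expected improvement with a GP-predictive-mean incumbent on a 1-D toy.
import numpy as np
from scipy.stats import norm

def gp_posterior(X, y, Xs, noise=1e-6, ls=0.2):
    """Posterior mean/std of a zero-mean GP with an RBF kernel."""
    k = lambda A, B: np.exp(-0.5 * ((A[:, None] - B[None, :]) / ls) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    var = 1.0 - (v ** 2).sum(axis=0)        # k(x, x) = 1 for this kernel
    return Ks.T @ alpha, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, std, incumbent):
    """EI for maximization, measured against the given incumbent value."""
    z = (mu - incumbent) / std
    return (mu - incumbent) * norm.cdf(z) + std * norm.pdf(z)

f = lambda x: -(x - 0.3) ** 2               # toy objective on [0, 1]
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=3)
y = f(X)
grid = np.linspace(0, 1, 200)
for _ in range(5):
    mu, std = gp_posterior(X, y, grid)
    incumbent = mu.max()                    # GP-mean incumbent, not best y
    x_next = grid[np.argmax(expected_improvement(mu, std, incumbent))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
print("best x found:", X[np.argmax(y)])     # should approach 0.3
```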
arXiv Detail & Related papers (2022-03-15T13:17:53Z)
- Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
- Bayesian decision-making under misspecified priors with applications to meta-learning [64.38020203019013]
Thompson sampling and other sequential decision-making algorithms are popular approaches to tackle explore/exploit trade-offs in contextual bandits.
We show that performance degrades gracefully with misspecified priors.
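A tiny runnable illustration of the setting (my own toy, not the paper's analysis): Thompson sampling on a two-armed Bernoulli bandit whose Beta prior is deliberately misspecified, with average reward still approaching the best arm's mean.

```python
# Thompson sampling on a 2-armed Bernoulli bandit with a misspecified prior:
# true means are near 0.8, but the agent's Beta(1, 4) prior expects ~0.2.
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.75, 0.85])
alpha = np.full(2, 1.0)                     # misspecified prior Beta(1, 4)
beta = np.full(2, 4.0)
total = 0.0
for t in range(2000):
    theta = rng.beta(alpha, beta)           # sample one plausible model
    arm = int(np.argmax(theta))             # act greedily under the sample
    r = float(rng.random() < true_means[arm])   # Bernoulli reward
    alpha[arm] += r                         # conjugate posterior update
    beta[arm] += 1.0 - r
    total += r
print("average reward:", total / 2000)      # approaches max(true_means)
```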
arXiv Detail & Related papers (2021-07-03T23:17:26Z)
- Laplace Matching for fast Approximate Inference in Generalized Linear Models [27.70274403550477]
We propose an approximate inference framework primarily designed to be computationally cheap while still achieving high approximation quality.
The concept, which we call *Laplace Matching*, involves closed-form, approximate, bi-directional transformations between the parameter spaces of exponential families.
This effectively turns inference in GLMs into conjugate inference (with small approximation errors).
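To make the idea tangible, here is a small numeric sketch (my own construction; the paper derives closed-form transformations): a Beta distribution is mapped to the log-odds basis, where a Gaussian obtained from a Laplace approximation matches it closely, as a Monte Carlo check confirms.

```python
# Laplace-matching flavor: approximate Beta(a, b) by a Gaussian in the
# log-odds basis x = logit(pi), where the density is nearly Gaussian.
import numpy as np

a, b = 60.0, 20.0                            # a fairly concentrated Beta
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def log_p(x):
    """Log-density of x = logit(pi), pi ~ Beta(a, b), up to a constant.
    The Jacobian d(pi)/dx = s(1-s) shifts the exponents from a-1, b-1 to a, b."""
    s = sigmoid(x)
    return a * np.log(s) + b * np.log(1.0 - s)

grid = np.linspace(-6, 6, 20001)             # numeric mode and curvature
mode = grid[np.argmax(log_p(grid))]
h = 1e-4
curv = (log_p(mode + h) - 2 * log_p(mode) + log_p(mode - h)) / h**2
print("Laplace fit (mean, var):", mode, -1.0 / curv)

pi = np.random.default_rng(0).beta(a, b, 200000)    # Monte Carlo check
x = np.log(pi / (1.0 - pi))
print("Monte Carlo (mean, var):", x.mean(), x.var())
```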
arXiv Detail & Related papers (2021-05-07T08:25:17Z)