IBIA: An Incremental Build-Infer-Approximate Framework for Approximate
Inference of Partition Function
- URL: http://arxiv.org/abs/2304.06366v2
- Date: Thu, 28 Sep 2023 11:07:49 GMT
- Authors: Shivani Bathla and Vinita Vasudevan
- Abstract summary: Exact computation of the partition function is known to be intractable.
We propose a novel incremental build-infer-approximate framework for approximate inference.
We show that the framework can be used to efficiently compute the partition function.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Exact computation of the partition function is known to be intractable,
necessitating approximate inference techniques. Existing methods for
approximate inference are slow to converge on many benchmarks. Controlling the
accuracy-complexity trade-off is also non-trivial in many of these methods. We
propose a novel incremental build-infer-approximate (IBIA) framework for
approximate inference that addresses these issues. In this framework, the
probabilistic graphical model is converted into a sequence of clique tree
forests (SCTF) with bounded clique sizes. We show that the SCTF can be used to
efficiently compute the partition function. We propose two new algorithms for
constructing the SCTF and prove the correctness of both. The first is an
algorithm for incremental construction of CTFs that is guaranteed to give a
valid CTF with bounded clique sizes; the second is an approximation
algorithm that takes a calibrated CTF as input and yields a valid and
calibrated CTF with reduced clique sizes as the output. We have evaluated our
method using several benchmark sets from recent UAI competitions and our
results show good accuracies with competitive runtimes.
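The build-infer-approximate loop described above can be illustrated with a toy sketch. The following is not the paper's algorithm: it incrementally absorbs the factors of a small chain MRF (build) and sums out each variable as soon as it leaves the frontier (infer), which keeps the intermediate tables small, the role played by bounded clique sizes in the CTFs. On a chain this is exact; IBIA additionally approximates (reduces clique sizes) when the size bound would otherwise be exceeded. All names here (`brute_force_Z`, `incremental_Z`) are illustrative.

```python
import itertools
import math

# Toy pairwise chain MRF: variables x0..x3 in {0,1}, factors phi_i(x_i, x_{i+1}).
# Each factor rewards agreement between neighbouring variables.
factors = [
    {(a, b): math.exp(0.5 * (1 if a == b else -1)) for a in (0, 1) for b in (0, 1)}
    for _ in range(3)
]

def brute_force_Z(factors):
    """Exact partition function by enumerating all joint assignments."""
    n = len(factors) + 1
    total = 0.0
    for x in itertools.product((0, 1), repeat=n):
        p = 1.0
        for i, f in enumerate(factors):
            p *= f[(x[i], x[i + 1])]
        total += p
    return total

def incremental_Z(factors):
    """Build: absorb one factor at a time.
    Infer: sum out the variable that no longer appears in later factors,
    so the intermediate table never exceeds one variable (the 'clique' bound)."""
    # message[a] = sum over all eliminated variables, with frontier variable = a
    message = {0: 1.0, 1: 1.0}
    for f in factors:
        new_message = {b: 0.0 for b in (0, 1)}
        for a in (0, 1):
            for b in (0, 1):
                new_message[b] += message[a] * f[(a, b)]
        message = new_message  # frontier advances; the old variable is summed out
    return sum(message.values())
```

For tree-structured models the incremental estimate matches exhaustive enumeration exactly; the interesting case handled by the IBIA framework is loopy models, where the exact clique sizes grow and the approximation step becomes necessary.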
Related papers
- Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z)
- Amortized SHAP values via sparse Fourier function approximation [38.818224762845624]
SHAP values are a popular local feature-attribution method widely used in interpretable and explainable AI.
We propose a two-stage approach for estimating SHAP values.
Our algorithm's first step harnesses recent results showing that many real-world predictors have a spectral bias.
arXiv Detail & Related papers (2024-10-08T19:05:50Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously allow for inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Approximate inference of marginals using the IBIA framework [0.0]
Exact inference of marginals in probabilistic graphical models (PGMs) is known to be intractable.
We propose a new algorithm for marginal inference based on the incremental build-infer-approximate (IBIA) paradigm.
Our method achieves accuracy that is better than or comparable to existing variational and sampling-based methods, with smaller runtimes.
arXiv Detail & Related papers (2023-06-01T04:24:21Z)
- Efficient Approximate Kernel Based Spike Sequence Classification [56.2938724367661]
Machine learning models, such as SVM, require a definition of distance/similarity between pairs of sequences.
Exact methods yield better classification performance, but they incur high computational costs.
We propose a series of ways to improve the performance of the approximate kernel in order to enhance its predictive performance.
arXiv Detail & Related papers (2022-09-11T22:44:19Z)
- MPE inference using an Incremental Build-Infer-Approximate Paradigm [0.0]
Exact inference of the most probable explanation (MPE) in Bayesian networks is known to be NP-complete.
We propose an algorithm for approximate MPE inference that is based on the incremental build-infer-approximate framework.
The accuracy of our solution is comparable to a branch and bound search in the majority of benchmarks, with competitive run times.
arXiv Detail & Related papers (2022-06-04T09:37:44Z)
- IBIA: Bayesian Inference via Incremental Build-Infer-Approximate operations on Clique Trees [0.0]
We propose an alternative approach for approximate inference based on an incremental build-infer-approximate paradigm.
We show that our algorithm for incremental construction of clique trees always generates a valid CT and our approximation technique automatically maintains consistency of within-clique beliefs.
arXiv Detail & Related papers (2022-02-24T10:30:31Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Fast and Accurate Neural CRF Constituency Parsing [16.90190521285297]
This work presents a fast and accurate neural CRF constituency parser.
We batchify the inside algorithm for loss computation via direct large tensor operations on GPU, and avoid the outside algorithm for gradient computation via efficient back-propagation.
Experiments on PTB, CTB5.1, and CTB7 show that our two-stage CRF achieves new state-of-the-art performance on both settings of w/o and w/ BERT.
arXiv Detail & Related papers (2020-08-09T14:38:48Z)
- Refined bounds for algorithm configuration: The knife-edge of dual class approximability [94.83809668933021]
We investigate how large a training set should be to ensure that a parameter's average performance over the training set is close to its expected future performance.
We show that if this approximation holds under the L-infinity norm, we can provide strong sample complexity bounds.
We empirically evaluate our bounds in the context of integer programming, one of the most powerful tools in computer science.
arXiv Detail & Related papers (2020-06-21T15:32:21Z)
- Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.