MPE inference using an Incremental Build-Infer-Approximate Paradigm
- URL: http://arxiv.org/abs/2206.01954v1
- Date: Sat, 4 Jun 2022 09:37:44 GMT
- Title: MPE inference using an Incremental Build-Infer-Approximate Paradigm
- Authors: Shivani Bathla and Vinita Vasudevan
- Abstract summary: Exact inference of the most probable explanation (MPE) in Bayesian networks is known to be NP-complete.
We propose an algorithm for approximate MPE inference that is based on the incremental build-infer-approximate framework.
The accuracy of our solution is comparable to that of a branch-and-bound search in the majority of the benchmarks, with competitive run times.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Exact inference of the most probable explanation (MPE) in Bayesian networks
is known to be NP-complete. In this paper, we propose an algorithm for
approximate MPE inference that is based on the incremental
build-infer-approximate (IBIA) framework. We use this framework to obtain an
ordered set of partitions of the Bayesian network and the corresponding
max-calibrated clique trees. We show that the maximum belief in the last
partition gives an estimate of the probability of the MPE assignment. We
propose an iterative algorithm for decoding, in which the subset of variables
for which an assignment is obtained is guaranteed to increase in every
iteration. There are no issues of convergence, and we do not perform a search
for solutions. Even though it is a single-shot algorithm, we obtain valid
assignments in 100 out of the 117 benchmarks used for testing. The accuracy of
our solution is comparable to that of a branch-and-bound search in the
majority of the benchmarks, with competitive run times.
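For intuition about how a max-calibrated structure yields the MPE probability, here is a minimal max-product elimination sketch on a toy three-node chain. The network, CPTs, and variable names are illustrative assumptions only; this is not the paper's IBIA algorithm, which operates incrementally on clique trees built over partitions of the network.

```python
import itertools

# Toy Bayesian network A -> B -> C, all variables binary (illustrative CPTs).
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}  # key: (b, a)
p_c_given_b = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.5, (1, 1): 0.5}  # key: (c, b)

def joint(a, b, c):
    """Joint probability P(a, b, c) under the chain factorization."""
    return p_a[a] * p_b_given_a[(b, a)] * p_c_given_b[(c, b)]

# Max-product elimination with back-pointers: eliminate C, then B, then A.
# For each b, the best value of C and its contribution max_c P(c|b).
best_c = {b: max((p_c_given_b[(c, b)], c) for c in (0, 1)) for b in (0, 1)}
# For each a, the best value of B folding in the best choice of C.
best_b = {a: max((p_b_given_a[(b, a)] * best_c[b][0], b) for b in (0, 1))
          for a in (0, 1)}
# The final maximization gives the probability of the MPE assignment.
mpe_prob, a_star = max((p_a[a] * best_b[a][0], a) for a in (0, 1))
b_star = best_b[a_star][1]
c_star = best_c[b_star][1]

# Brute-force check: max-product value equals the maximum joint probability.
brute = max(itertools.product((0, 1), repeat=3), key=lambda abc: joint(*abc))
assert abs(mpe_prob - joint(*brute)) < 1e-12
print((a_star, b_star, c_star), mpe_prob)  # MPE (0, 0, 0) with probability 0.378
```

The back-pointers recorded during elimination play the same role as the paper's decoding step: once the maximum belief is known, the assignment is recovered by tracing the maximizing choices backwards.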
Related papers
- IBIA: An Incremental Build-Infer-Approximate Framework for Approximate Inference of Partition Function [0.0]
Exact computation of the partition function is known to be intractable.
We propose a novel incremental build-infer-approximate framework for approximate inference.
We show that the framework can be used to efficiently compute the partition function.
arXiv Detail & Related papers (2023-04-13T09:40:23Z)
- Selection of the Most Probable Best [2.1095005405219815]
We consider an expected-value ranking and selection (R&S) problem where all k solutions' simulation outputs depend on a common parameter whose uncertainty can be modeled by a distribution.
We define the most probable best (MPB) to be the solution that has the largest probability of being optimal with respect to the distribution.
We devise a series of algorithms that replace the unknown means in the optimality conditions with their estimates and prove the algorithms' sampling ratios achieve the conditions as the simulation budget increases.
arXiv Detail & Related papers (2022-07-15T15:27:27Z)
- Efficient One Sided Kolmogorov Approximation [7.657378889055477]
The main application that we examine is estimation of the probability of missing deadlines in series-parallel schedules.
Since exact computation of these probabilities is NP-hard, we propose to use the algorithms described in this paper to obtain an approximation.
arXiv Detail & Related papers (2022-07-14T10:03:02Z)
- Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges, and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T \sqrt{T})$.
arXiv Detail & Related papers (2022-03-15T13:17:53Z)
- Parallel Sampling for Efficient High-dimensional Bayesian Network Structure Learning [6.85316573653194]
This paper describes an approximate algorithm that performs parallel sampling on Candidate Parent Sets (CPSs).
The modified algorithm, which we call Parallel Sampling MINOBS (PS-MINOBS), constructs the graph by sampling CPSs for each variable.
arXiv Detail & Related papers (2022-02-19T22:35:59Z)
- Optimal Clustering with Bandit Feedback [57.672609011609886]
This paper considers the problem of online clustering with bandit feedback.
It includes a novel stopping rule for sequential testing that circumvents the need to solve any NP-hard weighted clustering problem as its subroutines.
We show through extensive simulations on synthetic and real-world datasets that BOC's performance matches the lower bound asymptotically, and significantly outperforms a non-adaptive baseline algorithm.
arXiv Detail & Related papers (2022-02-09T06:05:05Z)
- Mean-based Best Arm Identification in Stochastic Bandits under Reward Contamination [80.53485617514707]
This paper proposes two algorithms for best arm identification in sub-Gaussian bandits: a gap-based algorithm and one based on successive elimination.
Specifically, the sample complexity of the gap-based algorithm is optimal up to constant factors, while that of successive elimination is optimal up to logarithmic factors.
arXiv Detail & Related papers (2021-11-14T21:49:58Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Efficient Computation of Expectations under Spanning Tree Distributions [67.71280539312536]
We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models.
Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms.
arXiv Detail & Related papers (2020-08-29T14:58:26Z)
- Best-First Beam Search [78.71330480725668]
We show that the standard implementation of beam search can be made up to 10x faster in practice.
We propose a memory-reduced variant of Best-First Beam Search, which has a similar beneficial search bias in terms of downstream performance.
arXiv Detail & Related papers (2020-07-08T05:56:01Z)
- Certifying clusters from sum-of-norms clustering [13.747619681451875]
We present a clustering test that identifies and certifies the correct cluster assignment from an approximate solution.
We show the correct cluster assignment is guaranteed to be certified by a primal-dual path following algorithm after sufficiently many iterations.
arXiv Detail & Related papers (2020-06-19T20:26:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.