$\alpha$ Belief Propagation for Approximate Inference
- URL: http://arxiv.org/abs/2006.15363v1
- Date: Sat, 27 Jun 2020 13:32:06 GMT
- Title: $\alpha$ Belief Propagation for Approximate Inference
- Authors: Dong Liu, Minh Thành Vu, Zuxing Li, and Lars K. Rasmussen
- Abstract summary: The belief propagation (BP) algorithm is a widely used message-passing method for inference in graphical models.
We derive an interpretable belief propagation algorithm motivated by the minimization of a localized $\alpha$-divergence.
It turns out that $\alpha$-BP generalizes standard BP.
- Score: 16.258304175469917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The belief propagation (BP) algorithm is a widely used message-passing method for
inference in graphical models. BP on loop-free graphs converges in linear time.
But for graphs with loops, BP's performance is uncertain, and the understanding
of its solution is limited. To gain a better understanding of BP in general
graphs, we derive an interpretable belief propagation algorithm that is
motivated by minimization of a localized $\alpha$-divergence. We term this
algorithm $\alpha$ belief propagation ($\alpha$-BP). It turns out that
$\alpha$-BP generalizes standard BP. In addition, this work studies the
convergence properties of $\alpha$-BP and proves conditions under which
$\alpha$-BP converges. Experimental simulations on random graphs validate
our theoretical results. The application of $\alpha$-BP to practical problems
is also demonstrated.
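For intuition about what the abstract describes, here is a minimal, hedged sketch of loopy sum-product BP on a pairwise MRF with an `alpha` knob in the message update; setting `alpha = 1` recovers the standard BP update. One common parameterization of the divergence being localized is $D_\alpha(p\|q) = \frac{1}{\alpha(1-\alpha)}\big(1 - \int p(x)^{\alpha} q(x)^{1-\alpha}\,dx\big)$, which approaches $\mathrm{KL}(p\|q)$ as $\alpha \to 1$. The power-style way `alpha` enters the update below (the pairwise potential raised to $\alpha$, the reverse message damped by $1-\alpha$) is an illustrative assumption, not the paper's exact $\alpha$-BP rule; the function name, data layout, and toy graph are likewise invented for this example.

```python
# Illustrative sketch only (not the paper's exact update): loopy sum-product BP
# on a pairwise MRF with an "alpha" knob in the message update. alpha = 1.0
# reduces to standard BP; other values mimic power-style alpha-divergence
# message passing. Names and data layout are assumptions for this example.
import numpy as np


def loopy_alpha_bp(unary, pairwise, alpha=1.0, iters=100, tol=1e-8):
    """unary: {node: (K,) potential}, pairwise: {(i, j): (K_i, K_j) potential}."""
    nbrs = {n: [] for n in unary}
    msgs = {}
    for (i, j) in pairwise:
        nbrs[i].append(j)
        nbrs[j].append(i)
        msgs[(i, j)] = np.full(len(unary[j]), 1.0 / len(unary[j]))
        msgs[(j, i)] = np.full(len(unary[i]), 1.0 / len(unary[i]))

    for _ in range(iters):
        max_change = 0.0
        for (src, dst) in list(msgs):
            # pairwise potential oriented as psi[x_src, x_dst]
            psi = pairwise[(src, dst)] if (src, dst) in pairwise else pairwise[(dst, src)].T
            # product of unary potential and incoming messages, excluding dst
            h = unary[src].copy()
            for k in nbrs[src]:
                if k != dst:
                    h = h * msgs[(k, src)]
            # alpha enters by powering the pairwise factor and partially
            # reusing the reverse message; alpha = 1 is the plain BP update
            new = (psi ** alpha).T @ (h * msgs[(dst, src)] ** (1.0 - alpha))
            new = new / new.sum()
            max_change = max(max_change, float(np.abs(new - msgs[(src, dst)]).max()))
            msgs[(src, dst)] = new
        if max_change < tol:
            break

    beliefs = {}
    for n in unary:
        b = unary[n].copy()
        for k in nbrs[n]:
            b = b * msgs[(k, n)]
        beliefs[n] = b / b.sum()
    return beliefs


# Tiny 3-node binary cycle (a loopy graph, where BP is only approximate).
unary = {0: np.array([0.6, 0.4]), 1: np.array([0.5, 0.5]), 2: np.array([0.3, 0.7])}
coupling = np.array([[1.2, 0.8], [0.8, 1.2]])
pairwise = {(0, 1): coupling, (1, 2): coupling, (0, 2): coupling}
print(loopy_alpha_bp(unary, pairwise, alpha=0.5))
```

Varying `alpha` on the toy cycle shows how the fixed point of the schematic update moves away from the standard BP beliefs; none of the numbers are from the paper, and the paper's own update and convergence conditions should be consulted for the real algorithm.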
Related papers
- Circular Belief Propagation for Approximate Probabilistic Inference [0.07282584715927627]
Belief Propagation (BP) is a simple probabilistic inference algorithm, consisting of passing messages between nodes of a graph representing a probability distribution.
We propose Circular Belief Propagation (CBP), an extension of BP which limits the detrimental effects of message reverberation caused by cycles by learning to detect and cancel spurious correlations and belief amplifications.
We show in numerical experiments involving binary probabilistic graphs that CBP far outperforms BP and reaches good performance compared to that of previously proposed algorithms.
arXiv Detail & Related papers (2024-03-17T15:59:39Z)
- Learning Broadcast Protocols [1.9336815376402723]
We look for the first time at the problem of learning a distributed system with an arbitrary number of processes.
We consider fine broadcast protocols, i.e., broadcast protocols with a finite cutoff and no hidden states.
We provide a learning algorithm that can infer a correct BP from a sample that is consistent with a fine BP, and a minimal equivalent BP if the sample is sufficiently complete.
arXiv Detail & Related papers (2023-06-25T16:26:48Z)
- Detection of Dense Subhypergraphs by Low-Degree Polynomials [72.4451045270967]
Detection of a planted dense subgraph in a random graph is a fundamental statistical and computational problem.
We consider detecting the presence of a planted $G^{(r)}(n^{\gamma}, n^{-\alpha})$ subhypergraph in a $G^{(r)}(n, n^{-\beta})$ hypergraph.
Our results are already new in the graph case $r=2$, as we consider the subtle log-density regime where hardness based on average-case reductions is not known.
arXiv Detail & Related papers (2023-04-17T10:38:08Z)
- Near Sample-Optimal Reduction-based Policy Learning for Average Reward MDP [58.13930707612128]
This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average reward Markov Decision Process (AMDP).
We prove an upper bound of $\widetilde{O}(H \varepsilon^{-3} \ln \frac{1}{\delta})$ samples per state-action pair, where $H := \mathrm{sp}(h^*)$ is the span of the bias of any optimal policy, $\varepsilon$ is the accuracy, and $\delta$ is the failure probability.
arXiv Detail & Related papers (2022-12-01T15:57:58Z)
- Hardness of Maximum Likelihood Learning of DPPs [25.06251462216571]
We prove Kulesza's conjecture that the problem of finding a maximum likelihood DPP model for a given data set is NP-complete.
In terms of techniques, we reduce approximating the maximum log-likelihood of DPPs on a data set to solving a gap instance of a "vector" problem on a hypergraph.
arXiv Detail & Related papers (2022-05-24T21:46:23Z)
- Towards Painless Policy Optimization for Constrained MDPs [46.12526917024248]
We study policy optimization in an infinite-horizon, $\gamma$-discounted constrained Markov decision process (CMDP).
Our objective is to return a policy that achieves large expected reward with a small constraint violation.
We propose a generic primal-dual framework that allows us to bound the reward sub-optimality and constraint violation for arbitrary algorithms.
arXiv Detail & Related papers (2022-04-11T15:08:09Z)
- A Law of Robustness beyond Isoperimetry [84.33752026418045]
We prove a Lipschitzness lower bound $\Omega(\sqrt{n/p})$ on the robustness of interpolating neural network parameters on arbitrary distributions.
We then show the potential benefit of overparametrization for smooth data when $n=\mathrm{poly}(d)$.
We disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n=\exp(\omega(d))$.
arXiv Detail & Related papers (2022-02-23T16:10:23Z)
- Belief Propagation Neural Networks [103.97004780313105]
We introduce belief propagation neural networks (BPNNs).
BPNNs operate on factor graphs and generalize belief propagation (BP).
We show that BPNNs converge 1.7x faster on Ising models while providing tighter bounds.
On challenging model counting problems, BPNNs compute estimates hundreds of times faster than state-of-the-art handcrafted methods.
arXiv Detail & Related papers (2020-07-01T07:39:51Z)
- Locally Private Hypothesis Selection [96.06118559817057]
We output a distribution from $\mathcal{Q}$ whose total variation distance to $p$ is comparable to that of the best such distribution.
We show that the constraint of local differential privacy incurs an exponential increase in cost.
Our algorithms result in exponential improvements on the round complexity of previous methods.
arXiv Detail & Related papers (2020-02-21T18:30:48Z)
- Fixed-Support Wasserstein Barycenters: Computational Hardness and Fast Algorithm [100.11971836788437]
We study the fixed-support Wasserstein barycenter problem (FS-WBP).
We develop a provably fast deterministic variant of the celebrated iterative Bregman projection (IBP) algorithm, named FastIBP.
arXiv Detail & Related papers (2020-02-12T03:40:52Z)
- Overfitting Can Be Harmless for Basis Pursuit, But Only to a Degree [20.408693108187542]
We study the overfitting solution that minimizes the $\ell_1$-norm, known as Basis Pursuit (BP) in the compressed sensing literature.
We show that for a large range of $p$ up to a limit that grows exponentially with the number of samples $n$, with high probability the model error of BP is upper bounded by a value that decreases with $p$.
arXiv Detail & Related papers (2020-02-02T20:48:39Z)