Reactive Message Passing for Scalable Bayesian Inference
- URL: http://arxiv.org/abs/2112.13251v1
- Date: Sat, 25 Dec 2021 15:38:06 GMT
- Title: Reactive Message Passing for Scalable Bayesian Inference
- Authors: Dmitry Bagaev and Bert de Vries
- Abstract summary: We introduce Reactive Message Passing (RMP) as a framework for executing schedule-free, robust and scalable message passing-based inference.
RMP is based on the reactive programming style that only describes how nodes in a factor graph react to changes in connected nodes.
We also present ReactiveMP.jl, which is a Julia package for realizing RMP through minimization of a constrained Bethe free energy.
- Score: 5.238864474148862
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce Reactive Message Passing (RMP) as a framework for executing
schedule-free, robust and scalable message passing-based inference in a factor
graph representation of a probabilistic model. RMP is based on the reactive
programming style that only describes how nodes in a factor graph react to
changes in connected nodes. The absence of a fixed message passing schedule
improves robustness, scalability and execution time of the inference procedure.
We also present ReactiveMP.jl, which is a Julia package for realizing RMP
through minimization of a constrained Bethe free energy. By user-defined
specification of local form and factorization constraints on the variational
posterior distribution, ReactiveMP.jl executes hybrid message passing
algorithms including belief propagation, variational message passing,
expectation propagation, and expectation maximisation update rules.
Experimental results demonstrate the improved performance of ReactiveMP-based
RMP in comparison to other Julia packages for Bayesian inference across a range
of probabilistic models. In particular, we show that the RMP framework is able
to run Bayesian inference for large-scale probabilistic state space models with
hundreds of thousands of random variables on a standard laptop computer.
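The reactive, schedule-free style described in the abstract can be illustrated with a small sketch. This is not the ReactiveMP.jl API (which is Julia); the chain model, function names, and tolerance below are illustrative assumptions. The idea: messages are only recomputed in reaction to a changed incoming message, and inference terminates when no message changes, with no fixed sweep order.

```python
def normalize(m):
    s = sum(m)
    return [x / s for x in m]

def run_reactive_bp(prior1, prior2, pairwise, tol=1e-9, max_events=100):
    """Schedule-free sum-product on the chain x1 -- f(x1, x2) -- x2.

    Instead of a fixed message schedule, a work queue holds nodes whose
    inbox changed; each popped node recomputes its outgoing message and
    re-enqueues the neighbour only if that message actually changed."""
    n = len(prior1)
    m_f_to_x1 = [1.0 / n] * n          # message factor -> x1 (starts uniform)
    m_f_to_x2 = [1.0 / n] * n          # message factor -> x2 (starts uniform)
    queue = ["x1", "x2"]               # both variables start "dirty"
    while queue and max_events > 0:
        max_events -= 1
        node = queue.pop(0)
        if node == "x1":
            # x1 reacts: its outgoing message feeds the factor's message to x2
            m_x1_to_f = normalize(prior1)
            new = normalize([sum(pairwise[i][j] * m_x1_to_f[i] for i in range(n))
                             for j in range(n)])
            if max(abs(a - b) for a, b in zip(new, m_f_to_x2)) > tol:
                m_f_to_x2 = new
                queue.append("x2")     # x2's inbox changed -> it reacts
        else:
            m_x2_to_f = normalize(prior2)
            new = normalize([sum(pairwise[i][j] * m_x2_to_f[j] for j in range(n))
                             for i in range(n)])
            if max(abs(a - b) for a, b in zip(new, m_f_to_x1)) > tol:
                m_f_to_x1 = new
                queue.append("x1")
    post1 = normalize([p * m for p, m in zip(prior1, m_f_to_x1)])
    post2 = normalize([p * m for p, m in zip(prior2, m_f_to_x2)])
    return post1, post2
```

Because the graph is a tree, the reactive loop reaches a fixed point after a handful of events and the resulting posteriors equal the exact marginals; on loopy graphs the same event-driven loop runs until messages stop changing.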
Related papers
- Simplification of Risk Averse POMDPs with Performance Guarantees [6.129902017281406]
Risk-averse decision making under uncertainty in partially observable domains is a fundamental problem in AI and is essential for reliable autonomous agents.
In our case, the problem is modeled using partially observable Markov decision processes (POMDPs), where the value function is the conditional value at risk (CVaR) of the return.
Calculating an optimal solution for POMDPs is computationally intractable in general.
We develop a simplification framework to speed up the evaluation of the value function, while providing performance guarantees.
arXiv Detail & Related papers (2024-06-05T07:05:52Z)
- Training and Inference on Any-Order Autoregressive Models the Right Way [97.39464776373902]
A family of Any-Order Autoregressive Models (AO-ARMs) has shown breakthrough performance in arbitrary conditional tasks.
We identify significant improvements to be made to previous formulations of AO-ARMs.
Our method leads to improved performance with no compromises on tractability.
arXiv Detail & Related papers (2022-05-26T18:00:02Z)
- Distributional Gradient Boosting Machines [77.34726150561087]
Our framework is based on XGBoost and LightGBM.
We show that our framework achieves state-of-the-art forecast accuracy.
arXiv Detail & Related papers (2022-04-02T06:32:19Z)
- GFlowNet Foundations [66.69854262276391]
Generative Flow Networks (GFlowNets) have been introduced as a method to sample a diverse set of candidates in an active learning context.
We show a number of additional theoretical properties of GFlowNets.
arXiv Detail & Related papers (2021-11-17T17:59:54Z)
- A visual introduction to Gaussian Belief Propagation [22.02770204949673]
We present a visual introduction to Gaussian Belief Propagation (GBP), an approximate probabilistic inference algorithm that operates by passing messages between the nodes of arbitrarily structured factor graphs.
As a special case of loopy belief propagation, GBP relies only on local information in its updates and converges independently of the message schedule.
Our key argument is that, given recent trends in computing hardware, GBP has the right computational properties to act as a scalable distributed probabilistic inference framework for future machine learning systems.
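As a sketch of the local-update property of GBP (illustrative, not the paper's implementation), the following minimal one-dimensional example passes (mean, variance) messages between two variable nodes joined by a linear-Gaussian factor x2 - x1 ~ N(0, q). The model and parameter names are assumptions made for the example.

```python
def gaussian_product(m1, m2):
    """Multiply two 1-D Gaussians given as (mean, variance) pairs:
    precisions add, and means combine precision-weighted."""
    p = 1.0 / m1[1] + 1.0 / m2[1]
    mu = (m1[0] / m1[1] + m2[0] / m2[1]) / p
    return (mu, 1.0 / p)

def gbp_chain(prior1, prior2, q):
    """One exchange of Gaussian BP on x1 --f-- x2 with f: x2 - x1 ~ N(0, q).

    Each message uses only local information: a neighbour's belief pushed
    through the factor just picks up the factor's noise variance q."""
    msg_f_to_x2 = (prior1[0], prior1[1] + q)   # x1's prior propagated to x2
    msg_f_to_x1 = (prior2[0], prior2[1] + q)   # x2's prior propagated to x1
    post1 = gaussian_product(prior1, msg_f_to_x1)
    post2 = gaussian_product(prior2, msg_f_to_x2)
    return post1, post2
```

On this two-node tree a single exchange already yields the exact Gaussian marginals; the point of the sketch is that each update touches only a node and its immediate neighbours, which is what makes GBP attractive for distributed hardware.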
arXiv Detail & Related papers (2021-07-05T22:43:27Z)
- Sparse Bayesian Learning via Stepwise Regression [1.2691047660244335]
We propose a coordinate ascent algorithm for Sparse Bayesian Learning (SBL) termed Relevance Matching Pursuit (RMP).
As its noise variance parameter goes to zero, RMP exhibits a surprising connection to Stepwise Regression.
We derive novel guarantees for Stepwise Regression algorithms, which also shed light on RMP.
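Classical forward stepwise regression, to which the paper connects RMP in the zero-noise limit, greedily adds the feature that most reduces the residual sum of squares of a least-squares refit. A minimal pure-Python sketch of that classical procedure (not the paper's algorithm; all names are illustrative):

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (small dense systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def rss(X, y, support, coef):
    """Residual sum of squares of the fit on the selected columns."""
    return sum((yi - sum(coef[k] * X[i][j] for k, j in enumerate(support))) ** 2
               for i, yi in enumerate(y))

def forward_stepwise(X, y, n_steps):
    """Greedy forward selection: at each step, add the column whose
    least-squares refit yields the lowest residual sum of squares."""
    support, coef = [], []
    for _ in range(n_steps):
        best = None
        for j in range(len(X[0])):
            if j in support:
                continue
            trial = support + [j]
            # normal equations restricted to the trial support
            A = [[sum(X[i][p] * X[i][q] for i in range(len(y))) for q in trial]
                 for p in trial]
            b = [sum(X[i][p] * y[i] for i in range(len(y))) for p in trial]
            c = solve(A, b)
            r = rss(X, y, trial, c)
            if best is None or r < best[0]:
                best = (r, trial, c)
        _, support, coef = best
    return support, coef
```

When the response is generated exactly by a sparse subset of the columns, this greedy refit recovers that subset on well-conditioned designs, which is the regime in which the paper's guarantees are of interest.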
arXiv Detail & Related papers (2021-06-11T00:20:27Z)
- Probabilistic Gradient Boosting Machines for Large-Scale Probabilistic Regression [51.770998056563094]
Probabilistic Gradient Boosting Machines (PGBM) is a method to create probabilistic predictions with a single ensemble of decision trees.
We empirically demonstrate the advantages of PGBM compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-03T08:32:13Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently using the solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z) - Accelerating Metropolis-Hastings with Lightweight Inference Compilation [1.2633299843878945]
Lightweight Inference Compilation (LIC) implements amortized inference within an open-universe probabilistic programming language.
LIC forgoes importance sampling of linear execution traces in favor of operating directly on Bayesian networks.
Experimental results show LIC can produce proposers with fewer parameters, greater robustness to nuisance random variables, and improved posterior sampling.
arXiv Detail & Related papers (2020-10-23T02:05:37Z) - Polynomial-Time Exact MAP Inference on Discrete Models with Global
Dependencies [83.05591911173332]
The junction tree algorithm is the most general solution for exact MAP inference with run-time guarantees.
We propose a new graph transformation technique via node cloning that ensures a run time for solving our target problem independent of the form of the corresponding clique tree.
arXiv Detail & Related papers (2019-12-27T13:30:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all of the above) and is not responsible for any consequences of its use.