Bayesian structure learning and sampling of Bayesian networks with the R
package BiDAG
- URL: http://arxiv.org/abs/2105.00488v1
- Date: Sun, 2 May 2021 14:42:32 GMT
- Title: Bayesian structure learning and sampling of Bayesian networks with the R
package BiDAG
- Authors: Polina Suter and Jack Kuipers and Giusi Moffa and Niko Beerenwinkel
- Abstract summary: BiDAG implements Markov chain Monte Carlo (MCMC) methods for structure learning and sampling of Bayesian networks.
The package includes tools to search for a maximum a posteriori (MAP) graph and to sample graphs from the posterior distribution given the data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The R package BiDAG implements Markov chain Monte Carlo (MCMC) methods for
structure learning and sampling of Bayesian networks. The package includes
tools to search for a maximum a posteriori (MAP) graph and to sample graphs
from the posterior distribution given the data. A new hybrid approach to
structure learning enables inference in large graphs. In the first step, we
define a reduced search space by means of the PC algorithm or based on prior
knowledge. In the second step, an iterative order MCMC scheme proceeds to
optimize within the restricted search space and estimate the MAP graph.
Sampling from the posterior distribution is implemented using either order or
partition MCMC. The models and algorithms can handle both discrete and
continuous data. The BiDAG package also provides an implementation of MCMC
schemes for structure learning and sampling of dynamic Bayesian networks.
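The order MCMC idea behind the hybrid scheme above can be illustrated with a minimal, self-contained Python sketch. This is not the BiDAG API (BiDAG is an R package); the node count, the randomly generated local scores, and the adjacent-transposition proposal are hypothetical stand-ins for a decomposable score such as BDeu or BGe over a restricted search space:

```python
import itertools, math, random

random.seed(1)
nodes = list(range(4))

# Toy local scores: one log score per (node, parent set) pair.
# In practice these come from a decomposable score (e.g. BDeu, BGe).
local = {(v, ps): random.gauss(0, 1)
         for v in nodes
         for r in range(len(nodes))
         for ps in map(frozenset,
                       itertools.combinations([u for u in nodes if u != v], r))}

def order_score(order):
    """Log score of a node order: for each node, log-sum-exp over all
    parent sets drawn from the nodes preceding it in the order."""
    total = 0.0
    for i, v in enumerate(order):
        allowed = order[:i]
        logs = [local[(v, ps)]
                for r in range(len(allowed) + 1)
                for ps in map(frozenset, itertools.combinations(allowed, r))]
        m = max(logs)
        total += m + math.log(sum(math.exp(l - m) for l in logs))
    return total

def order_mcmc(steps=1000):
    """Metropolis-Hastings over node orders; track the best-scoring order."""
    order = nodes[:]
    score = order_score(order)
    best, best_score = order[:], score
    for _ in range(steps):
        i = random.randrange(len(order) - 1)
        prop = order[:]
        prop[i], prop[i + 1] = prop[i + 1], prop[i]  # swap adjacent nodes
        s = order_score(prop)
        if math.log(random.random()) < s - score:    # Metropolis acceptance
            order, score = prop, s
        if score > best_score:
            best, best_score = order[:], score
    return best, best_score
```

Scoring orders rather than individual DAGs is what makes the scheme practical: the order space is much smaller than the DAG space, and marginalizing over parent sets within an order keeps each move cheap once the search space has been restricted in the first step.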
Related papers
- Sequential Monte Carlo Learning for Time Series Structure Discovery [17.964180907602657]
We present a novel structure learning algorithm that integrates sequential Monte Carlo and involutive MCMC for highly effective posterior inference.
Our method can be used both in "online" settings, where new data is incorporated sequentially in time, and in "offline" settings, by using nested subsets of historical data to anneal the posterior.
arXiv Detail & Related papers (2023-07-13T16:38:01Z) - A Bayesian Take on Gaussian Process Networks [1.7188280334580197]
This work implements Monte Carlo and Markov Chain Monte Carlo methods to sample from the posterior distribution of network structures.
We show that our method outperforms state-of-the-art algorithms in recovering the graphical structure of the network.
arXiv Detail & Related papers (2023-06-20T08:38:31Z) - Hierarchical clustering with dot products recovers hidden tree structure [53.68551192799585]
In this paper we offer a new perspective on the well established agglomerative clustering algorithm, focusing on recovery of hierarchical structure.
We recommend a simple variant of the standard algorithm, in which clusters are merged by maximum average dot product and not, for example, by minimum distance or within-cluster variance.
We demonstrate that the tree output by this algorithm provides a bona fide estimate of generative hierarchical structure in data, under a generic probabilistic graphical model.
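The merge rule of this variant is concrete enough to sketch. The Python below is an illustrative rendering (function name and data are hypothetical, not the paper's code): at each step the pair of clusters with the largest average pairwise dot product is merged, instead of the pair at minimum distance.

```python
import numpy as np

def merge_tree_by_dot_product(X):
    """Agglomerative clustering where, at each step, the two clusters
    with the largest average pairwise dot product are merged.
    Returns the sequence of merges (pairs of index lists)."""
    clusters = [[i] for i in range(len(X))]
    merges = []
    while len(clusters) > 1:
        best, best_val = None, -np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # average dot product between all point pairs across clusters
                val = np.mean(X[clusters[a]] @ X[clusters[b]].T)
                if val > best_val:
                    best, best_val = (a, b), val
        a, b = best
        merges.append((clusters[a], clusters[b]))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges
```

On data with two well-separated directions, points sharing a direction have large mutual dot products and are merged first, so the early merges recover the hidden group structure.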
arXiv Detail & Related papers (2023-05-24T11:05:12Z) - Bayesian Structure Learning with Generative Flow Networks [85.84396514570373]
In Bayesian structure learning, we are interested in inferring a distribution over the directed acyclic graph (DAG) from data.
Recently, a class of probabilistic models, called Generative Flow Networks (GFlowNets), have been introduced as a general framework for generative modeling.
We show that our approach, called DAG-GFlowNet, provides an accurate approximation of the posterior over DAGs.
arXiv Detail & Related papers (2022-02-28T15:53:10Z) - Parallel Sampling for Efficient High-dimensional Bayesian Network
Structure Learning [6.85316573653194]
This paper describes an approximate algorithm that performs parallel sampling on Candidate Parent Sets (CPSs).
The modified algorithm, which we call Parallel Sampling MINOBS (PS-MINOBS), constructs the graph by sampling CPSs for each variable.
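The per-variable sampling step lends itself to a short sketch. This is not the PS-MINOBS algorithm itself (whose details are not reproduced here); it is a hypothetical illustration of score-weighted sampling of candidate parent sets for a single variable, which is the part that can run independently, and therefore in parallel, across variables:

```python
import math, random

random.seed(0)

def sample_candidate_parent_sets(score_table, k):
    """Draw k candidate parent sets for one variable, with probability
    proportional to exp(local log score). Each variable has its own
    table, so this step parallelizes trivially across variables."""
    sets, logs = zip(*score_table.items())
    m = max(logs)                              # stabilize before exponentiating
    weights = [math.exp(l - m) for l in logs]
    return random.choices(sets, weights=weights, k=k)

# Hypothetical local log scores for one variable and three parent sets.
table = {frozenset(): -10.0,
         frozenset({"A"}): -2.0,
         frozenset({"A", "B"}): -2.3}
draws = sample_candidate_parent_sets(table, k=5)
```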
arXiv Detail & Related papers (2022-02-19T22:35:59Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via
GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - MIxBN: library for learning Bayesian networks from mixed data [0.0]
This paper describes a new library for learning Bayesian networks from data containing discrete and continuous variables (mixed data).
It supports structure learning and parameter learning from mixed data without discretization, since discretization leads to information loss.
arXiv Detail & Related papers (2021-06-24T17:19:58Z) - CREPO: An Open Repository to Benchmark Credal Network Algorithms [78.79752265884109]
Credal networks are imprecise probabilistic graphical models based on so-called credal sets of probability mass functions.
A Java library called CREMA has been recently released to model, process and query credal networks.
We present CREPO, an open repository of synthetic credal networks, provided together with the exact results of inference tasks on these models.
arXiv Detail & Related papers (2021-05-10T07:31:59Z) - Kernel learning approaches for summarising and combining posterior
similarity matrices [68.8204255655161]
We build upon the notion of the posterior similarity matrix (PSM) in order to suggest new approaches for summarising the output of MCMC algorithms for Bayesian clustering models.
A key contribution of our work is the observation that PSMs are positive semi-definite, and hence can be used to define probabilistically-motivated kernel matrices.
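The PSM construction is simple to reproduce. A minimal Python sketch, assuming cluster-label draws from some MCMC run (the data below are hypothetical):

```python
import numpy as np

def posterior_similarity_matrix(draws):
    """Given MCMC draws of cluster labels (one draw per row), return
    PSM[i, j] = fraction of draws in which items i and j co-cluster."""
    draws = np.asarray(draws)
    n_draws, n_items = draws.shape
    psm = np.zeros((n_items, n_items))
    for labels in draws:
        # co-clustering indicator matrix for this draw
        psm += (labels[:, None] == labels[None, :])
    return psm / n_draws
```

Each per-draw co-clustering matrix is a sum of outer products of cluster membership vectors and hence positive semi-definite; the PSM, as an average of such matrices, is positive semi-definite too, which is what licenses its use as a kernel matrix.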
arXiv Detail & Related papers (2020-09-27T14:16:14Z) - Unsupervised Deep Cross-modality Spectral Hashing [65.3842441716661]
The framework is a two-step hashing approach which decouples the optimization into binary optimization and hashing function learning.
We propose a novel spectral embedding-based algorithm to simultaneously learn single-modality and binary cross-modality representations.
We leverage the powerful CNN for images and propose a CNN-based deep architecture to learn text modality.
arXiv Detail & Related papers (2020-08-01T09:20:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.