Hybrid Bayesian network discovery with latent variables by scoring
multiple interventions
- URL: http://arxiv.org/abs/2112.10574v1
- Date: Mon, 20 Dec 2021 14:54:41 GMT
- Title: Hybrid Bayesian network discovery with latent variables by scoring
multiple interventions
- Authors: Kiattikun Chobtham, Anthony C. Constantinou, Neville K. Kitson
- Abstract summary: We present the hybrid mFGS-BS (majority rule and Fast Greedy equivalence Search with Bayesian Scoring) algorithm for structure learning from discrete data.
The algorithm assumes causal insufficiency in the presence of latent variables and produces a Partial Ancestral Graph (PAG).
Experimental results show that mFGS-BS improves structure learning accuracy relative to the state-of-the-art and is computationally efficient.
- Score: 5.994412766684843
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In Bayesian Networks (BNs), the direction of edges is crucial for causal
reasoning and inference. However, Markov equivalence class considerations mean
it is not always possible to establish edge orientations, which is why many BN
structure learning algorithms cannot orientate all edges from purely
observational data. Moreover, latent confounders can lead to false positive
edges. Relatively few methods have been proposed to address these issues. In
this work, we present the hybrid mFGS-BS (majority rule and Fast Greedy
equivalence Search with Bayesian Scoring) algorithm for structure learning from
discrete data that involves an observational data set and one or more
interventional data sets. The algorithm assumes causal insufficiency in the
presence of latent variables and produces a Partial Ancestral Graph (PAG).
Structure learning relies on a hybrid approach and a novel Bayesian scoring
paradigm that calculates the posterior probability of each directed edge being
added to the learnt graph. Experimental results based on well-known networks of
up to 109 variables and 10k sample size show that mFGS-BS improves structure
learning accuracy relative to the state-of-the-art and is computationally
efficient.
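The core scoring idea, ranking a candidate directed edge by its Bayesian posterior given pooled observational and interventional data, can be illustrated with a minimal sketch. This is not the actual mFGS-BS implementation: the BDeu marginal likelihood, the two-hypothesis comparison (edge vs. no edge), and all function names below are simplifying assumptions for illustration only.

```python
import numpy as np
from math import lgamma

def bdeu_score(data, child, parents, ess=1.0):
    """BDeu log marginal likelihood of the family (child | parents)
    for discrete data, columns coded 0..r-1."""
    r = int(data[:, child].max()) + 1
    # Map each joint parent configuration to an index 0..q-1.
    idx = np.zeros(len(data), dtype=int)
    q = 1
    for p in parents:
        rp = int(data[:, p].max()) + 1
        idx = idx * rp + data[:, p]
        q *= rp
    a_j, a_jk = ess / q, ess / (q * r)  # BDeu hyperparameters
    score = 0.0
    for j in range(q):
        rows = data[idx == j, child]
        score += lgamma(a_j) - lgamma(a_j + len(rows))
        for k in range(r):
            n_jk = int((rows == k).sum())
            score += lgamma(a_jk + n_jk) - lgamma(a_jk)
    return score

def edge_posterior(datasets, x, y):
    """Posterior probability of the edge x -> y versus no edge,
    under a uniform prior, pooling log scores across datasets
    (observational and interventional alike)."""
    log_edge = sum(bdeu_score(d, y, [x]) for d in datasets)
    log_none = sum(bdeu_score(d, y, []) for d in datasets)
    m = max(log_edge, log_none)  # stabilise the exponentials
    w_edge = np.exp(log_edge - m)
    w_none = np.exp(log_none - m)
    return w_edge / (w_edge + w_none)
```

With discrete data pooled from several datasets, the posterior concentrates on the edge hypothesis when the child's distribution genuinely depends on the candidate parent, and on the empty model otherwise; a structure learner can then admit only edges whose posterior exceeds a threshold.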
Related papers
- A Full DAG Score-Based Algorithm for Learning Causal Bayesian Networks with Latent Confounders [0.0]
Causal Bayesian networks (CBN) are popular graphical probabilistic models that encode causal relations among variables.
This paper introduces the first fully score-based structure learning algorithm searching the space of DAGs that is capable of identifying the presence of some latent confounders.
arXiv Detail & Related papers (2024-08-20T20:25:56Z)
- Graph Structure Learning with Interpretable Bayesian Neural Networks [10.957528713294874]
We introduce novel iterations with independently interpretable parameters.
These parameters influence characteristics of the estimated graph, such as edge sparsity.
After these iterations are unrolled, prior knowledge over such graph characteristics shapes the corresponding prior distributions.
Fast execution and parameter efficiency allow for high-fidelity posterior approximation.
arXiv Detail & Related papers (2024-06-20T23:27:41Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP).
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- Bayesian Structure Learning with Generative Flow Networks [85.84396514570373]
In Bayesian structure learning, we are interested in inferring a distribution over the directed acyclic graph (DAG) from data.
Recently, a class of probabilistic models, called Generative Flow Networks (GFlowNets), have been introduced as a general framework for generative modeling.
We show that our approach, called DAG-GFlowNet, provides an accurate approximation of the posterior over DAGs.
arXiv Detail & Related papers (2022-02-28T15:53:10Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- A Sparse Structure Learning Algorithm for Bayesian Network Identification from Discrete High-Dimensional Data [0.40611352512781856]
This paper addresses the problem of learning a sparse structure Bayesian network from high-dimensional discrete data.
We propose a score function that satisfies the sparsity and the DAG property simultaneously.
Specifically, we use a variance reducing method in our optimization algorithm to make the algorithm work efficiently in high-dimensional data.
arXiv Detail & Related papers (2021-08-21T12:21:01Z)
- DiBS: Differentiable Bayesian Structure Learning [38.01659425023988]
We propose a general, fully differentiable framework for Bayesian structure learning (DiBS).
DiBS operates in the continuous space of a latent probabilistic graph representation.
Contrary to existing work, DiBS is agnostic to the form of the local conditional distributions.
arXiv Detail & Related papers (2021-05-25T11:23:08Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- Bayesian network structure learning with causal effects in the presence of latent variables [6.85316573653194]
This paper describes a hybrid structure learning algorithm, called CCHM, which combines the constraint-based part of cFCI with score-based learning.
Experiments based on both randomised and well-known networks show that CCHM improves the state-of-the-art in terms of reconstructing the true ancestral graph.
arXiv Detail & Related papers (2020-05-29T04:42:28Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximate framework for such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.