Bayesian network structure learning with causal effects in the presence
of latent variables
- URL: http://arxiv.org/abs/2005.14381v2
- Date: Tue, 18 Aug 2020 06:17:56 GMT
- Title: Bayesian network structure learning with causal effects in the presence
of latent variables
- Authors: Kiattikun Chobtham, Anthony C. Constantinou
- Abstract summary: This paper describes a hybrid structure learning algorithm, called CCHM, which combines the constraint-based part of cFCI with score-based learning.
Experiments based on both randomised and well-known networks show that CCHM improves the state-of-the-art in terms of reconstructing the true ancestral graph.
- Score: 6.85316573653194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Latent variables may lead to spurious relationships that can be
misinterpreted as causal relationships. In Bayesian Networks (BNs), this
challenge is known as learning under causal insufficiency. Structure learning
algorithms that assume causal insufficiency tend to reconstruct the ancestral
graph of a BN, where bi-directed edges represent confounding and directed edges
represent direct or ancestral relationships. This paper describes a hybrid
structure learning algorithm, called CCHM, which combines the constraint-based
part of cFCI with hill-climbing score-based learning. The score-based process
incorporates Pearl's do-calculus to measure causal effects and orientate edges
that would otherwise remain undirected, under the assumption that the BN is a linear
Structural Equation Model where the data follow a multivariate Gaussian
distribution. Experiments based on both randomised and well-known networks show
that CCHM improves the state-of-the-art in terms of reconstructing the true
ancestral graph.
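Under the linear Gaussian SEM assumption stated in the abstract, do-calculus adjustment reduces to ordinary linear regression: the causal effect of X on Y is the coefficient on X after adjusting for a valid backdoor set. The sketch below is an illustrative toy example (the variable names, coefficients, and confounder structure are invented for the demonstration and are not taken from the paper), showing how a confounded "naive" slope differs from the backdoor-adjusted one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Toy linear Gaussian SEM with a latent-style confounder Z of X and Y:
#   Z ~ N(0, 1)
#   X = 0.8*Z + eps_x
#   Y = 1.5*X + 0.6*Z + eps_y   (true causal effect of X on Y is 1.5)
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 1.5 * x + 0.6 * z + rng.normal(size=n)

# Naive regression of Y on X alone is biased by the confounder Z.
naive = np.polyfit(x, y, 1)[0]

# Backdoor adjustment (do-calculus under linearity): regress Y on X and Z
# jointly; the coefficient on X estimates the slope of E[Y | do(X = x)].
A = np.column_stack([x, z, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
adjusted = coef[0]

print(f"naive slope:    {naive:.2f}")     # inflated by the backdoor path X <- Z -> Y
print(f"adjusted slope: {adjusted:.2f}")  # close to the true effect 1.5
```

A score-based learner in this setting can compare such estimated causal effects across candidate edge orientations and keep the direction whose effect estimate is better supported, which is the role do-calculus plays in CCHM's scoring phase.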
Related papers
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z) - Consistency of Neural Causal Partial Identification [17.503562318576414]
Recent progress in Causal Models showcased how identification and partial identification of causal effects can be automatically carried out via neural generative models.
We prove consistency of partial identification via NCMs in a general setting with both continuous and categorical variables.
Results highlight the impact of the design of the underlying neural network architecture in terms of depth and connectivity.
arXiv Detail & Related papers (2024-05-24T16:12:39Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - DynGFN: Towards Bayesian Inference of Gene Regulatory Networks with
GFlowNets [81.75973217676986]
Gene regulatory networks (GRN) describe interactions between genes and their products that control gene expression and cellular function.
Existing methods either focus on challenge (1), identifying cyclic structure from dynamics, or on challenge (2), learning complex Bayesian posteriors over DAGs, but not both.
In this paper we leverage the fact that it is possible to estimate the "velocity" of gene expression with RNA velocity techniques to develop an approach that addresses both challenges.
arXiv Detail & Related papers (2023-02-08T16:36:40Z) - Structure Learning and Parameter Estimation for Graphical Models via
Penalized Maximum Likelihood Methods [0.0]
In the thesis, we consider two different types of PGMs: Bayesian networks (BNs) which are static, and continuous time Bayesian networks which, as the name suggests, have a temporal component.
We are interested in recovering their true structure, which is the first step in learning any PGM.
arXiv Detail & Related papers (2023-01-30T20:26:13Z) - Hybrid Bayesian network discovery with latent variables by scoring
multiple interventions [5.994412766684843]
We present the hybrid mFGS-BS (majority rule and Fast Greedy Equivalence Search with Bayesian Scoring) algorithm for structure learning from discrete data.
The algorithm assumes causal insufficiency in the presence of latent variables and produces a Partial Ancestral Graph (PAG)
Experimental results show that mFGS-BS improves structure learning accuracy relative to the state-of-the-art and it is computationally efficient.
arXiv Detail & Related papers (2021-12-20T14:54:41Z) - Estimation of Bivariate Structural Causal Models by Variational Gaussian
Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z) - Structure Learning for Directed Trees [3.1523578265982235]
Knowing the causal structure of a system is of fundamental interest in many areas of science and can aid the design of prediction algorithms that work well under manipulations to the system.
To learn the structure from data, score-based methods evaluate different graphs according to the quality of their fits.
For large nonlinear models, these rely on optimization approaches with no general guarantees of recovering the true causal structure.
arXiv Detail & Related papers (2021-08-19T18:38:30Z) - Prequential MDL for Causal Structure Learning with Neural Networks [9.669269791955012]
We show that the prequential minimum description length principle can be used to derive a practical scoring function for Bayesian networks.
We obtain plausible and parsimonious graph structures without relying on sparsity inducing priors or other regularizers which must be tuned.
We discuss how the prequential score relates to recent work that infers causal structure from the speed of adaptation when the observations come from a source undergoing distributional shift.
arXiv Detail & Related papers (2021-07-02T22:35:21Z) - Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z) - Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.