On Learning Necessary and Sufficient Causal Graphs
- URL: http://arxiv.org/abs/2301.12389v2
- Date: Wed, 1 Nov 2023 07:47:00 GMT
- Title: On Learning Necessary and Sufficient Causal Graphs
- Authors: Hengrui Cai, Yixin Wang, Michael Jordan, Rui Song
- Abstract summary: In practice, only a small subset of variables in the graph are relevant to the outcomes of interest.
We propose learning a class of necessary and sufficient causal graphs (NSCG) that exclusively comprises causally relevant variables for an outcome of interest.
We develop a necessary and sufficient causal structural learning (NSCSL) algorithm, by establishing theoretical properties and relationships between probabilities of causation and natural causal effects of features.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The causal revolution has stimulated interest in understanding complex
relationships in various fields. Most of the existing methods aim to discover
causal relationships among all variables within a complex large-scale graph.
However, in practice, only a small subset of variables in the graph are
relevant to the outcomes of interest. Consequently, causal estimation with the
full causal graph -- particularly given limited data -- could lead to numerous
falsely discovered, spurious variables that exhibit high correlation with, but
exert no causal impact on, the target outcome. In this paper, we propose
learning a class of necessary and sufficient causal graphs (NSCG) that
exclusively comprises causally relevant variables for an outcome of interest,
which we term causal features. The key idea is to employ probabilities of
causation to systematically evaluate the importance of features in the causal
graph, allowing us to identify a subgraph relevant to the outcome of interest.
To learn NSCG from data, we develop a necessary and sufficient causal
structural learning (NSCSL) algorithm, by establishing theoretical properties
and relationships between probabilities of causation and natural causal effects
of features. Across empirical studies of simulated and real data, we
demonstrate that NSCSL outperforms existing algorithms and can reveal crucial
yeast genes for target heritable traits of interest.
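The key idea above is to score features by probabilities of causation. As a minimal illustrative sketch (not the paper's NSCSL algorithm), the Tian and Pearl (2000) bounds express the probability of necessity and sufficiency (PNS) of a binary cause for a binary outcome in terms of two interventional probabilities; the function name and inputs below are assumptions for illustration only.

```python
# Illustrative sketch: Tian and Pearl (2000) bounds on the probability
# of necessity and sufficiency (PNS) for binary X and Y, computed from
# P(Y=1 | do(X=1)) and P(Y=1 | do(X=0)). Not the paper's estimator.

def pns_bounds(p_y_do_x1: float, p_y_do_x0: float) -> tuple:
    """Return (lower, upper) bounds on PNS = P(Y_{X=1}=1, Y_{X=0}=0)."""
    lower = max(0.0, p_y_do_x1 - p_y_do_x0)   # excess interventional effect
    upper = min(p_y_do_x1, 1.0 - p_y_do_x0)   # cannot exceed either margin
    return lower, upper
```

A feature whose PNS upper bound is near zero exerts essentially no necessary-and-sufficient influence on the outcome, which is the intuition behind pruning causally irrelevant variables from the full graph.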
Related papers
- CORE: Towards Scalable and Efficient Causal Discovery with Reinforcement
Learning [2.7446241148152253]
CORE is a reinforcement learning-based approach for causal discovery and intervention planning.
Our results demonstrate that CORE generalizes to unseen graphs and efficiently uncovers causal structures.
CORE scales to larger graphs with up to 10 variables and outperforms existing approaches in structure estimation accuracy and sample efficiency.
arXiv Detail & Related papers (2024-01-30T12:57:52Z)
- A Causal Framework for Decomposing Spurious Variations [68.12191782657437]
We develop tools for decomposing spurious variations in Markovian and Semi-Markovian models.
We prove the first results that allow a non-parametric decomposition of spurious effects.
The described approach has several applications, ranging from explainable and fair AI to questions in epidemiology and medicine.
arXiv Detail & Related papers (2023-06-08T09:40:28Z)
- A Meta-Reinforcement Learning Algorithm for Causal Discovery [3.4806267677524896]
Causal structures can enable models to go beyond pure correlation-based inference.
Finding causal structures from data poses a significant challenge both in computational effort and accuracy.
We develop a meta-reinforcement learning algorithm that performs causal discovery by learning to perform interventions.
arXiv Detail & Related papers (2022-07-18T09:26:07Z)
- Large-Scale Differentiable Causal Discovery of Factor Graphs [3.8015092217142223]
We introduce the notion of factor directed acyclic graphs (f-DAGs) as a way to restrict the search space to non-linear low-rank causal interaction models.
We propose a scalable implementation of f-DAG constrained causal discovery for high-dimensional interventional data.
arXiv Detail & Related papers (2022-06-15T21:28:36Z)
- Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph.
arXiv Detail & Related papers (2022-06-04T22:38:57Z)
- Effect Identification in Cluster Causal Diagrams [51.42809552422494]
We introduce a new type of graphical model called cluster causal diagrams (C-DAGs for short).
C-DAGs allow for the partial specification of relationships among variables based on limited prior knowledge.
We develop the foundations and machinery for valid causal inferences over C-DAGs.
arXiv Detail & Related papers (2022-02-22T21:27:31Z)
- Trying to Outrun Causality with Machine Learning: Limitations of Model Explainability Techniques for Identifying Predictive Variables [7.106986689736828]
We show that machine learning algorithms are not as flexible as they might seem, and are instead highly sensitive to the underlying causal structure in the data.
We provide some alternative recommendations for researchers wanting to explore the data for important variables.
arXiv Detail & Related papers (2022-02-20T17:48:54Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
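As a concrete illustration of the SCM notion mentioned in the last entry, the following hedged sketch encodes two mechanisms with independent exogenous noise; the specific structural equations are invented for illustration and do not come from any of the listed papers.

```python
import random

# Toy structural causal model (illustrative only): X <- U_X, Y <- f(X, U_Y).
# Each endogenous variable is a deterministic function of its parents
# plus an independent exogenous noise source, matching the SCM definition.

def sample(rng: random.Random, do_x=None):
    """Draw (X, Y); do_x overrides X's mechanism, modeling do(X=x)."""
    u_x, u_y = rng.random(), rng.random()          # exogenous sources
    x = do_x if do_x is not None else int(u_x < 0.5)
    y = int(u_y < (0.9 if x == 1 else 0.2))        # mechanism for Y
    return x, y
```

Estimating P(Y=1 | do(X=1)) by Monte Carlo over repeated draws recovers roughly 0.9 here, matching the interventional reading of Y's mechanism.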
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.