Characterization and Learning of Causal Graphs from Hard Interventions
- URL: http://arxiv.org/abs/2505.01037v1
- Date: Fri, 02 May 2025 06:20:08 GMT
- Title: Characterization and Learning of Causal Graphs from Hard Interventions
- Authors: Zihan Zhou, Muhammad Qasim Elahi, Murat Kocaoglu
- Abstract summary: Causal discovery involves linking conditional independence (CI) invariances in observational data to corresponding graphical constraints via d-separation. We propose a set of graphical constraints that are fundamentally linked to Pearl's do-calculus within the framework of hard interventions. We also propose a learning algorithm to integrate multiple datasets from hard interventions, introducing new orientation rules.
- Score: 6.823521786512908
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A fundamental challenge in the empirical sciences involves uncovering causal structure through observation and experimentation. Causal discovery entails linking the conditional independence (CI) invariances in observational data to their corresponding graphical constraints via d-separation. In this paper, we consider a general setting where we have access to data from multiple experimental distributions resulting from hard interventions, as well as potentially from an observational distribution. By comparing different interventional distributions, we propose a set of graphical constraints that are fundamentally linked to Pearl's do-calculus within the framework of hard interventions. These graphical constraints associate each graphical structure with a set of interventional distributions that are consistent with the rules of do-calculus. We characterize the interventional equivalence class of causal graphs with latent variables and introduce a graphical representation that can be used to determine whether two causal graphs are interventionally equivalent, i.e., whether they are associated with the same family of hard interventional distributions, where the elements of the family are indistinguishable using the invariances from do-calculus. We also propose a learning algorithm to integrate multiple datasets from hard interventions, introducing new orientation rules. The learning objective is a tuple of augmented graphs which entails a set of causal graphs. We also prove the soundness of the proposed algorithm.
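To make the link between hard interventions and graphical constraints concrete, here is a minimal, hedged sketch (not the authors' implementation, and without latent variables): a hard intervention do(S) is modeled by mutilating the causal DAG, i.e., removing all edges into S, and the conditional independences of the post-interventional distribution can then be read off as d-separations in the mutilated graph. All function names and the toy graph below are illustrative.
```python
# Graphs are dicts mapping each node to its set of parents; every name here is
# illustrative, not taken from the paper.

def mutilate(parents, intervened):
    """Hard intervention do(intervened): cut every edge into an intervened node."""
    return {v: (set() if v in intervened else set(ps)) for v, ps in parents.items()}


def ancestors(parents, nodes):
    """All ancestors of `nodes`, including the nodes themselves."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen


def d_separated(parents, xs, ys, zs):
    """Check X independent of Y given Z via the ancestral-moral-graph criterion."""
    keep = ancestors(parents, set(xs) | set(ys) | set(zs))
    adj = {v: set() for v in keep}
    for v in keep:                      # moralize: link parents to child and to
        ps = parents[v] & keep          # each other, then drop edge directions
        for p in ps:
            adj[v].add(p)
            adj[p].add(v)
            for q in ps:
                if p != q:
                    adj[p].add(q)
    frontier = list(set(xs) - set(zs))  # Z blocks every path iff no Y is reachable
    reached = set(frontier)
    while frontier:
        for nb in adj[frontier.pop()] - set(zs):
            if nb not in reached:
                reached.add(nb)
                frontier.append(nb)
    return not (reached & set(ys))


# Toy chain X -> Y -> W: under do({Y}) the dependence between X and Y vanishes,
# one of the invariances an interventional learning algorithm can test for.
G = {"X": set(), "Y": {"X"}, "W": {"Y"}}
print(d_separated(G, {"X"}, {"Y"}, set()))                   # False: X -> Y
print(d_separated(mutilate(G, {"Y"}), {"X"}, {"Y"}, set()))  # True after do(Y)
```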
Related papers
- Estimating Interventional Distributions with Uncertain Causal Graphs through Meta-Learning [26.3914014514629]
In scientific domains -- from biology to the social sciences -- many questions boil down to: what effect will we observe if we intervene on a particular variable? We propose using meta-learning to create an end-to-end model: the Model-Averaged Causal Estimation Transformer Neural Process (MACE-TNP). Our work establishes meta-learning as a flexible and scalable paradigm for approximating complex Bayesian causal inference.
arXiv Detail & Related papers (2025-07-07T22:48:32Z)
- Sample Efficient Bayesian Learning of Causal Graphs from Interventions [6.823521786512908]
This study considers a Bayesian approach for learning causal graphs with limited interventional samples.
We show theoretically that our proposed algorithm will return the true causal graph with high probability.
We present a case study showing how this algorithm could be modified to answer more general causal questions without learning the whole graph.
arXiv Detail & Related papers (2024-10-26T05:47:56Z)
- Adaptive Online Experimental Design for Causal Discovery [9.447864414136905]
Causal discovery aims to uncover cause-and-effect relationships encoded in causal graphs.
We focus on data interventional efficiency and formalize causal discovery from the perspective of online learning.
We propose a track-and-stop causal discovery algorithm that adaptively selects interventions from the graph separating system.
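For illustration, the hedged sketch below builds a (non-adaptive) separating system of interventions, the combinatorial object from which an adaptive strategy such as track-and-stop can draw its candidate interventions; the construction and names are generic, not this paper's algorithm.
```python
# Every pair of variables must be "cut" by some intervention that contains
# exactly one of the two. The classical binary-code construction below needs
# only ceil(log2 n) interventions; all names are illustrative.
from math import ceil, log2


def separating_system(variables):
    """Intervention targets S_1..S_m such that every pair of variables is
    separated (one inside, one outside) by at least one S_i."""
    n = len(variables)
    m = max(1, ceil(log2(n)))
    return [
        {v for idx, v in enumerate(variables) if (idx >> bit) & 1}
        for bit in range(m)
    ]


for i, target in enumerate(separating_system(["A", "B", "C", "D", "E"])):
    print(f"intervention {i}: do({sorted(target)})")
# Any two variables differ in some bit of their index, so some intervention
# contains exactly one of them; comparing data from that intervention with the
# others helps orient the edge (if any) between the pair.
```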
arXiv Detail & Related papers (2024-05-19T13:26:33Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
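As a generic, hedged illustration of the sparsity idea (not this paper's estimator), the snippet below applies an L1 penalty to a learnable adjacency matrix via a proximal soft-thresholding step, pruning weak edges to exactly zero; all names are placeholders.
```python
import numpy as np


def soft_threshold(W, tau):
    """Proximal operator of tau * ||W||_1: shrink every entry toward zero."""
    return np.sign(W) * np.maximum(np.abs(W) - tau, 0.0)


rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(4, 4))   # dense initial adjacency estimate
W_sparse = soft_threshold(W, tau=0.4)    # weak edges become exactly 0
print(int((np.abs(W_sparse) > 0).sum()), "of", W.size, "edges survive")
```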
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- A continuous Structural Intervention Distance to compare Causal Graphs [5.477914707166288]
The distance is based on embedding intervention distributions over each pair of nodes.
We show theoretical results which we validate with numerical experiments on synthetic data.
arXiv Detail & Related papers (2023-07-31T07:20:26Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Effect Identification in Cluster Causal Diagrams [51.42809552422494]
We introduce a new type of graphical model called cluster causal diagrams (for short, C-DAGs).
C-DAGs allow for the partial specification of relationships among variables based on limited prior knowledge.
We develop the foundations and machinery for valid causal inferences over C-DAGs.
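A hedged, simplified sketch of the cluster-level construction follows: variables are grouped into clusters using prior knowledge, and a directed edge is kept between two clusters whenever any cross-cluster edge exists at the variable level. The full C-DAG formalism also represents latent confounding with bidirected edges and requires the cluster graph to be acyclic; this toy version shows only the directed projection, and all names are illustrative.
```python
def cluster_graph(parents, cluster_of):
    """Project a variable-level DAG (node -> set of parents) onto clusters."""
    edges = set()
    for child, ps in parents.items():
        for parent in ps:
            cp, cc = cluster_of[parent], cluster_of[child]
            if cp != cc:
                edges.add((cp, cc))
    return edges


# Toy example: two genes drive a phenotype, which in turn drives a symptom.
parents = {"G1": set(), "G2": {"G1"}, "P": {"G1", "G2"}, "S": {"P"}}
cluster_of = {"G1": "Genes", "G2": "Genes", "P": "Phenotype", "S": "Phenotype"}
print(cluster_graph(parents, cluster_of))  # {('Genes', 'Phenotype')}
```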
arXiv Detail & Related papers (2022-02-22T21:27:31Z)
- Invariance Principle Meets Out-of-Distribution Generalization on Graphs [66.04137805277632]
The complex nature of graphs thwarts the adoption of the invariance principle for OOD generalization.
Domain or environment partitions, which are often required by OOD methods, can be expensive to obtain for graphs.
We propose a novel framework to explicitly model this process using a contrastive strategy.
arXiv Detail & Related papers (2022-02-11T04:38:39Z)
- Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined as Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z)
- Hyperbolic Graph Embedding with Enhanced Semi-Implicit Variational Inference [48.63194907060615]
We build off of semi-implicit graph variational auto-encoders to capture higher-order statistics in a low-dimensional graph latent representation.
We incorporate hyperbolic geometry in the latent space through a Poincaré embedding to efficiently represent graphs exhibiting hierarchical structure.
arXiv Detail & Related papers (2020-10-31T05:48:34Z)
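As a hedged aside on the hyperbolic geometry used in the entry above: the standard Poincaré-ball distance grows rapidly near the boundary of the unit ball, which is why hierarchical graphs embed with low distortion. The snippet below computes the textbook formula only, not the paper's semi-implicit variational model.
```python
import numpy as np


def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points strictly inside the unit ball."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / max(denom, eps))


root = np.array([0.0, 0.0])
leaf_a = np.array([0.7, 0.0])   # two points near the boundary
leaf_b = np.array([0.0, 0.7])
print(poincare_distance(root, leaf_a))    # ~1.73: root to a leaf
print(poincare_distance(leaf_a, leaf_b))  # ~2.83: leaf to leaf is much farther
```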
This list is automatically generated from the titles and abstracts of the papers on this site.