Discovering Dynamic Causal Space for DAG Structure Learning
- URL: http://arxiv.org/abs/2306.02822v3
- Date: Mon, 11 Dec 2023 15:08:06 GMT
- Title: Discovering Dynamic Causal Space for DAG Structure Learning
- Authors: Fangfu Liu, Wenchang Ma, An Zhang, Xiang Wang, Yueqi Duan, Tat-Seng Chua
- Abstract summary: We propose a dynamic causal space for DAG structure learning, coined CASPER.
It integrates the graph structure into the score function as a new measure in the causal space, faithfully reflecting the causal distance between the estimated and ground-truth DAGs.
- Score: 64.763763417533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discovering causal structure from purely observational data (i.e., causal discovery), which aims to identify causal relationships among variables, is a fundamental task in machine learning. The recent invention of differentiable score-based DAG learners is a crucial enabler, reframing the combinatorial optimization problem as a differentiable optimization with a DAG constraint over the space of directed graphs. Despite their great success, these cutting-edge DAG learners evaluate directed graph candidates with DAG-ness-independent score functions, without taking graph structure into account. As a result, measuring data fitness alone, regardless of DAG-ness, inevitably leads to suboptimal DAGs and model vulnerabilities. To this end, we propose a dynamic causal space for DAG structure learning, coined CASPER, that integrates the graph structure into the score function as a new measure in the causal space, faithfully reflecting the causal distance between the estimated and ground-truth DAGs. CASPER revises the learning process and enhances DAG structure learning via adaptive attention to DAG-ness. Grounded by empirical visualization, CASPER, as a space, satisfies a series of desired properties, such as structure awareness and noise robustness. Extensive experiments on both synthetic and real-world datasets clearly validate the superiority of CASPER over state-of-the-art causal discovery methods in accuracy and robustness.
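To make the paradigm concrete: the abstract describes score-based learners that minimize a data-fitness score subject to a differentiable acyclicity constraint. Below is a minimal NOTEARS-style sketch (Zheng et al., 2018) of that baseline setup; CASPER's own structure-aware measure is not reproduced here, and all function names and hyperparameters are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W: np.ndarray) -> float:
    """NOTEARS constraint h(W) = tr(exp(W * W)) - d; h(W) = 0 iff W is a DAG."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

def data_fitness_score(W: np.ndarray, X: np.ndarray, lam: float = 0.1) -> float:
    """DAG-ness-independent score: least-squares fit plus L1 sparsity.
    The abstract's point is that this term alone ignores graph structure."""
    n = X.shape[0]
    return 0.5 / n * np.linalg.norm(X - X @ W) ** 2 + lam * np.abs(W).sum()

def augmented_lagrangian(W: np.ndarray, X: np.ndarray,
                         rho: float, alpha: float) -> float:
    """Objective minimized in practice: score plus penalty terms that drive
    h(W) toward zero as rho is increased across outer iterations."""
    h = acyclicity(W)
    return data_fitness_score(W, X) + 0.5 * rho * h ** 2 + alpha * h
```

In practice, an optimizer minimizes `augmented_lagrangian` over W while rho and alpha are updated across outer iterations until h(W) is numerically zero; CASPER's critique is that `data_fitness_score` by itself is blind to how far W is from being a DAG.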
Related papers
- Introducing Diminutive Causal Structure into Graph Representation Learning [19.132025125620274]
We introduce a novel method that enables Graph Neural Networks (GNNs) to glean insights from specialized diminutive causal structures.
Our method specifically extracts causal knowledge from the model representation of these diminutive causal structures.
arXiv Detail & Related papers (2024-06-13T00:18:20Z)
- Causality Learning With Wasserstein Generative Adversarial Networks [2.492300648514129]
A model named DAG-WGAN combines the Wasserstein-based adversarial loss with an acyclicity constraint in an auto-encoder architecture.
It simultaneously learns causal structures while improving its data generation capability.
We compare the performance of DAG-WGAN with other models that do not involve the Wasserstein metric in order to identify its contribution to causal structure learning.
arXiv Detail & Related papers (2022-06-03T10:45:47Z)
- Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z)
- DAG-WGAN: Causal Structure Learning With Wasserstein Generative Adversarial Networks [2.492300648514129]
This paper proposes DAG-WGAN, which combines a Wasserstein-based adversarial loss and an acyclicity constraint within an auto-encoder architecture.
It simultaneously learns causal structures and improves its data generation capability by leveraging the strength of the Wasserstein distance metric.
Our experiments evaluate DAG-WGAN against the state of the art and demonstrate its good performance (see the DAG-WGAN loss sketch after this list).
arXiv Detail & Related papers (2022-04-01T12:27:27Z)
- BCDAG: An R package for Bayesian structure and Causal learning of Gaussian DAGs [77.34726150561087]
We introduce the R package BCDAG for causal discovery and causal effect estimation from observational data.
Our implementation scales efficiently with the number of observations and, whenever the DAGs are sufficiently sparse, the number of variables in the dataset.
We then illustrate the main functions and algorithms on both real and simulated datasets.
arXiv Detail & Related papers (2022-01-28T09:30:32Z)
- Federated Causal Discovery [74.37739054932733]
This paper develops a gradient-based learning framework named DAG-Shared Federated Causal Discovery (DS-FCD).
It can learn the causal graph without directly touching local data and naturally handles data heterogeneity.
Extensive experiments on both synthetic and real-world datasets verify the efficacy of the proposed method.
arXiv Detail & Related papers (2021-12-07T08:04:12Z)
- BCD Nets: Scalable Variational Approaches for Bayesian Causal Discovery [97.79015388276483]
A structural equation model (SEM) is an effective framework for reasoning about causal relationships represented via a directed acyclic graph (DAG).
Recent advances have enabled effective maximum-likelihood point estimation of DAGs from observational data.
We propose BCD Nets, a variational framework for estimating a distribution over DAGs characterizing a linear-Gaussian SEM (see the SEM sampling sketch after this list).
arXiv Detail & Related papers (2021-12-06T03:35:21Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism that enables quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- On the Role of Sparsity and DAG Constraints for Learning Linear DAGs [16.97675762810828]
We study the role of sparsity and DAG constraints for learning DAG models in the linear Gaussian and non-Gaussian cases.
We propose a likelihood-based score function and show that one only has to apply soft sparsity and DAG constraints to learn a DAG equivalent to the ground-truth DAG (see the soft-constraint objective sketch after this list).
arXiv Detail & Related papers (2020-06-17T23:43:23Z)
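The two DAG-WGAN entries above describe combining a Wasserstein adversarial loss with an acyclicity constraint in an auto-encoder. The paper's exact architecture is not given in this digest, so the following is only a hedged sketch of how such a composite generator objective could look, with every tensor name hypothetical:

```python
import torch

def acyclicity(W: torch.Tensor) -> torch.Tensor:
    # h(W) = tr(exp(W * W)) - d; zero iff the weighted adjacency W is a DAG
    return torch.trace(torch.matrix_exp(W * W)) - W.shape[0]

def generator_loss(x: torch.Tensor, x_recon: torch.Tensor,
                   critic_fake: torch.Tensor, W: torch.Tensor,
                   rho: float = 1.0, alpha: float = 0.0) -> torch.Tensor:
    """Hypothetical composite objective: auto-encoder reconstruction error,
    the generator side of the Wasserstein loss (maximize critic scores of
    generated samples, i.e. minimize their negation), and an augmented-
    Lagrangian acyclicity penalty."""
    recon = torch.mean((x - x_recon) ** 2)
    wasserstein = -critic_fake.mean()
    h = acyclicity(W)
    return recon + wasserstein + 0.5 * rho * h ** 2 + alpha * h
```

As in a standard WGAN, a separate critic would be trained to maximize the gap between its scores on real and generated samples; only the generator-side term is shown here.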
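For the SEM framing in the BCD Nets entry: a linear-Gaussian SEM attaches a weight W[i, j] to each DAG edge i -> j and generates every variable as a linear combination of its parents plus Gaussian noise. A minimal simulation sketch, with names illustrative:

```python
import numpy as np

def sample_linear_gaussian_sem(W: np.ndarray, n: int,
                               noise_scale: float = 1.0,
                               seed: int = 0) -> np.ndarray:
    """Draw n rows of X satisfying X = X @ W + eps, eps ~ N(0, noise_scale^2).
    W must correspond to a DAG so that (I - W) is invertible and
    X = eps @ (I - W)^{-1} is well defined."""
    rng = np.random.default_rng(seed)
    d = W.shape[0]
    eps = rng.normal(scale=noise_scale, size=(n, d))
    return eps @ np.linalg.inv(np.eye(d) - W)
```

For example, with the two-node chain W = [[0, 1.5], [0, 0]], the second column of the returned X equals 1.5 times the first column plus noise.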
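The last entry argues that soft sparsity and DAG penalties on a likelihood-based score suffice, with no hard h(W) = 0 constraint. A hedged sketch of such an objective for the equal-variance linear-Gaussian case follows; the coefficients and exact form are illustrative, not the paper's verbatim definition:

```python
import numpy as np
from scipy.linalg import expm

def soft_constrained_objective(W: np.ndarray, X: np.ndarray,
                               lam1: float = 0.01, lam2: float = 5.0) -> float:
    """Gaussian negative log-likelihood for X = X @ W + eps (equal noise
    variances, up to additive constants) plus *soft* L1 and DAG penalties."""
    n, d = X.shape
    resid = X - X @ W
    nll = 0.5 * d * np.log(np.sum(resid ** 2)) \
          - np.linalg.slogdet(np.eye(d) - W)[1]
    h = np.trace(expm(W * W)) - d          # NOTEARS DAG-ness measure
    return nll + lam1 * np.abs(W).sum() + lam2 * h
```

Here the DAG-ness term enters only as a weighted penalty, so the optimizer trades it off against fit rather than enforcing acyclicity exactly.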