Causality Learning With Wasserstein Generative Adversarial Networks
- URL: http://arxiv.org/abs/2206.01496v1
- Date: Fri, 3 Jun 2022 10:45:47 GMT
- Title: Causality Learning With Wasserstein Generative Adversarial Networks
- Authors: Hristo Petkov, Colin Hanley and Feng Dong
- Abstract summary: A model named DAG-WGAN combines the Wasserstein-based adversarial loss with an acyclicity constraint in an auto-encoder architecture.
It simultaneously learns causal structures while improving its data generation capability.
We compare the performance of DAG-WGAN with other models that do not involve the Wasserstein metric in order to identify its contribution to causal structure learning.
- Score: 2.492300648514129
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventional methods for causal structure learning from data face significant
challenges due to the combinatorial search space. Recently, the problem has been
formulated into a continuous optimization framework with an acyclicity
constraint to learn Directed Acyclic Graphs (DAGs). Such a framework allows the
utilization of deep generative models for causal structure learning to better
capture the relations between data sample distributions and DAGs. However, so
far no study has experimented with the use of Wasserstein distance in the
context of causal structure learning. Our model named DAG-WGAN combines the
Wasserstein-based adversarial loss with an acyclicity constraint in an
auto-encoder architecture. It simultaneously learns causal structures while
improving its data generation capability. We compare the performance of
DAG-WGAN with other models that do not involve the Wasserstein metric in order
to identify its contribution to causal structure learning. Our model performs
better with high cardinality data according to our experiments.
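The continuous optimization framework the abstract refers to typically replaces the combinatorial DAG search with a smooth acyclicity constraint, commonly the NOTEARS-style measure h(W) = tr(e^{W∘W}) − d, which is zero exactly when the weighted adjacency matrix W encodes a DAG. A minimal sketch (illustrative only; the function name is our own, and DAG-WGAN's exact formulation may differ):

```python
import numpy as np
from scipy.linalg import expm

def acyclicity_penalty(W):
    """NOTEARS-style acyclicity measure h(W) = tr(e^{W∘W}) - d.

    h(W) == 0 iff the weighted adjacency matrix W encodes a DAG;
    it grows with the strength of any directed cycles in W.
    """
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

# A 3-node chain (a DAG): 0 -> 1 -> 2
dag = np.array([[0., 1., 0.],
                [0., 0., 1.],
                [0., 0., 0.]])

# Adding the edge 2 -> 0 closes a directed cycle
cyclic = dag.copy()
cyclic[2, 0] = 1.
```

In such frameworks the penalty is added to the training loss (here, alongside the Wasserstein adversarial loss), so gradient-based optimization pushes the learned adjacency matrix toward acyclicity.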
Related papers
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z) - Tree Search in DAG Space with Model-based Reinforcement Learning for
Causal Discovery [6.772856304452474]
CD-UCT is a model-based reinforcement learning method for causal discovery based on tree search.
We formalize and prove the correctness of an efficient algorithm for excluding edges that would introduce cycles.
The proposed method can be applied broadly to causal Bayesian networks with both discrete and continuous random variables.
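The edge-exclusion step mentioned above rests on a standard graph fact: adding an edge u → v to a DAG creates a cycle exactly when v already reaches u. A hedged sketch of that reachability check (our own illustrative helper, not the paper's implementation):

```python
from collections import defaultdict, deque

def would_create_cycle(edges, u, v):
    """Return True if adding edge u -> v to the DAG given by `edges`
    would introduce a cycle, i.e. if v already reaches u via a
    directed path (checked here with breadth-first search)."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
    seen, queue = {v}, deque([v])
    while queue:
        node = queue.popleft()
        if node == u:
            return True
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

edges = [(0, 1), (1, 2)]  # the chain 0 -> 1 -> 2
```

For example, adding 2 → 0 to the chain above would close a cycle, while adding 0 → 2 would not.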
arXiv Detail & Related papers (2023-10-20T15:14:18Z) - Heteroscedastic Causal Structure Learning [2.566492438263125]
We tackle the heteroscedastic causal structure learning problem under Gaussian noises.
By exploiting the normality of the causal mechanisms, we can recover a valid causal ordering.
The result is HOST (Heteroscedastic causal STructure learning), a simple yet effective causal structure learning algorithm.
arXiv Detail & Related papers (2023-07-16T07:53:16Z) - Discovering Dynamic Causal Space for DAG Structure Learning [64.763763417533]
We propose a dynamic causal space for DAG structure learning, coined CASPER.
It integrates the graph structure into the score function as a new measure in the causal space to faithfully reflect the causal distance between estimated and ground truth DAG.
arXiv Detail & Related papers (2023-06-05T12:20:40Z) - Directed Acyclic Graph Structure Learning from Dynamic Graphs [44.21230819336437]
Estimating the structure of directed acyclic graphs (DAGs) of features (variables) plays a vital role in revealing the latent data generation process.
We study the learning problem of node feature generation mechanism on such ubiquitous dynamic graph data.
arXiv Detail & Related papers (2022-11-30T14:22:01Z) - Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z) - DAG-WGAN: Causal Structure Learning With Wasserstein Generative
Adversarial Networks [2.492300648514129]
This paper proposes DAG-WGAN, which combines a Wasserstein-based adversarial loss and an acyclicity constraint within an auto-encoder architecture.
It simultaneously learns causal structures and improves its data generation capability by leveraging the strength from the Wasserstein distance metric.
Our experiments have evaluated DAG-WGAN against the state-of-the-art and demonstrated its good performance.
arXiv Detail & Related papers (2022-04-01T12:27:27Z) - BCDAG: An R package for Bayesian structure and Causal learning of
Gaussian DAGs [77.34726150561087]
We introduce the BCDAG R package for causal discovery and causal effect estimation from observational data.
Our implementation scales efficiently with the number of observations and, whenever the DAGs are sufficiently sparse, the number of variables in the dataset.
We then illustrate the main functions and algorithms on both real and simulated datasets.
arXiv Detail & Related papers (2022-01-28T09:30:32Z) - Multi-task Learning of Order-Consistent Causal Graphs [59.9575145128345]
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs).
Under a multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models.
We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order.
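The $l_1/l_2$ penalty groups the coefficients of the same edge across the $K$ task-specific matrices, so an edge is either shared or suppressed jointly, which encourages a common sparsity pattern across tasks. A minimal numpy sketch of the penalty term alone (the function name is our own; the paper's full estimator also includes the likelihood term):

```python
import numpy as np

def l1_l2_penalty(Ws):
    """l1/l2 group penalty over K coefficient matrices of shape (d, d):
    the sum over edges (i, j) of the l2 norm of that edge's K
    task-specific coefficients. Zeroing a group removes the edge
    from all K structural equation models at once."""
    W = np.stack(Ws)                              # shape (K, d, d)
    return np.sum(np.sqrt(np.sum(W ** 2, axis=0)))

# Two tasks sharing a single edge 0 -> 1 with coefficients 3 and 4:
Ws = [np.array([[0., 3.], [0., 0.]]),
      np.array([[0., 4.], [0., 0.]])]
```

Here the only nonzero group is the edge (0, 1), so the penalty is sqrt(3² + 4²) = 5.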
arXiv Detail & Related papers (2021-11-03T22:10:18Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.