Causal network learning with non-invertible functional relationships
- URL: http://arxiv.org/abs/2004.09646v1
- Date: Mon, 20 Apr 2020 21:32:05 GMT
- Title: Causal network learning with non-invertible functional relationships
- Authors: Bingling Wang and Qing Zhou
- Abstract summary: Several recent results have established the identifiability of causal DAGs with non-Gaussian and/or nonlinear structural equation models (SEMs).
In this paper, we focus on nonlinear SEMs defined by non-invertible functions, which exist in many data domains.
We develop a method to incorporate this test in structure learning of DAGs that contain both linear and nonlinear causal relations.
- Score: 7.845605663563046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discovery of causal relationships from observational data is an important
problem in many areas. Several recent results have established the
identifiability of causal DAGs with non-Gaussian and/or nonlinear structural
equation models (SEMs). In this paper, we focus on nonlinear SEMs defined by
non-invertible functions, which exist in many data domains, and propose a novel
test for non-invertible bivariate causal models. We further develop a method to
incorporate this test in structure learning of DAGs that contain both linear
and nonlinear causal relations. By extensive numerical comparisons, we show
that our algorithms outperform existing DAG learning methods in identifying
causal graphical structures. We illustrate the practical application of our
method in learning causal networks for combinatorial binding of transcription
factors from ChIP-Seq data.
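To make the setting concrete, below is a minimal sketch (assuming Python with numpy and scipy; the function residual_dependence is a hypothetical helper). It simulates a bivariate SEM with a non-invertible mechanism, y = x^2 + noise, and runs a toy additive-noise-style direction check. This is only an illustration of the setting the paper targets, not the test proposed by the authors.

```python
# Illustration of a non-invertible bivariate SEM, y = f(x) + noise with
# f(x) = x^2, and a crude additive-noise-style direction check.
# NOT the paper's proposed test; a sketch of the problem setting only.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                  # cause
y = x ** 2 + 0.3 * rng.normal(size=n)   # non-invertible mechanism f(x) = x^2

def residual_dependence(cause, effect, degree=3):
    """Regress effect on cause with a polynomial fit and return a crude
    dependence proxy: |rank correlation| between the squared residuals and
    the regressor. In the true causal direction of an additive-noise model
    the residuals should be approximately independent of the regressor."""
    coeffs = np.polyfit(cause, effect, degree)
    residuals = effect - np.polyval(coeffs, cause)
    rho, _ = spearmanr(residuals ** 2, cause)
    return abs(rho)

dep_xy = residual_dependence(x, y)  # fitting y ~ x: residuals look like pure noise
dep_yx = residual_dependence(y, x)  # fitting x ~ y: residual spread still tracks y
print(f"dependence for x -> y: {dep_xy:.3f}")
print(f"dependence for y -> x: {dep_yx:.3f}")
print("inferred direction:", "x -> y" if dep_xy < dep_yx else "y -> x")
```

Note that a plain correlation between the raw residuals and the regressor would miss the dependence in this example because of symmetry, which is why practical methods use more general independence measures; the paper's contribution is a test designed specifically for non-invertible bivariate causal models such as the one simulated above.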
Related papers
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z)
- Directed Cyclic Graph for Causal Discovery from Multivariate Functional Data [15.26007975367927]
We introduce a functional linear structural equation model for causal structure learning.
To enhance interpretability, our model involves a low-dimensional causal embedded space.
We prove that the proposed model is causally identifiable under standard assumptions.
arXiv Detail & Related papers (2023-10-31T15:19:24Z)
- Causal disentanglement of multimodal data [1.589226862328831]
We introduce a causal representation learning algorithm (causalPIMA) that can use multimodal data and known physics to discover important features with causal relationships.
Our results demonstrate the capability of learning an interpretable causal structure while simultaneously discovering key features in a fully unsupervised setting.
arXiv Detail & Related papers (2023-10-27T20:30:11Z)
- Heteroscedastic Causal Structure Learning [2.566492438263125]
We tackle the heteroscedastic causal structure learning problem under Gaussian noises.
By exploiting the normality of the causal mechanisms, we can recover a valid causal ordering.
The result is HOST (Heteroscedastic causal STructure learning), a simple yet effective causal structure learning algorithm.
arXiv Detail & Related papers (2023-07-16T07:53:16Z)
- Discovering Dynamic Causal Space for DAG Structure Learning [64.763763417533]
We propose a dynamic causal space for DAG structure learning, coined CASPER.
It integrates the graph structure into the score function as a new measure in the causal space to faithfully reflect the causal distance between the estimated and the ground-truth DAG.
arXiv Detail & Related papers (2023-06-05T12:20:40Z)
- Learning Linear Causal Representations from Interventions under General Nonlinear Mixing [52.66151568785088]
We prove strong identifiability results given unknown single-node interventions without access to the intervention targets.
This is the first instance of causal identifiability from non-paired interventions for deep neural network embeddings.
arXiv Detail & Related papers (2023-06-04T02:32:12Z)
- Rank-Based Causal Discovery for Post-Nonlinear Models [2.4493299476776778]
Post-nonlinear (PNL) causal models constitute one of the most flexible options for such restricted subclasses.
We propose a new approach for PNL causal discovery that uses rank-based methods to estimate the functional parameters.
arXiv Detail & Related papers (2023-02-23T21:19:23Z)
- Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z)
- BCDAG: An R package for Bayesian structure and Causal learning of Gaussian DAGs [77.34726150561087]
We introduce the R package BCDAG for causal discovery and causal effect estimation from observational data.
Our implementation scales efficiently with the number of observations and, whenever the DAGs are sufficiently sparse, the number of variables in the dataset.
We then illustrate the main functions and algorithms on both real and simulated datasets.
arXiv Detail & Related papers (2022-01-28T09:30:32Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.