Local Causal Discovery with Linear non-Gaussian Cyclic Models
- URL: http://arxiv.org/abs/2403.14843v1
- Date: Thu, 21 Mar 2024 21:27:39 GMT
- Title: Local Causal Discovery with Linear non-Gaussian Cyclic Models
- Authors: Haoyue Dai, Ignavier Ng, Yujia Zheng, Zhengqing Gao, Kun Zhang
- Abstract summary: We present a general, unified local causal discovery method with linear non-Gaussian models.
Our identifiability results are empirically validated using both synthetic and real-world datasets.
- Score: 17.59924947011467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Local causal discovery is of great practical significance, as there are often situations where the discovery of the global causal structure is unnecessary, and the interest lies solely in a single target variable. Most existing local methods utilize conditional independence relations, providing only a partially directed graph, and assume acyclicity for the ground-truth structure, even though real-world scenarios often involve cycles like feedback mechanisms. In this work, we present a general, unified local causal discovery method with linear non-Gaussian models, whether they are cyclic or acyclic. We extend the application of independent component analysis from the global context to independent subspace analysis, enabling the exact identification of the equivalent local directed structures and causal strengths from the Markov blanket of the target variable. We also propose an alternative regression-based method for the special acyclic case. Our identifiability results are empirically validated using both synthetic and real-world datasets.
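The regression-based idea for the acyclic case rests on a classic property of linear non-Gaussian models: when regressing in the true causal direction, the residual is fully independent of the regressor, whereas in the anti-causal direction it is merely uncorrelated. A minimal, hypothetical two-variable sketch (not the paper's actual algorithm; the uniform noise and the squared-value correlation score are illustrative choices) is:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical ground truth: x -> y, with uniform (non-Gaussian) noise.
x = rng.uniform(-1, 1, n)
y = 0.8 * x + rng.uniform(-1, 1, n)

def dependence_after_regression(cause, effect):
    """OLS-regress effect on cause, then score the remaining dependence
    between residual and regressor. OLS residuals are uncorrelated with
    the regressor by construction, so we check a higher-order statistic:
    the correlation of squared values, which is ~0 only under genuine
    independence."""
    b = cause @ effect / (cause @ cause)
    resid = effect - b * cause
    return abs(np.corrcoef(cause**2, resid**2)[0, 1])

forward = dependence_after_regression(x, y)   # residual ~ independent of x
backward = dependence_after_regression(y, x)  # residual still depends on y
assert forward < backward  # non-Gaussianity reveals the direction x -> y
```

With Gaussian noise both scores would be near zero and the direction would stay unidentifiable, which is exactly why the non-Gaussianity assumption carries the identifiability.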
Related papers
- Hybrid Local Causal Discovery [23.329420595827273]
Local causal discovery aims to learn and distinguish the direct causes and effects of a target variable from observed data.
Existing constraint-based local causal discovery methods use AND or OR rules in constructing the local causal skeleton.
We propose a Hybrid Local Causal Discovery algorithm, called HLCD.
arXiv Detail & Related papers (2024-12-27T07:53:59Z) - SPARTAN: A Sparse Transformer Learning Local Causation [63.29645501232935]
Causal structures play a central role in world models that flexibly adapt to changes in the environment.
We present the SPARse TrANsformer World model (SPARTAN), a Transformer-based world model that learns local causal structures between entities in a scene.
By applying sparsity regularisation on the attention pattern between object-factored tokens, SPARTAN identifies sparse local causal models that accurately predict future object states.
arXiv Detail & Related papers (2024-11-11T11:42:48Z) - Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z) - Detecting and Identifying Selection Structure in Sequential Data [53.24493902162797]
We argue that the selective inclusion of data points based on latent objectives is common in practical situations, such as music sequences.
We show that selection structure is identifiable without any parametric assumptions or interventional experiments.
We also propose a provably correct algorithm to detect and identify selection structures as well as other types of dependencies.
arXiv Detail & Related papers (2024-06-29T20:56:34Z) - Local Causal Structure Learning in the Presence of Latent Variables [16.88791886307876]
We present a principled method for determining whether a variable is a direct cause or effect of a target.
Experimental results on both synthetic and real-world data validate the effectiveness and efficiency of our approach.
arXiv Detail & Related papers (2024-05-25T13:31:05Z) - Structural restrictions in local causal discovery: identifying direct causes of a target variable [0.9208007322096533]
Learning a set of direct causes of a target variable from an observational joint distribution is a fundamental problem in science.
Here, we are only interested in identifying the direct causes of one target variable, not the full DAG.
This allows us to relax the identifiability assumptions and develop possibly faster and more robust algorithms.
arXiv Detail & Related papers (2023-07-29T18:31:35Z) - Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to a family of methods that formalize the goal of recovering latent variables and provide estimation procedures for practical applications.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z) - A Local Method for Identifying Causal Relations under Markov Equivalence [7.904790547594697]
Causality is important for designing interpretable and robust methods in artificial intelligence research.
We propose a local approach to identify whether a variable is a cause of a given target based on causal graphical models of directed acyclic graphs (DAGs).
arXiv Detail & Related papers (2021-02-25T05:01:44Z) - Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z) - Causal Inference in Geoscience and Remote Sensing from Observational Data [9.800027003240674]
We try to estimate the correct direction of causation using a finite set of empirical data.
We illustrate performance in a collection of 28 geoscience causal inference problems.
The criterion achieves state-of-the-art detection rates in all cases and is generally robust to noise sources and distortions.
arXiv Detail & Related papers (2020-12-07T22:56:55Z) - On Disentangled Representations Learned From Correlated Data [59.41587388303554]
We bridge the gap to real-world scenarios by analyzing the behavior of the most prominent disentanglement approaches on correlated data.
We show that systematically induced correlations in the dataset are being learned and reflected in the latent representations.
We also demonstrate how to resolve these latent correlations, either using weak supervision during training or by post-hoc correcting a pre-trained model with a small number of labels.
arXiv Detail & Related papers (2020-06-14T12:47:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.