Causal Discovery with Multi-Domain LiNGAM for Latent Factors
- URL: http://arxiv.org/abs/2009.09176v3
- Date: Sat, 23 Apr 2022 03:49:35 GMT
- Title: Causal Discovery with Multi-Domain LiNGAM for Latent Factors
- Authors: Yan Zeng, Shohei Shimizu, Ruichu Cai, Feng Xie, Michio Yamamoto,
Zhifeng Hao
- Abstract summary: We propose Multi-Domain Linear Non-Gaussian Acyclic Models for Latent Factors (MD-LiNA), where the causal structure among latent factors of interest is shared for all domains.
We show that the proposed method provides locally consistent estimators.
- Score: 30.9081158491659
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discovering causal structures among latent factors from observed data is a
particularly challenging problem. Despite some efforts on this problem,
existing methods focus on single-domain data only. In this paper, we
propose Multi-Domain Linear Non-Gaussian Acyclic Models for Latent Factors
(MD-LiNA), where the causal structure among latent factors of interest is
shared for all domains, and we provide its identification results. The model
enriches the causal representation for multi-domain data. We propose an
integrated two-phase algorithm to estimate the model. In particular, we first
locate the latent factors and estimate the factor loading matrix. Then to
uncover the causal structure among the shared latent factors of interest, we
derive a score function that jointly characterizes the independence among
external influences and the dependence between the multi-domain latent factors
and the latent factors of interest. We show that the proposed method
provides locally consistent estimators. Experimental results on both synthetic
and real-world data demonstrate the efficacy and robustness of our approach.
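The abstract's two-phase recipe (first recover latent factor scores from their observed indicators, then apply LiNGAM-style non-Gaussianity reasoning to orient edges among the factors) can be illustrated with a minimal sketch. This is not the paper's MD-LiNA estimator: the factor-score step is simplified to block averaging with known measurement structure, and the direction step uses a generic residual-independence score rather than the paper's score function; all variable names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def standardize(v):
    return (v - v.mean()) / v.std()

def dependence(a, b):
    # Correlation between squared values: near zero when a and b are
    # independent, clearly nonzero for a regressor and its wrong-direction
    # residual under non-Gaussian external influences.
    return abs(np.corrcoef(standardize(a) ** 2, standardize(b) ** 2)[0, 1])

def pairwise_direction(x, y):
    # Regress each way; in a linear non-Gaussian model, only the causal
    # direction yields a residual independent of the regressor.
    x, y = standardize(x), standardize(y)
    r_xy = y - (x @ y / (x @ x)) * x   # residual of y regressed on x
    r_yx = x - (y @ x / (y @ y)) * y   # residual of x regressed on y
    return "x->y" if dependence(x, r_xy) < dependence(y, r_yx) else "y->x"

# Simulate: latent f1 -> f2, each measured by three noisy indicators.
n = 50_000
f1 = rng.uniform(-1, 1, n)                 # non-Gaussian external influence
f2 = 0.8 * f1 + rng.uniform(-1, 1, n)
X1 = np.stack([f1 + 0.3 * rng.normal(size=n) for _ in range(3)])
X2 = np.stack([f2 + 0.3 * rng.normal(size=n) for _ in range(3)])

# Phase 1 (simplified): recover factor scores by averaging each indicator block.
fhat1, fhat2 = X1.mean(axis=0), X2.mean(axis=0)

# Phase 2: LiNGAM-style pairwise direction between the estimated factors.
print(pairwise_direction(fhat1, fhat2))
```

Non-Gaussianity of the external influences (here, uniform noise) is what makes the direction identifiable; with Gaussian influences both regressions would leave an independent residual and the score would be uninformative.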
Related papers
- C-Disentanglement: Discovering Causally-Independent Generative Factors
under an Inductive Bias of Confounder [35.09708249850816]
We introduce Confounded-Disentanglement (C-Disentanglement), the first framework to explicitly introduce the inductive bias of confounders.
We conduct extensive experiments on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-10-26T11:44:42Z) - Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z) - Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z) - Interventional Causal Representation Learning [75.18055152115586]
Causal representation learning seeks to extract high-level latent factors from low-level sensory data.
Can interventional data facilitate causal representation learning?
We show that interventional data often carries geometric signatures of the latent factors' support.
arXiv Detail & Related papers (2022-09-24T04:59:03Z) - Causality Inspired Representation Learning for Domain Generalization [47.574964496891404]
We introduce a general structural causal model to formalize the domain generalization problem.
Our goal is to extract the causal factors from inputs and then reconstruct the invariant causal mechanisms.
We highlight that ideal causal factors should meet three basic properties: separated from non-causal factors, jointly independent, and causally sufficient for classification.
arXiv Detail & Related papers (2022-03-27T08:08:33Z) - Causal Discovery in Linear Structural Causal Models with Deterministic
Relations [27.06618125828978]
We focus on the task of causal discovery from observational data.
We derive a set of necessary and sufficient conditions for unique identifiability of the causal structure.
arXiv Detail & Related papers (2021-10-30T21:32:42Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z) - Disentangling Observed Causal Effects from Latent Confounders using
Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z) - Causal Inference in Geoscience and Remote Sensing from Observational
Data [9.800027003240674]
We try to estimate the correct direction of causation using a finite set of empirical data.
We illustrate performance in a collection of 28 geoscience causal inference problems.
The criterion achieves state-of-the-art detection rates in all cases and is generally robust to noise sources and distortions.
arXiv Detail & Related papers (2020-12-07T22:56:55Z) - Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can learn spurious correlations during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z) - Causal Inference in Possibly Nonlinear Factor Models [2.0305676256390934]
This paper develops a general causal inference method for treatment effects models with noisily measured confounders.
The main building block is a local principal subspace approximation procedure that combines $K$-nearest neighbors matching and principal component analysis.
Results are illustrated with an empirical application studying the effect of political connections on stock returns of financial firms.
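The "local principal subspace approximation" building block described above (K-nearest neighbors matching combined with principal component analysis) can be sketched as follows. This is a hedged illustration, not the paper's procedure: `local_pca_denoise`, its parameters, and the toy data are all hypothetical, and the sketch only shows the geometric idea of projecting each neighborhood onto its top principal directions to approximate noisily measured confounders.

```python
import numpy as np

def local_pca_denoise(X, k=50, d=1):
    """For each row of X, project it onto the top-d principal directions of
    its k-nearest-neighbor patch -- a local subspace approximation of a
    low-dimensional structure observed with measurement noise."""
    n = X.shape[0]
    out = np.empty_like(X, dtype=float)
    for i in range(n):
        dist = np.linalg.norm(X - X[i], axis=1)
        idx = np.argsort(dist)[:k]              # k nearest neighbors (incl. self)
        patch = X[idx]
        mu = patch.mean(axis=0)
        # Top-d principal directions of the centered neighborhood.
        _, _, Vt = np.linalg.svd(patch - mu, full_matrices=False)
        P = Vt[:d]                              # (d, n_features)
        out[i] = mu + (X[i] - mu) @ P.T @ P     # project onto local subspace
    return out

# Toy example: noisy 2-D measurements of a 1-D latent variable z.
rng = np.random.default_rng(1)
z = rng.uniform(0, 1, 400)
truth = np.stack([z, z ** 2], axis=1)
X = truth + 0.05 * rng.normal(size=(400, 2))
Xs = local_pca_denoise(X, k=40, d=1)
print(np.mean((Xs - truth) ** 2) < np.mean((X - truth) ** 2))
```

Projecting onto the local tangent direction removes noise orthogonal to the latent manifold while keeping the in-manifold coordinate, which is why the denoised points sit closer to the truth than the raw measurements.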
arXiv Detail & Related papers (2020-08-31T14:39:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.