Identification of Causal Structure in the Presence of Missing Data with
Additive Noise Model
- URL: http://arxiv.org/abs/2312.12206v1
- Date: Tue, 19 Dec 2023 14:44:26 GMT
- Title: Identification of Causal Structure in the Presence of Missing Data with
Additive Noise Model
- Authors: Jie Qiao, Zhengming Chen, Jianhua Yu, Ruichu Cai, Zhifeng Hao
- Abstract summary: We find that recent advances in the additive noise model have the potential for learning causal structure in the presence of self-masking missingness.
We propose a practical algorithm, based on these theoretical results, for learning the causal skeleton and causal direction.
- Score: 24.755511829867398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Missing data are an unavoidable complication frequently encountered in many
causal discovery tasks. When a missing process depends on the missing values
themselves (known as self-masking missingness), the recovery of the joint
distribution becomes unattainable, and detecting the presence of such
self-masking missingness remains a perplexing challenge. Consequently, due to
the inability to reconstruct the original distribution and to discern the
underlying missingness mechanism, simply applying existing causal discovery
methods would lead to wrong conclusions. In this work, we find that recent
advances in the additive noise model make it possible to learn causal structure
in the presence of self-masking missingness. With this observation, we
aim to investigate the identification problem of learning causal structure from
missing data under an additive noise model with different missingness
mechanisms, where the `no self-masking missingness' assumption can be
eliminated appropriately. Specifically, we first extend the identifiability of
the causal skeleton to the case of weak self-masking missingness (i.e., no
variable other than the missing variable itself can be a cause of its
self-masking indicator). We further provide necessary and sufficient conditions
for identifying the causal direction under the additive noise model and show
that the causal structure can be identified up to an IN-equivalent pattern.
Finally, we propose a practical algorithm for learning the causal skeleton and
causal direction based on these theoretical results.
Extensive experiments on synthetic and real data demonstrate the efficiency and
effectiveness of the proposed algorithms.
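The abstract's two key points can be illustrated with a small simulation. The sketch below (not the paper's actual algorithm; variable names and the polynomial-regression residual test are illustrative assumptions) generates data from a nonlinear additive noise model X → Y, applies self-masking missingness on Y (Y goes missing when its own value is large), and shows (a) that the complete-case distribution is biased, which is why naive causal discovery fails, and (b) how an ANM-style direction test compares residual dependence in both directions:

```python
# Illustrative sketch of the setting described in the abstract; this is NOT
# the paper's algorithm. The residual-dependence proxy (Spearman rank
# correlation between cause and regression residuals) is a crude stand-in
# for a proper independence test such as HSIC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000

# Ground-truth additive noise model: X -> Y with Y = X^3 + E.
x = rng.normal(size=n)
y = x**3 + rng.normal(size=n)

# Self-masking missingness: whether Y is observed depends on Y itself
# (here, the largest 20% of Y values go missing).
observed = y < np.quantile(y, 0.8)
xo, yo = x[observed], y[observed]

# (a) The complete-case distribution is biased: truncating large Y values
# shifts the observed mean of Y downward, so the joint distribution
# cannot be recovered from the observed cases alone.
print(f"full-data mean of Y:     {y.mean():.3f}")
print(f"complete-case mean of Y: {yo.mean():.3f}")

def residual_dependence(cause, effect, deg=3):
    """Fit a degree-`deg` polynomial regression of effect on cause and
    return a crude dependence score between the cause and the residuals
    (|Spearman rho|; 0 suggests independence, as an ANM expects in the
    true causal direction)."""
    coef = np.polyfit(cause, effect, deg)
    resid = effect - np.polyval(coef, cause)
    return abs(stats.spearmanr(cause, resid)[0])

# (b) ANM-style direction comparison on the complete cases.
fwd = residual_dependence(xo, yo)  # regress Y on X (true direction)
bwd = residual_dependence(yo, xo)  # regress X on Y (reverse direction)
print(f"residual dependence X->Y: {fwd:.3f}")
print(f"residual dependence Y->X: {bwd:.3f}")
```

Under the ANM assumption, residuals should be (approximately) independent of the cause only in the true direction; the paper's contribution is characterizing when this kind of reasoning remains valid despite the self-masking truncation shown above.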
Related papers
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z) - A Versatile Causal Discovery Framework to Allow Causally-Related Hidden
Variables [28.51579090194802]
We introduce a novel framework for causal discovery that accommodates the presence of causally-related hidden variables almost everywhere in the causal network.
We develop a Rank-based Latent Causal Discovery algorithm, RLCD, that can efficiently locate hidden variables, determine their cardinalities, and discover the entire causal structure over both measured and hidden ones.
Experimental results on both synthetic and real-world personality data sets demonstrate the efficacy of the proposed approach in finite-sample cases.
arXiv Detail & Related papers (2023-12-18T07:57:39Z) - Identifiable Latent Polynomial Causal Models Through the Lens of Change [85.67870425656368]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z) - Nonparametric Identifiability of Causal Representations from Unknown
Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - Causal Discovery in Linear Latent Variable Models Subject to Measurement
Error [29.78435955758185]
We focus on causal discovery in the presence of measurement error in linear systems.
We demonstrate a surprising connection between this problem and causal discovery in the presence of unobserved parentless causes.
arXiv Detail & Related papers (2022-11-08T03:43:14Z) - MissDAG: Causal Discovery in the Presence of Missing Data with
Continuous Additive Noise Models [78.72682320019737]
We develop a general method, which we call MissDAG, to perform causal discovery from data with incomplete observations.
MissDAG maximizes the expected likelihood of the visible part of observations under the expectation-maximization framework.
We demonstrate the flexibility of MissDAG for incorporating various causal discovery algorithms and its efficacy through extensive simulations and real data experiments.
arXiv Detail & Related papers (2022-05-27T09:59:46Z) - Causal Discovery in Linear Structural Causal Models with Deterministic
Relations [27.06618125828978]
We focus on the task of causal discovery from observational data.
We derive a set of necessary and sufficient conditions for unique identifiability of the causal structure.
arXiv Detail & Related papers (2021-10-30T21:32:42Z) - Identifying Causal Effects via Context-specific Independence Relations [9.51801023527378]
Causal effect identification considers whether an interventional probability distribution can be uniquely determined from a passively observed distribution.
We show that deciding causal effect non-identifiability is NP-hard in the presence of context-specific independence relations.
Motivated by this, we design a calculus and an automated search procedure for identifying causal effects in the presence of CSIs.
arXiv Detail & Related papers (2020-09-21T11:38:15Z) - Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z) - A Critical View of the Structural Causal Model [89.43277111586258]
We show that one can identify the cause and the effect without considering their interaction at all.
We propose a new adversarial training method that mimics the disentangled structure of the causal model.
Our multidimensional method outperforms the literature methods on both synthetic and real world datasets.
arXiv Detail & Related papers (2020-02-23T22:52:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.