Identification of Causal Structure in the Presence of Missing Data with
Additive Noise Model
- URL: http://arxiv.org/abs/2312.12206v1
- Date: Tue, 19 Dec 2023 14:44:26 GMT
- Title: Identification of Causal Structure in the Presence of Missing Data with
Additive Noise Model
- Authors: Jie Qiao, Zhengming Chen, Jianhua Yu, Ruichu Cai, Zhifeng Hao
- Abstract summary: We find that recent advances in the additive noise model make it possible to learn causal structure in the presence of self-masking missingness.
Based on these theoretical results, we propose a practical algorithm for learning the causal skeleton and causal direction.
- Score: 24.755511829867398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Missing data are an unavoidable complication frequently encountered in many
causal discovery tasks. When a missing process depends on the missing values
themselves (known as self-masking missingness), the recovery of the joint
distribution becomes unattainable, and detecting the presence of such
self-masking missingness remains a perplexing challenge. Consequently, due to
the inability to reconstruct the original distribution and to discern the
underlying missingness mechanism, simply applying existing causal discovery
methods would lead to wrong conclusions. In this work, we find that recent
advances in the additive noise model make it possible to learn causal
structure in the presence of self-masking missingness. With this observation,
we
aim to investigate the identification problem of learning causal structure from
missing data under an additive noise model with different missingness
mechanisms, where the `no self-masking missingness' assumption can be
eliminated appropriately. Specifically, we first extend the identifiability
of the causal skeleton to the case of weak self-masking missingness (i.e., no
variable other than the variable itself can cause its self-masking indicator).
We further provide necessary and sufficient conditions for identifying the
causal direction under the additive noise model and show that the causal
structure can be identified up to an IN-equivalent pattern. Building on these
theoretical results, we finally propose a practical algorithm for learning the
causal skeleton and the causal direction. Extensive experiments on synthetic
and real data demonstrate the efficiency and effectiveness of the proposed
algorithm.
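As a rough illustration of the idea the abstract describes (not the authors' actual algorithm), the sketch below applies the classical pairwise additive-noise-model direction test to the complete cases remaining after self-masking missingness: regress each variable on the other and check which direction leaves a residual that is more nearly independent of the putative cause. The data-generating function, the logistic self-masking mechanism, the gradient-boosting regressor, and the mutual-information proxy for independence are all illustrative assumptions.

```python
# Minimal illustrative sketch: pairwise ANM direction test on complete cases
# after self-masking missingness.  All modeling choices here are assumptions
# made for the example, not the paper's algorithm.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)

# Simulate X -> Y with additive noise: Y = f(X) + E, E independent of X.
n = 2000
X = rng.normal(size=n)
Y = np.tanh(2.0 * X) + 0.3 * rng.normal(size=n)

# Self-masking missingness: Y is more likely to be missing when Y itself is large.
p_miss = 1.0 / (1.0 + np.exp(-3.0 * (Y - 0.5)))
observed = rng.random(n) >= p_miss
Xc, Yc = X[observed], Y[observed]  # keep complete cases only

def anm_dependence(cause, effect):
    """Regress effect on cause; return a dependence proxy between the cause and
    the residual (lower = residual more independent of the putative cause)."""
    reg = GradientBoostingRegressor(random_state=0).fit(cause.reshape(-1, 1), effect)
    resid = effect - reg.predict(cause.reshape(-1, 1))
    return mutual_info_regression(cause.reshape(-1, 1), resid, random_state=0)[0]

score_xy = anm_dependence(Xc, Yc)  # hypothesis: X -> Y
score_yx = anm_dependence(Yc, Xc)  # hypothesis: Y -> X
print(f"X->Y residual dependence: {score_xy:.4f}")
print(f"Y->X residual dependence: {score_yx:.4f}")
print("Inferred direction:", "X -> Y" if score_xy < score_yx else "Y -> X")
```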
Related papers
- Identifiability Guarantees for Causal Disentanglement from Purely Observational Data [10.482728002416348]
Causal disentanglement aims to learn the latent causal factors underlying observed data.
Recent advances establish identifiability results assuming that interventions on (single) latent factors are available.
We provide a precise characterization of latent factors that can be identified in nonlinear causal models.
arXiv Detail & Related papers (2024-10-31T04:18:29Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Causal Discovery in Linear Latent Variable Models Subject to Measurement Error [29.78435955758185]
We focus on causal discovery in the presence of measurement error in linear systems.
We demonstrate a surprising connection between this problem and causal discovery in the presence of unobserved parentless causes.
arXiv Detail & Related papers (2022-11-08T03:43:14Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- MissDAG: Causal Discovery in the Presence of Missing Data with Continuous Additive Noise Models [78.72682320019737]
We develop a general method, which we call MissDAG, to perform causal discovery from data with incomplete observations.
MissDAG maximizes the expected likelihood of the visible part of observations under the expectation-maximization framework.
We demonstrate the flexibility of MissDAG for incorporating various causal discovery algorithms and its efficacy through extensive simulations and real data experiments.
arXiv Detail & Related papers (2022-05-27T09:59:46Z)
- Causal Discovery in Linear Structural Causal Models with Deterministic Relations [27.06618125828978]
We focus on the task of causal discovery from observational data.
We derive a set of necessary and sufficient conditions for unique identifiability of the causal structure.
arXiv Detail & Related papers (2021-10-30T21:32:42Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- A Critical View of the Structural Causal Model [89.43277111586258]
We show that one can identify the cause and the effect without considering their interaction at all.
We propose a new adversarial training method that mimics the disentangled structure of the causal model.
Our multidimensional method outperforms the literature methods on both synthetic and real world datasets.
arXiv Detail & Related papers (2020-02-23T22:52:28Z)