Independence Testing-Based Approach to Causal Discovery under
Measurement Error and Linear Non-Gaussian Models
- URL: http://arxiv.org/abs/2210.11021v1
- Date: Thu, 20 Oct 2022 05:10:37 GMT
- Title: Independence Testing-Based Approach to Causal Discovery under
Measurement Error and Linear Non-Gaussian Models
- Authors: Haoyue Dai, Peter Spirtes, Kun Zhang
- Abstract summary: Causal discovery aims to recover causal structures generating observational data.
We consider a specific formulation of the problem, where the unobserved target variables follow a linear non-Gaussian acyclic model.
We propose the Transformed Independent Noise condition, which checks for independence between a specific linear transformation of some measured variables and certain other measured variables.
- Score: 9.016435298155827
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal discovery aims to recover causal structures generating the
observational data. Despite its success in certain problems, in many real-world
scenarios the observed variables are not the target variables of interest but
only imperfect measurements of them. Causal discovery under
measurement error aims to recover the causal graph among unobserved target
variables from observations made with measurement error. We consider a specific
formulation of the problem, where the unobserved target variables follow a
linear non-Gaussian acyclic model, and the measurement process follows the
random measurement error model. Existing methods for this formulation rely on
over-complete independent component analysis (OICA), which does not scale. In this work,
we propose the Transformed Independent Noise (TIN) condition, which checks for
independence between a specific linear transformation of some measured
variables and certain other measured variables. By leveraging the
non-Gaussianity and higher-order statistics of data, TIN is informative about
the graph structure among the unobserved target variables. By utilizing TIN,
the ordered group decomposition of the causal model is identifiable. In other
words, we can achieve with independence tests alone what previously required
OICA. Experimental results on both synthetic and real-world data
demonstrate the effectiveness and reliability of our method.
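The paper's TIN condition involves specific linear transformations of measured variables; as a minimal sketch of the underlying idea it exploits (in linear non-Gaussian models, a regression residual can be uncorrelated with the regressor yet still dependent on it, and higher-order statistics reveal this), consider the following illustration. The simulated model, coefficients, and the squared-correlation dependence proxy are illustrative choices for this sketch, not the authors' actual TIN procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Linear non-Gaussian model: Y1 -> Y2 with uniform (non-Gaussian) noise.
y1 = rng.uniform(-1, 1, n)
y2 = 0.8 * y1 + rng.uniform(-1, 1, n)

def higher_order_dependence(a, b):
    """Crude higher-order dependence proxy: |correlation| between the
    squared, centred variables. Near zero for independent variables,
    clearly nonzero for uncorrelated-but-dependent ones."""
    a = a - a.mean()
    b = b - b.mean()
    return abs(np.corrcoef(a**2, b**2)[0, 1])

def regression_residual(target, regressor):
    """Residual of the least-squares regression of target on regressor."""
    beta = np.cov(target, regressor)[0, 1] / np.var(regressor)
    return target - beta * regressor

r_causal = regression_residual(y2, y1)      # residual of Y2 on Y1
r_anticausal = regression_residual(y1, y2)  # residual of Y1 on Y2

# Both residuals are uncorrelated with their regressor by construction,
# but only the causal-direction residual is *independent* of it.
print(higher_order_dependence(r_causal, y1))      # near zero
print(higher_order_dependence(r_anticausal, y2))  # clearly larger
```

With Gaussian noise the two directions would be indistinguishable by any such test; non-Gaussianity is what makes the asymmetry, and hence the graph structure, detectable from independence tests alone.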
Related papers
- Predicting perturbation targets with causal differential networks [23.568795598997376]
We use an amortized causal discovery model to infer causal graphs from the observational and interventional datasets.
We learn to map these paired graphs to the sets of variables that were intervened upon, in a supervised learning framework.
This approach consistently outperforms baselines for perturbation modeling on seven single-cell transcriptomics datasets.
arXiv Detail & Related papers (2024-10-04T12:48:21Z)
- Structural restrictions in local causal discovery: identifying direct causes of a target variable [0.9208007322096533]
Learning a set of direct causes of a target variable from an observational joint distribution is a fundamental problem in science.
Here, we are only interested in identifying the direct causes of one target variable, not the full DAG.
This allows us to relax the identifiability assumptions and develop possibly faster and more robust algorithms.
arXiv Detail & Related papers (2023-07-29T18:31:35Z)
- Identifiable causal inference with noisy treatment and no side information [6.432072145009342]
This study proposes a model in which a continuous treatment variable is measured with error.
We prove that our model's causal effect estimates are identifiable, even without side information and knowledge of the measurement error variance.
Our work extends the range of applications in which reliable causal inference can be conducted.
arXiv Detail & Related papers (2023-06-18T18:38:10Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Variance Minimization in the Wasserstein Space for Invariant Causal Prediction [72.13445677280792]
In this work, we show that the approach taken in ICP may be reformulated as a series of nonparametric tests that scales linearly in the number of predictors.
Each of these tests relies on the minimization of a novel loss function that is derived from tools in optimal transport theory.
We prove under mild assumptions that our method is able to recover the set of identifiable direct causes, and we demonstrate in our experiments that it is competitive with other benchmark causal discovery algorithms.
arXiv Detail & Related papers (2021-10-13T22:30:47Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
- Discovery of Causal Additive Models in the Presence of Unobserved Variables [6.670414650224422]
Causal discovery from data affected by unobserved variables is an important but difficult problem to solve.
We propose a method to identify all the causal relationships that are theoretically possible to identify without being biased by unobserved variables.
arXiv Detail & Related papers (2021-06-04T03:28:27Z)
- Deconfounded Score Method: Scoring DAGs with Dense Unobserved Confounding [101.35070661471124]
We show that unobserved confounding leaves a characteristic footprint in the observed data distribution that allows for disentangling spurious and causal effects.
We propose an adjusted score-based causal discovery algorithm that may be implemented with general-purpose solvers and scales to high-dimensional problems.
arXiv Detail & Related papers (2021-03-28T11:07:59Z)
- Efficient Causal Inference from Combined Observational and Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.