Independence Testing-Based Approach to Causal Discovery under
Measurement Error and Linear Non-Gaussian Models
- URL: http://arxiv.org/abs/2210.11021v1
- Date: Thu, 20 Oct 2022 05:10:37 GMT
- Title: Independence Testing-Based Approach to Causal Discovery under
Measurement Error and Linear Non-Gaussian Models
- Authors: Haoyue Dai, Peter Spirtes, Kun Zhang
- Abstract summary: Causal discovery aims to recover causal structures generating observational data.
We consider a specific formulation of the problem, where the unobserved target variables follow a linear non-Gaussian acyclic model.
We propose the Transformed Independent Noise condition, which checks for independence between a specific linear transformation of some measured variables and certain other measured variables.
- Score: 9.016435298155827
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal discovery aims to recover causal structures generating the
observational data. Despite its success in certain problems, in many real-world
scenarios the observed variables are not the target variables of interest, but
the imperfect measures of the target variables. Causal discovery under
measurement error aims to recover the causal graph among unobserved target
variables from observations made with measurement error. We consider a specific
formulation of the problem, where the unobserved target variables follow a
linear non-Gaussian acyclic model, and the measurement process follows the
random measurement error model. Existing methods on this formulation rely on
non-scalable over-complete independent component analysis (OICA). In this work,
we propose the Transformed Independent Noise (TIN) condition, which checks for
independence between a specific linear transformation of some measured
variables and certain other measured variables. By leveraging the
non-Gaussianity and higher-order statistics of data, TIN is informative about
the graph structure among the unobserved target variables. By utilizing TIN,
the ordered group decomposition of the causal model is identifiable. In other
words, we could achieve what once required OICA to achieve by only conducting
independence tests. Experimental results on both synthetic and real-world data
demonstrate the effectiveness and reliability of our method.
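The TIN condition described above can be illustrated with a small numerical sketch. The setup, variable names, coefficients, and the use of distance covariance as the independence measure below are all illustrative assumptions, not the paper's actual algorithm: two latent variables form a linear non-Gaussian chain, each is observed through random measurement error, and we compare a linear transformation of some measured variables that cancels the latent influence (and should be independent of a further measurement) against one that does not.

```python
import numpy as np

def distance_covariance(x, y):
    """Biased sample distance covariance between two 1-D samples.

    Near zero iff x and y are (empirically) independent, which makes it a
    simple stand-in for the independence test TIN relies on."""
    x = np.asarray(x, float).reshape(-1, 1)
    y = np.asarray(y, float).reshape(-1, 1)
    a = np.abs(x - x.T)                      # pairwise distance matrices
    b = np.abs(y - y.T)
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()  # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return np.sqrt(max((A * B).mean(), 0.0))

rng = np.random.default_rng(0)
n = 2000

# Hypothetical latent linear non-Gaussian chain: L1 -> L2 with coefficient 2.0
# and uniform (non-Gaussian) noises.
L1 = rng.uniform(-1, 1, n)
L2 = 2.0 * L1 + rng.uniform(-1, 1, n)

# Measured variables under the random measurement error model:
# X1a and X1b both measure L1; X2 measures L2.
X1a = L1 + 0.3 * rng.uniform(-1, 1, n)
X1b = L1 + 0.3 * rng.uniform(-1, 1, n)
X2 = L2 + 0.3 * rng.uniform(-1, 1, n)

# Transformation omega = (1, -2) of Y = (X2, X1a) cancels L1, so the result
# contains only noise terms and should be independent of Z = X1b.
good = X2 - 2.0 * X1a
# A transformation that leaves L1 in the residual stays dependent on X1b.
bad = X2 + 1.0 * X1a

print(distance_covariance(good, X1b))  # small: near-independence
print(distance_covariance(bad, X1b))   # clearly larger: dependence
```

In this toy setting the cancelling direction yields a much smaller distance covariance with the held-out measurement than the non-cancelling one, which is the kind of contrast an independence-testing-based search can exploit without resorting to OICA.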
Related papers
- Structural restrictions in local causal discovery: identifying direct
causes of a target variable [0.0]
We consider the problem of learning a set of direct causes of a target variable from an observational joint distribution.
We provide two practical algorithms for estimating the direct causes from a finite random sample and demonstrate their effectiveness on several benchmark datasets.
arXiv Detail & Related papers (2023-07-29T18:31:35Z)
- Identifiable causal inference with noisy treatment and no side information [6.432072145009342]
We study a model that assumes a continuous treatment variable that is inaccurately measured.
We prove that our model's causal effect estimates are identifiable, even without knowledge of the measurement error variance or other side information.
Our work extends the range of applications in which reliable causal inference can be conducted.
arXiv Detail & Related papers (2023-06-18T18:38:10Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Weight-variant Latent Causal Models [79.79711624326299]
Causal representation learning exposes latent high-level causal variables behind low-level observations.
In this work we focus on identifying latent causal variables.
We show that the transitivity severely hinders the identifiability of latent causal variables.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal variables.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Causal Inference with Treatment Measurement Error: A Nonparametric Instrumental Variable Approach [24.52459180982653]
We propose a kernel-based nonparametric estimator for the causal effect when the cause is corrupted by error.
We empirically show that our proposed method, MEKIV, improves over baselines and is robust under changes in the strength of measurement error.
arXiv Detail & Related papers (2022-06-18T11:47:25Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
- Discovery of Causal Additive Models in the Presence of Unobserved Variables [6.670414650224422]
Causal discovery from data affected by unobserved variables is an important but difficult problem to solve.
We propose a method to identify all the causal relationships that are theoretically possible to identify without being biased by unobserved variables.
arXiv Detail & Related papers (2021-06-04T03:28:27Z)
- Deconfounded Score Method: Scoring DAGs with Dense Unobserved Confounding [101.35070661471124]
We show that unobserved confounding leaves a characteristic footprint in the observed data distribution that allows for disentangling spurious and causal effects.
We propose an adjusted score-based causal discovery algorithm that may be implemented with general-purpose solvers and scales to high-dimensional problems.
arXiv Detail & Related papers (2021-03-28T11:07:59Z)
- Efficient Causal Inference from Combined Observational and Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.