Deconfounded Score Method: Scoring DAGs with Dense Unobserved Confounding
- URL: http://arxiv.org/abs/2103.15106v1
- Date: Sun, 28 Mar 2021 11:07:59 GMT
- Title: Deconfounded Score Method: Scoring DAGs with Dense Unobserved Confounding
- Authors: Alexis Bellot, Mihaela van der Schaar
- Abstract summary: We show that unobserved confounding leaves a characteristic footprint in the observed data distribution that allows for disentangling spurious and causal effects.
We propose an adjusted score-based causal discovery algorithm that may be implemented with general-purpose solvers and scales to high-dimensional problems.
- Score: 101.35070661471124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unobserved confounding is one of the greatest challenges for causal
discovery. The case in which unobserved variables have a potentially widespread
effect on many of the observed ones is particularly difficult because most
pairs of variables are conditionally dependent given any other subset. In this
paper, we show that beyond conditional independencies, unobserved confounding
in this setting leaves a characteristic footprint in the observed data
distribution that allows for disentangling spurious and causal effects. Using
this insight, we demonstrate that a sparse linear Gaussian directed acyclic
graph among observed variables may be recovered approximately and propose an
adjusted score-based causal discovery algorithm that may be implemented with
general-purpose solvers and scales to high-dimensional problems. We find, in
addition, that despite the conditions we pose to guarantee causal recovery,
performance in practice is robust to large deviations in model assumptions.
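As a rough illustration of this pipeline, here is a minimal Python sketch. It assumes, rather than takes from the paper, that the dense-confounding footprint can be damped by trimming the top singular values of the (centred) data matrix, and it pairs a BIC-style linear-Gaussian score with a simple greedy edge search as a stand-in for the general-purpose solvers mentioned in the abstract; `spectral_trim`, `greedy_search`, and the penalty `lam` are illustrative choices, not the authors' adjusted score.

```python
import numpy as np


def spectral_trim(X, tau=None):
    """Cap large singular values of the centred data to damp dense, roughly
    low-rank confounding (an assumption of this sketch, not the paper's recipe)."""
    X = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    if tau is None:
        tau = np.median(s)                          # heuristic trimming level
    return U @ np.diag(np.minimum(s, tau)) @ Vt


def node_score(X, j, parents, lam):
    """BIC-style contribution of node j regressed on its candidate parents."""
    n = X.shape[0]
    y = X[:, j]
    if parents:
        Z = X[:, parents]
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
    else:
        resid = y - y.mean()
    sigma2 = resid @ resid / n + 1e-12
    return 0.5 * n * np.log(sigma2) + lam * len(parents)


def dag_score(X, A, lam):
    """Sum of per-node scores for the DAG with adjacency A (A[i, j] = 1 for i -> j)."""
    return sum(node_score(X, j, list(np.flatnonzero(A[:, j])), lam)
               for j in range(X.shape[1]))


def is_acyclic(A):
    """Acyclicity check: trace((I + A/d)^d) == d iff the nonnegative matrix A has no cycles."""
    d = A.shape[0]
    return np.isclose(np.trace(np.linalg.matrix_power(np.eye(d) + A / d, d)), d)


def greedy_search(X, lam=None, max_sweeps=50):
    """Hill-climb over single-edge toggles; a crude stand-in for a general-purpose solver."""
    n, d = X.shape
    lam = 0.5 * np.log(n) if lam is None else lam   # BIC-like sparsity penalty
    A = np.zeros((d, d), dtype=int)
    best = dag_score(X, A, lam)
    for _ in range(max_sweeps):
        improved = False
        for i in range(d):
            for j in range(d):
                if i == j:
                    continue
                A[i, j] ^= 1                        # tentatively toggle edge i -> j
                if is_acyclic(A):
                    s = dag_score(X, A, lam)
                    if s < best - 1e-9:
                        best, improved = s, True
                        continue                    # keep the toggle
                A[i, j] ^= 1                        # otherwise revert it
        if not improved:
            break
    return A


# Usage on placeholder data: deconfound first, then run the score-based search.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 6))                   # stand-in for observed data
A_hat = greedy_search(spectral_trim(X))
```

On real or synthetic data, the trimming level `tau` controls how aggressively the assumed dense confounding is suppressed, and the recovered adjacency `A_hat` would be compared against a known ground truth to gauge performance.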
Related papers
- Score matching through the roof: linear, nonlinear, and latent variables causal discovery [18.46845413928147]
Causal discovery from observational data holds great promise.
Existing methods rely on strong assumptions about the underlying causal structure.
We propose a flexible algorithm for causal discovery across linear, nonlinear, and latent variable models.
arXiv Detail & Related papers (2024-07-26T14:09:06Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Identification of Causal Structure with Latent Variables Based on Higher Order Cumulants [31.85295338809117]
We propose a novel approach to identify the existence of a causal edge between two observed variables subject to latent variable influence.
When such a causal edge exists, we introduce an asymmetry criterion to determine the causal direction (a generic cumulant-asymmetry sketch appears after this list).
arXiv Detail & Related papers (2023-12-19T08:20:19Z)
- A Versatile Causal Discovery Framework to Allow Causally-Related Hidden Variables [28.51579090194802]
We introduce a novel framework for causal discovery that accommodates the presence of causally-related hidden variables almost everywhere in the causal network.
We develop a Rank-based Latent Causal Discovery algorithm, RLCD, that can efficiently locate hidden variables, determine their cardinalities, and discover the entire causal structure over both measured and hidden variables.
Experimental results on both synthetic and real-world personality data sets demonstrate the efficacy of the proposed approach in finite-sample cases.
arXiv Detail & Related papers (2023-12-18T07:57:39Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Large deviations rates for stochastic gradient descent with strongly convex functions [11.247580943940916]
We provide a formal framework for the study of general high-probability bounds for stochastic gradient descent (SGD).
We find an upper large deviations bound for SGD with strongly convex functions.
arXiv Detail & Related papers (2022-11-02T09:15:26Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Exploiting Independent Instruments: Identification and Distribution Generalization [3.701112941066256]
We exploit the independence of the instruments for distribution generalization by taking higher moments into account.
We prove that the proposed estimator is invariant to distributional shifts on the instruments.
These results hold even in the under-identified case where the instruments are not sufficiently rich to identify the causal function.
arXiv Detail & Related papers (2022-02-03T21:49:04Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs (a toy autoregressive sampler over DAGs is sketched after this list).
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
- Deconfounding Scores: Feature Representations for Causal Effect Estimation with Weak Overlap [140.98628848491146]
We introduce deconfounding scores, which induce better overlap without biasing the target of estimation.
We show that deconfounding scores satisfy a zero-covariance condition that is identifiable in observed data.
In particular, we show that this technique could be an attractive alternative to standard regularizations.
arXiv Detail & Related papers (2021-04-12T18:50:11Z)
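For the higher-order cumulants entry above, the flavour of a cumulant-based direction criterion can be illustrated with third-order cross-moments. This is a minimal sketch under assumptions not drawn from that paper: a linear model with a skewed cause, standardized variables, and no latent confounding (the paper's criterion is designed precisely to handle latent variables, which this toy rule does not).

```python
import numpy as np


def direction_by_cumulant_asymmetry(x, y):
    """Return 1 if the third-order asymmetry favours x -> y, -1 for y -> x."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    c_xxy = np.mean(x * x * y)      # third-order cross-moment E[x^2 y]
    c_xyy = np.mean(x * y * y)      # third-order cross-moment E[x y^2]
    return 1 if abs(c_xxy) > abs(c_xyy) else -1


# Toy check: skewed cause, linear effect with independent noise.
rng = np.random.default_rng(1)
x = rng.exponential(size=20000) - 1.0            # skewed, zero-mean cause
y = 0.6 * x + rng.standard_normal(20000)
print(direction_by_cumulant_asymmetry(x, y))     # expected: 1 (x -> y)
```

The rule relies on the fact that, for standardized variables under x -> y with |corr(x, y)| < 1 and a skewed cause, |E[x^2 y]| exceeds |E[x y^2]|, and the inequality flips in the reverse direction.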
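For the Variational Causal Networks entry above, the notion of an autoregressive distribution over discrete DAGs can be sketched with a toy sampler: edges are drawn one at a time consistently with a node ordering (which guarantees acyclicity), and each edge probability is conditioned on the edges drawn so far. The logistic parameterisation, the `sample_dag` helper, and the fixed ordering are illustrative assumptions, not the paper's variational family.

```python
import numpy as np


def sample_dag(order, weights, bias, rng):
    """Sample a DAG consistent with `order`; each edge probability is conditioned
    on the edges already sampled (a toy autoregressive parameterisation)."""
    d = len(order)
    A = np.zeros((d, d), dtype=int)
    history = []                                    # edges sampled so far, in order
    for pos_j in range(1, d):                       # only earlier nodes may be parents
        for pos_i in range(pos_j):
            i, j = order[pos_i], order[pos_j]
            context = float(np.dot(weights[:len(history)], history)) if history else 0.0
            p = 1.0 / (1.0 + np.exp(-(bias + context)))   # logistic link (assumption)
            A[i, j] = rng.binomial(1, p)
            history.append(A[i, j])
    return A


rng = np.random.default_rng(0)
d = 4
order = rng.permutation(d)                          # an ordering guarantees acyclicity
n_edges = d * (d - 1) // 2
A = sample_dag(order, rng.normal(scale=0.5, size=n_edges), bias=-1.0, rng=rng)
print(A)
```

A learned version would parameterise the ordering and the conditional edge probabilities with neural networks and fit them variationally; this sketch only shows the autoregressive sampling structure.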
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.