Correcting Confounding via Random Selection of Background Variables
- URL: http://arxiv.org/abs/2202.02150v1
- Date: Fri, 4 Feb 2022 14:27:10 GMT
- Title: Correcting Confounding via Random Selection of Background Variables
- Authors: You-Lin Chen, Lenon Minorics, Dominik Janzing
- Abstract summary: We propose a novel criterion for identifying causal relationship based on the stability of regression coefficients of X on Y.
We prove, subject to a symmetry assumption for the background influence, that V converges to zero if and only if X contains no causal drivers.
- Score: 15.206717158865022
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We propose a method to distinguish causal influence from hidden confounding
in the following scenario: given a target variable Y, potential causal drivers
X, and a large number of background features, we propose a novel criterion for
identifying causal relationship based on the stability of regression
coefficients of X on Y with respect to selecting different background features.
To this end, we propose a statistic V measuring the coefficient's variability.
We prove, subject to a symmetry assumption for the background influence, that V
converges to zero if and only if X contains no causal drivers. In experiments
with simulated data, the method outperforms state-of-the-art algorithms.
Further, we report encouraging results for real-world data. Our approach aligns
with the general belief that causal insights admit better generalization of
statistical associations across environments, and justifies similar existing
heuristic approaches from the literature.
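The procedure described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the way background subsets are drawn, the subset size, the number of draws, and the use of plain OLS are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

def coefficient_variability(X, Y, background, n_draws=200, subset_size=5):
    """Sketch of the statistic V: the variance of the regression
    coefficient of X on Y across random selections of background
    features. n_draws and subset_size are illustrative parameters,
    not values taken from the paper."""
    n = len(Y)
    coefs = []
    for _ in range(n_draws):
        # Draw a random subset of background features to control for.
        idx = rng.choice(background.shape[1], size=subset_size, replace=False)
        Z = np.column_stack([X, background[:, idx], np.ones(n)])
        beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
        coefs.append(beta[0])  # coefficient of X in this regression
    return float(np.var(coefs))

# Toy data: background features B influence both X and Y, so any
# association between X and Y here is due to confounding alone.
n, d = 2000, 30
B = rng.normal(size=(n, d))
X = B @ rng.normal(size=d) / np.sqrt(d) + rng.normal(size=n)
Y = B @ rng.normal(size=d) / np.sqrt(d) + rng.normal(size=n)
V = coefficient_variability(X, Y, B)
```

The coefficient of X is recorded once per random background subset; its empirical variance plays the role of V. The paper's formal result relates the limit of V to the presence or absence of causal drivers in X under a symmetry assumption on the background influence.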
Related papers
- Causal Graph Learning via Distributional Invariance of Cause-Effect Relationship [54.575090553659074]
We develop an algorithm that efficiently uncovers causal relationships with quadratic complexity in the number of observational variables. Our experiments on a varied benchmark of large-scale datasets show superior or equivalent performance compared to existing works.
arXiv Detail & Related papers (2026-02-03T10:26:16Z) - Data Fusion for Partial Identification of Causal Effects [62.56890808004615]
We propose a novel partial identification framework that enables researchers to answer key questions: Is the causal effect positive or negative? How severe must assumption violations be to overturn this conclusion? We apply our framework to the Project STAR study, which investigates the effect of classroom size on students' third-grade standardized test performance.
arXiv Detail & Related papers (2025-05-30T07:13:01Z) - A Sparsity Principle for Partially Observable Causal Representation Learning [28.25303444099773]
Causal representation learning aims at identifying high-level causal variables from perceptual data.
We focus on learning from unpaired observations from a dataset with an instance-dependent partial observability pattern.
We propose two methods for estimating the underlying causal variables by enforcing sparsity in the inferred representation.
arXiv Detail & Related papers (2024-03-13T08:40:49Z) - Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z) - Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z) - Causal Discovery via Conditional Independence Testing with Proxy Variables [35.3493980628004]
The presence of unobserved variables, such as the latent confounder, can introduce bias in conditional independence testing.
We propose a novel hypothesis-testing procedure that can effectively examine the existence of the causal relationship over continuous variables.
arXiv Detail & Related papers (2023-05-09T09:08:39Z) - Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z) - Variance Minimization in the Wasserstein Space for Invariant Causal Prediction [72.13445677280792]
In this work, we show that the approach taken in ICP may be reformulated as a series of nonparametric tests that scales linearly in the number of predictors.
Each of these tests relies on the minimization of a novel loss function that is derived from tools in optimal transport theory.
We prove under mild assumptions that our method is able to recover the set of identifiable direct causes, and we demonstrate in our experiments that it is competitive with other benchmark causal discovery algorithms.
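For context, the basic invariant causal prediction (ICP) recipe that this paper reformulates can be sketched generically: a predictor subset is accepted if the residual distribution of the fitted regression looks the same across environments. The sketch below uses a simple Welch-style t-statistic on residual means as the invariance check; the paper itself instead minimizes a Wasserstein-based loss, so the test, the threshold, and the helper names here are illustrative assumptions rather than the paper's method.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def welch_t(a, b):
    # Welch t-statistic for the difference in residual means.
    return (a.mean() - b.mean()) / np.sqrt(
        a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

def icp_accepted_sets(X, Y, env, threshold=1.96):
    """Generic ICP sketch: accept a predictor subset S if, after
    regressing Y on X[:, S], residual means are indistinguishable
    across environments. Not the Wasserstein formulation above."""
    d = X.shape[1]
    envs = np.unique(env)
    accepted = []
    for r in range(d + 1):
        for S in itertools.combinations(range(d), r):
            if S:
                Z = np.column_stack([X[:, S], np.ones(len(Y))])
                beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
                resid = Y - Z @ beta
            else:
                resid = Y - Y.mean()
            t_stats = [abs(welch_t(resid[env == e1], resid[env == e2]))
                       for e1, e2 in itertools.combinations(envs, 2)]
            if max(t_stats) < threshold:
                accepted.append(S)
    return accepted

# Toy data: two environments shift the distribution of X, but the
# mechanism Y = 2*X + noise is invariant across environments.
n = 1500
env = np.repeat([0, 1], n)
X = np.concatenate([rng.normal(0, 1, n), rng.normal(2, 1, n)])[:, None]
Y = 2.0 * X[:, 0] + rng.normal(size=2 * n)
sets = icp_accepted_sets(X, Y, env)
```

Because the mean of Y shifts between environments while the mechanism given X does not, the empty predictor set is rejected, illustrating how invariance checks single out direct causes. The exhaustive subset loop is exponential in the number of predictors, which is exactly the cost the paper's linear-scaling reformulation avoids.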
arXiv Detail & Related papers (2021-10-13T22:30:47Z) - BayesIMP: Uncertainty Quantification for Causal Data Fusion [52.184885680729224]
We study the causal data fusion problem, where datasets pertaining to multiple causal graphs are combined to estimate the average treatment effect of a target variable.
We introduce a framework which combines ideas from probabilistic integration and kernel mean embeddings to represent interventional distributions in the reproducing kernel Hilbert space.
arXiv Detail & Related papers (2021-06-07T10:14:18Z) - Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can learn spurious correlations during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z) - Reparametrization Invariance in non-parametric Causal Discovery [0.0]
Causal discovery estimates the underlying physical process that generates the observed data.
This study investigates one such invariant: the causal relationship between X and Y is invariant to the marginal distributions of X and Y.
arXiv Detail & Related papers (2020-08-12T20:00:47Z) - Information-Theoretic Approximation to Causal Models [0.0]
We show that it is possible to solve the problem of inferring the causal direction and causal effect between two random variables from a finite sample.
We embed distributions that originate from samples of X and Y into a higher dimensional probability space.
We show that this information-theoretic approximation to causal models (IACM) can be done by solving a linear optimization problem.
arXiv Detail & Related papers (2020-07-29T18:34:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.