Invariant Causal Prediction with Locally Linear Models
- URL: http://arxiv.org/abs/2401.05218v1
- Date: Wed, 10 Jan 2024 15:34:42 GMT
- Authors: Alexander Mey, Rui Manuel Castro
- Abstract summary: We consider the task of identifying the causal parents of a target variable from observational data.
We introduce a practical method called LoLICaP, which is based on a hypothesis test for parent identification.
We show in a simplified setting that the statistical power of LoLICaP converges exponentially fast in the sample size.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the task of identifying the causal parents of a target variable
among a set of candidate variables from observational data. Our main assumption
is that the candidate variables are observed in different environments which
may, for example, correspond to different settings of a machine or different
time intervals in a dynamical process. Under certain assumptions different
environments can be regarded as interventions on the observed system. We assume
a linear relationship between target and covariates, which can be different in
each environment with the only restriction that the causal structure is
invariant across environments. This is an extension of the ICP
($\textbf{I}$nvariant $\textbf{C}$ausal $\textbf{P}$rediction) principle by
Peters et al. [2016], who assumed a fixed linear relationship across all
environments. Within our proposed setting we provide sufficient conditions for
identifiability of the causal parents and introduce a practical method called
LoLICaP ($\textbf{Lo}$cally $\textbf{L}$inear $\textbf{I}$nvariant
$\textbf{Ca}$usal $\textbf{P}$rediction), which is based on a hypothesis test
for parent identification using a ratio of minimum and maximum statistics. We
then show in a simplified setting that the statistical power of LoLICaP
converges exponentially fast in the sample size, and finally we analyze the
behavior of LoLICaP experimentally in more general settings.
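The idea behind the test can be illustrated with a small sketch: for each candidate parent set, fit a separate linear model in every environment (the "locally linear" part), compute a per-environment residual statistic, and accept the set only if the ratio of the minimum to the maximum statistic is close to one. This is an illustrative assumption, not the paper's calibrated procedure: the function name `lolicap_sketch`, the use of residual variance as the statistic, and the fixed `ratio_threshold` all stand in for the actual hypothesis test of LoLICaP.

```python
import itertools
import numpy as np

def lolicap_sketch(X, y, env, ratio_threshold=0.5):
    """Toy subset search in the spirit of LoLICaP (illustrative only).

    For each candidate parent set S, fit a separate linear model in
    every environment and record the residual variance.  If S contains
    the causal parents, the residual variances should be comparable
    across environments, so min(stats)/max(stats) stays near 1.
    The final output is the intersection of all accepted sets,
    following the ICP convention.
    """
    d = X.shape[1]
    accepted = []
    for r in range(d + 1):
        for S in itertools.combinations(range(d), r):
            stats = []
            for e in np.unique(env):
                mask = env == e
                ye = y[mask]
                if S:
                    Xe = X[mask][:, list(S)]
                    beta, *_ = np.linalg.lstsq(Xe, ye, rcond=None)
                    resid = ye - Xe @ beta
                else:
                    resid = ye - ye.mean()  # empty set: intercept only
                stats.append(resid.var())
            ratio = min(stats) / max(stats)  # in (0, 1]; near 1 = invariant
            if ratio >= ratio_threshold:
                accepted.append(set(S))
    if not accepted:
        return set()
    return set.intersection(*accepted)
```

On synthetic data where the coefficient of the true parent changes across two environments while the causal structure is invariant, only sets containing the parent pass the ratio test, and their intersection recovers it.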
Related papers
- Reinterpreting causal discovery as the task of predicting unobserved
joint statistics [15.088547731564782]
We argue that causal discovery can help infer properties of the unobserved joint distributions.
We define a learning scenario where the input is a subset of variables and the label is some statistical property of that subset.
arXiv Detail & Related papers (2023-05-11T15:30:54Z) - Out-of-Variable Generalization for Discriminative Models [13.075802230332298]
In machine learning, the ability of an agent to do well in new environments is a critical aspect of intelligence.
We investigate $\textit{out-of-variable}$ generalization, which pertains to environments with variables that were never jointly observed before.
We propose a method that exhibits non-trivial out-of-variable generalization performance when facing an overlapping, yet distinct, set of causal predictors.
arXiv Detail & Related papers (2023-04-16T21:29:54Z) - Environment Invariant Linear Least Squares [18.387614531869826]
This paper considers a multi-environment linear regression model in which data from multiple experimental settings are collected.
We construct a novel environment invariant linear least squares (EILLS) objective function, a multi-environment version of linear least-squares regression.
arXiv Detail & Related papers (2023-03-06T13:10:54Z) - Differentiable Invariant Causal Discovery [106.87950048845308]
Learning causal structure from observational data is a fundamental challenge in machine learning.
This paper proposes Differentiable Invariant Causal Discovery (DICD) to avoid learning spurious edges and wrong causal directions.
Extensive experiments on synthetic and real-world datasets verify that DICD outperforms state-of-the-art causal discovery methods up to 36% in SHD.
arXiv Detail & Related papers (2022-05-31T09:29:07Z) - Invariant Ancestry Search [6.583725235299022]
We introduce the concept of minimal invariance and propose invariant ancestry search (IAS)
In its population version, IAS outputs a set which contains only ancestors of the response and is a superset of the output of ICP.
We develop scalable algorithms and perform experiments on simulated and real data.
arXiv Detail & Related papers (2022-02-02T08:28:00Z) - Variance Minimization in the Wasserstein Space for Invariant Causal Prediction [72.13445677280792]
In this work, we show that the approach taken in ICP may be reformulated as a series of nonparametric tests that scales linearly in the number of predictors.
Each of these tests relies on the minimization of a novel loss function that is derived from tools in optimal transport theory.
We prove under mild assumptions that our method is able to recover the set of identifiable direct causes, and we demonstrate in our experiments that it is competitive with other benchmark causal discovery algorithms.
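The variance-in-Wasserstein-space ingredient from this summary can be sketched for 1-D residuals. This is a minimal sketch of the concept, not the paper's loss function: for equal-size 1-D samples, the Wasserstein-2 barycenter's quantile function is the pointwise mean of the sorted samples, and the Wasserstein variance is the mean squared W2 distance to that barycenter. A value near zero suggests the residual distributions are invariant across environments.

```python
import numpy as np

def wasserstein_variance(residual_groups):
    """Variance of equal-size 1-D empirical distributions in W2 space.

    Sorting each sample gives its empirical quantile function; the W2
    barycenter of the group is the pointwise mean of those sorted
    arrays, and the variance is the mean squared deviation from it.
    """
    sorted_groups = np.stack([np.sort(r) for r in residual_groups])
    barycenter = sorted_groups.mean(axis=0)  # quantile-wise average
    return ((sorted_groups - barycenter) ** 2).mean()
```

Identical residual distributions yield a variance of exactly zero, while a location shift between environments contributes its squared magnitude, so thresholding this quantity gives a simple invariance check.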
arXiv Detail & Related papers (2021-10-13T22:30:47Z) - Causal Order Identification to Address Confounding: Binary Variables [4.56877715768796]
This paper considers an extension of the linear non-Gaussian acyclic model (LiNGAM)
LiNGAM determines the causal order among variables from a dataset when the variables are expressed by a set of linear equations, including noise.
arXiv Detail & Related papers (2021-08-10T22:09:43Z) - A One-step Approach to Covariate Shift Adaptation [82.01909503235385]
A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution.
We propose a novel one-step approach that jointly learns the predictive model and the associated weights in one optimization.
arXiv Detail & Related papers (2020-07-08T11:35:47Z) - Stable Prediction via Leveraging Seed Variable [73.9770220107874]
Previous machine learning methods may exploit subtle spurious correlations in the training data, induced by non-causal variables, for prediction.
We propose an algorithm based on conditional independence tests to separate out causal variables, using a seed variable as prior knowledge, and adopt them for stable prediction.
Our algorithm outperforms state-of-the-art methods for stable prediction.
arXiv Detail & Related papers (2020-06-09T06:56:31Z) - An Analysis of the Adaptation Speed of Causal Models [80.77896315374747]
Recently, Bengio et al. conjectured that among all candidate models, the causal model $G$ is the fastest to adapt from one dataset to another.
We investigate the adaptation speed of cause-effect SCMs using convergence rates from optimization.
Surprisingly, we find situations where the anticausal model is advantaged, falsifying the initial hypothesis.
arXiv Detail & Related papers (2020-05-18T23:48:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.