Reparametrization Invariance in non-parametric Causal Discovery
- URL: http://arxiv.org/abs/2008.05552v1
- Date: Wed, 12 Aug 2020 20:00:47 GMT
- Title: Reparametrization Invariance in non-parametric Causal Discovery
- Authors: Martin Jørgensen and Søren Hauberg
- Abstract summary: Causal discovery estimates the underlying physical process that generates the observed data.
This study investigates one such invariant: the causal relationship between X and Y is invariant to the marginal distributions of X and Y.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal discovery estimates the underlying physical process that generates the
observed data: does X cause Y or does Y cause X? Current methodologies use
structural conditions to turn the causal query into a statistical query, when
only observational data is available. But what if these statistical queries are
sensitive to causal invariants? This study investigates one such invariant: the
causal relationship between X and Y is invariant to the marginal distributions
of X and Y. We propose an algorithm that uses a non-parametric estimator that
is robust to changes in the marginal distributions. This way we may marginalize
the marginals, and inspect what relationship is intrinsically there. The
resulting causal estimator is competitive with current methodologies and
places high emphasis on the uncertainty in the causal query; an aspect just as
important as the query itself.
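The invariant the abstract describes can be seen with rank statistics: a strictly monotone reparametrization of either marginal leaves the ranks, and hence any rank-based dependence measure, unchanged. The sketch below is not the paper's estimator; it is a minimal illustration of the invariant using Spearman correlation, with an arbitrary monotone relation and transformations chosen for the example.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x**3 + 0.1 * rng.normal(size=500)  # some dependence between X and Y

# Strictly monotone reparametrizations of the marginals.
x2 = np.exp(x)   # changes the marginal distribution of X
y2 = y**3        # changes the marginal distribution of Y

rho, _ = spearmanr(x, y)
rho2, _ = spearmanr(x2, y2)

# Ranks are preserved by monotone maps, so the rank-based measure is identical,
# while a moment-based measure such as Pearson correlation generally shifts.
print(abs(rho - rho2))                          # 0.0
print(pearsonr(x, y)[0], pearsonr(x2, y2)[0])   # typically differ
```

In this sense a marginal-robust estimator "marginalizes the marginals": only the dependence structure linking X and Y survives the transformation.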
Related papers
- Reinterpreting causal discovery as the task of predicting unobserved
joint statistics [15.088547731564782]
We argue that causal discovery can help infer properties of the unobserved joint distributions.
We define a learning scenario where the input is a subset of variables and the label is some statistical property of that subset.
arXiv Detail & Related papers (2023-05-11T15:30:54Z) - Causal Discovery via Conditional Independence Testing with Proxy Variables [35.3493980628004]
The presence of unobserved variables, such as the latent confounder, can introduce bias in conditional independence testing.
We propose a novel hypothesis-testing procedure that can effectively examine the existence of the causal relationship over continuous variables.
arXiv Detail & Related papers (2023-05-09T09:08:39Z) - Equivariance Discovery by Learned Parameter-Sharing [153.41877129746223]
We study how to discover interpretable equivariances from data.
Specifically, we formulate this discovery process as an optimization problem over a model's parameter-sharing schemes.
Also, we theoretically analyze the method for Gaussian data and provide a bound on the mean squared gap between the studied discovery scheme and the oracle scheme.
arXiv Detail & Related papers (2022-04-07T17:59:19Z) - Correcting Confounding via Random Selection of Background Variables [15.206717158865022]
We propose a novel criterion for identifying causal relationships based on the stability of the regression coefficients of X on Y.
We prove, subject to a symmetry assumption for the background influence, that V converges to zero if and only if X contains no causal drivers.
arXiv Detail & Related papers (2022-02-04T14:27:10Z) - Decoding Causality by Fictitious VAR Modeling [0.0]
We first set up an equilibrium for the cause-effect relations using a fictitious vector autoregressive model.
In the equilibrium, long-run relations are identified from noise, and spurious ones are negligibly close to zero.
We also apply the approach to estimating the causal factors' contribution to climate change.
arXiv Detail & Related papers (2021-11-14T22:43:02Z) - Variance Minimization in the Wasserstein Space for Invariant Causal Prediction [72.13445677280792]
In this work, we show that the approach taken in ICP may be reformulated as a series of nonparametric tests that scales linearly in the number of predictors.
Each of these tests relies on the minimization of a novel loss function that is derived from tools in optimal transport theory.
We prove under mild assumptions that our method is able to recover the set of identifiable direct causes, and we demonstrate in our experiments that it is competitive with other benchmark causal discovery algorithms.
arXiv Detail & Related papers (2021-10-13T22:30:47Z) - Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z) - Counterfactual Invariance to Spurious Correlations: Why and How to Pass Stress Tests [87.60900567941428]
A 'spurious correlation' is the dependence of a model on some aspect of the input data that an analyst thinks shouldn't matter.
In machine learning, these have a know-it-when-you-see-it character.
We study stress testing using the tools of causal inference.
arXiv Detail & Related papers (2021-05-31T14:39:38Z) - Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z) - Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can learn spurious correlation during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z) - Information-Theoretic Approximation to Causal Models [0.0]
We show that it is possible to solve the problem of inferring the causal direction and causal effect between two random variables from a finite sample.
We embed distributions that originate from samples of X and Y into a higher dimensional probability space.
We show that this information-theoretic approximation to causal models (IACM) can be done by solving a linear optimization problem.
arXiv Detail & Related papers (2020-07-29T18:34:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided (including all content) and is not responsible for any consequences.