Latent Instrumental Variables as Priors in Causal Inference based on
Independence of Cause and Mechanism
- URL: http://arxiv.org/abs/2007.08812v1
- Date: Fri, 17 Jul 2020 08:18:19 GMT
- Title: Latent Instrumental Variables as Priors in Causal Inference based on
Independence of Cause and Mechanism
- Authors: Nataliya Sokolovska (SU), Pierre-Henri Wuillemin
- Abstract summary: We study the role of latent variables such as latent instrumental variables and hidden common causes in the causal graphical structures.
We derive a novel algorithm to infer causal relationships between two variables.
- Score: 2.28438857884398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal inference methods based on conditional independence construct
Markov-equivalent graphs and cannot be applied to bivariate cases. Approaches
based on the independence of cause and mechanism claim, on the contrary, that
the causal direction can be inferred from only two observed variables. In this
contribution, we attempt to reconcile these two research directions. We study
the role of latent variables, such as latent instrumental variables and hidden
common causes, in causal graphical structures. We show that methods based on
the independence of cause and mechanism indirectly contain traces of the
existence of hidden instrumental variables. We derive a novel algorithm to
infer causal relationships between two variables, and we validate the proposed
method on simulated data and on a benchmark of cause-effect pairs. Our
experiments illustrate that the proposed approach is simple and highly
competitive in empirical accuracy compared to state-of-the-art methods.
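The abstract only describes the idea verbally. As a rough, hedged illustration of what a bivariate decision rule based on "independence of cause and mechanism" can look like, the sketch below uses the classical additive-noise heuristic (regress in both directions and prefer the direction whose residuals appear independent of the putative cause). This is a stand-in baseline, not the latent-instrumental-variable prior proposed in the paper; the function names, the polynomial regression, and the histogram mutual-information score are all illustrative choices.

```python
# Minimal sketch (NOT the paper's algorithm) of a bivariate causal-direction
# test in the spirit of independence of cause and mechanism: fit a regression
# in each direction and keep the direction whose residuals look independent of
# the putative cause (the classical additive-noise heuristic).
import numpy as np


def fit_residuals(cause, effect, degree=3):
    """Regress effect on cause with a low-degree polynomial; return residuals."""
    coeffs = np.polyfit(cause, effect, deg=degree)
    return effect - np.polyval(coeffs, cause)


def dependence(a, b, bins=10):
    """Crude dependence score: mutual information of a 2D histogram discretisation."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    joint = joint / joint.sum()
    marg = joint.sum(axis=1, keepdims=True) @ joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / marg[nz])))


def infer_direction(x, y):
    """Return 'x->y' or 'y->x', preferring the direction with more independent residuals."""
    dep_xy = dependence(x, fit_residuals(x, y))  # residuals of y given x vs. the cause x
    dep_yx = dependence(y, fit_residuals(y, x))  # residuals of x given y vs. the cause y
    return "x->y" if dep_xy < dep_yx else "y->x"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=2000)
    y = np.tanh(x) + 0.2 * rng.normal(size=2000)  # ground truth: x causes y
    print(infer_direction(x, y))  # typically prints 'x->y'
```

The paper's contribution is to add latent instrumental variables and hidden common causes as priors on top of this kind of bivariate criterion; the sketch above only makes the underlying bivariate test concrete.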
Related papers
- Measuring the Reliability of Causal Probing Methods: Tradeoffs, Limitations, and the Plight of Nullifying Interventions [3.173096780177902]
Causal probing is an approach to interpreting foundation models, such as large language models.
We propose a general empirical analysis framework to evaluate the reliability of causal probing interventions.
arXiv Detail & Related papers (2024-08-28T03:45:49Z)
- Score matching through the roof: linear, nonlinear, and latent variables causal discovery [18.46845413928147]
Causal discovery from observational data holds great promise.
Existing methods rely on strong assumptions about the underlying causal structure.
We propose a flexible algorithm for causal discovery across linear, nonlinear, and latent variable models.
arXiv Detail & Related papers (2024-07-26T14:09:06Z)
- A Sparsity Principle for Partially Observable Causal Representation Learning [28.25303444099773]
Causal representation learning aims at identifying high-level causal variables from perceptual data.
We focus on learning from unpaired observations from a dataset with an instance-dependent partial observability pattern.
We propose two methods for estimating the underlying causal variables by enforcing sparsity in the inferred representation.
arXiv Detail & Related papers (2024-03-13T08:40:49Z)
- Nonlinearity, Feedback and Uniform Consistency in Causal Structural Learning [0.8158530638728501]
Causal Discovery aims to find automated search methods for learning causal structures from observational data.
This thesis focuses on two questions in causal discovery: (i) providing an alternative definition of k-Triangle Faithfulness that is weaker than strong faithfulness when applied to the Gaussian family of distributions, and (ii) establishing uniform consistency under the assumption that this modified version of Strong Faithfulness holds.
arXiv Detail & Related papers (2023-08-15T01:23:42Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
- Causal Inference in Geoscience and Remote Sensing from Observational Data [9.800027003240674]
We try to estimate the correct direction of causation using a finite set of empirical data.
We illustrate performance in a collection of 28 geoscience causal inference problems.
The criterion achieves state-of-the-art detection rates in all cases and is generally robust to noise sources and distortions.
arXiv Detail & Related papers (2020-12-07T22:56:55Z)
- Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can pick up spurious correlations during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
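The last entry above (CausalVAE) describes a Causal Layer that turns independent exogenous factors into causally related endogenous ones according to a DAG. Below is a heavily simplified, hypothetical sketch of the linear-SCM core of such a layer; the actual CausalVAE model learns the adjacency matrix, adds masking and nonlinearities, and sits inside a VAE, none of which is shown here.

```python
# Hypothetical sketch of a linear "causal layer": independent exogenous factors
# eps are mapped to causally related endogenous factors z under a DAG adjacency
# matrix A via z = A^T z + eps, i.e. z = (I - A^T)^{-1} eps. This is a
# simplification for illustration, not the CausalVAE implementation.
import numpy as np


def causal_layer(eps, A):
    """Map independent factors eps (n, d) to endogenous factors z (n, d) under DAG A (d, d)."""
    d = A.shape[0]
    # Row-vector form of z = (I - A^T)^{-1} eps: z_row = eps_row @ (I - A)^{-1}.
    return eps @ np.linalg.inv(np.eye(d) - A)


if __name__ == "__main__":
    # Tiny DAG on 3 factors: z0 -> z1 -> z2 (upper-triangular A keeps it acyclic).
    A = np.array([[0.0, 0.8, 0.0],
                  [0.0, 0.0, 0.5],
                  [0.0, 0.0, 0.0]])
    eps = np.random.default_rng(0).normal(size=(1000, 3))  # independent exogenous factors
    z = causal_layer(eps, A)
    print(np.corrcoef(z, rowvar=False).round(2))  # the z's become correlated along the DAG
```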
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.