Typing assumptions improve identification in causal discovery
- URL: http://arxiv.org/abs/2107.10703v1
- Date: Thu, 22 Jul 2021 14:23:08 GMT
- Title: Typing assumptions improve identification in causal discovery
- Authors: Philippe Brouillard, Perouz Taslakian, Alexandre Lacoste, Sebastien
Lachapelle, Alexandre Drouin
- Abstract summary: Causal discovery from observational data is a challenging task to which an exact solution cannot always be identified.
We propose a new set of assumptions that constrain possible causal relationships based on the nature of the variables.
- Score: 123.06886784834471
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal discovery from observational data is a challenging task to which an
exact solution cannot always be identified. Under assumptions about the
data-generative process, the causal graph can often be identified up to an
equivalence class. Proposing new realistic assumptions to circumscribe such
equivalence classes is an active field of research. In this work, we propose a
new set of assumptions that constrain possible causal relationships based on
the nature of the variables. We thus introduce typed directed acyclic graphs,
in which variable types are used to determine the validity of causal
relationships. We demonstrate, both theoretically and empirically, that the
proposed assumptions can result in significant gains in the identification of
the causal graph.
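The abstract's core idea is that knowing each variable's type constrains which causal edges are plausible. A minimal hypothetical sketch (not the authors' code) of one such constraint: assuming that all edges between variables of two given types must share the same orientation, a candidate graph can be checked and rejected when two cross-type edges disagree.

```python
# Hypothetical sketch, not the paper's implementation: checking a candidate
# edge set against a type-consistency constraint, under the assumption that
# every edge between a given pair of types must point the same way.

def consistent_orientations(edges, var_type):
    """Return True iff all edges between each pair of types share a direction.

    edges: iterable of (cause, effect) variable pairs
    var_type: dict mapping each variable name to its type
    """
    orientation = {}  # frozenset({type_a, type_b}) -> observed direction
    for cause, effect in edges:
        t_cause, t_effect = var_type[cause], var_type[effect]
        if t_cause == t_effect:
            continue  # same-type edges are left unconstrained in this sketch
        key = frozenset((t_cause, t_effect))
        if key in orientation and orientation[key] != (t_cause, t_effect):
            return False  # two edges between the same types disagree
        orientation[key] = (t_cause, t_effect)
    return True

# Example with illustrative, made-up variables: both edges point from a
# "weather" variable to an "activity" variable, so the constraint holds.
edges = [("rain", "umbrella"), ("sun", "picnic")]
types = {"rain": "weather", "sun": "weather",
         "umbrella": "activity", "picnic": "activity"}
print(consistent_orientations(edges, types))  # True

# Reversing one cross-type edge breaks the shared orientation.
print(consistent_orientations(edges + [("picnic", "sun")], types))  # False
```

Pruning candidates this way is how such assumptions can shrink an equivalence class: graphs that fit the data but violate the type constraint are discarded.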
Related papers
- Demystifying amortized causal discovery with transformers [21.058343547918053]
Supervised learning approaches for causal discovery from observational data often achieve competitive performance.
In this work, we investigate CSIvA, a transformer-based model that promises to train on synthetic data and transfer to real data.
We bridge the gap with existing identifiability theory and show that constraints on the training data distribution implicitly define a prior on the test observations.
arXiv Detail & Related papers (2024-05-27T08:17:49Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Towards Bounding Causal Effects under Markov Equivalence [13.050023008348388]
We consider the derivation of bounds on causal effects given only observational data.
We provide a systematic algorithm to derive bounds on causal effects that exploit the invariant properties of the equivalence class.
arXiv Detail & Related papers (2023-11-13T11:49:55Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [85.67870425656368]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Nonlinearity, Feedback and Uniform Consistency in Causal Structural Learning [0.8158530638728501]
Causal Discovery aims to find automated search methods for learning causal structures from observational data.
This thesis focuses on two questions in causal discovery: (i) providing an alternative definition of k-Triangle Faithfulness that is weaker than strong faithfulness when applied to the Gaussian family of distributions, and (ii) establishing uniform consistency under the assumption that this modified version of Strong Faithfulness holds.
arXiv Detail & Related papers (2023-08-15T01:23:42Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Weight-variant Latent Causal Models [79.79711624326299]
Causal representation learning exposes latent high-level causal variables behind low-level observations.
In this work we focus on identifying latent causal variables.
We show that transitivity severely hinders the identifiability of latent causal variables.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal variables.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Effect Identification in Cluster Causal Diagrams [51.42809552422494]
We introduce a new type of graphical model called cluster causal diagrams (C-DAGs for short).
C-DAGs allow for the partial specification of relationships among variables based on limited prior knowledge.
We develop the foundations and machinery for valid causal inferences over C-DAGs.
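To make the cluster idea concrete, here is a hypothetical sketch (not from the paper): given variable-level edges and a partition of the variables into clusters, a cluster-level diagram can be formed by drawing an edge between two clusters whenever any variable in the first points to any variable in the second; the function name and example variables are illustrative assumptions.

```python
# Hypothetical sketch, not the paper's construction: coarsening
# variable-level edges into a cluster-level diagram. Relationships inside a
# cluster are deliberately left unspecified, mirroring partial knowledge.

def cluster_diagram(edges, cluster_of):
    """Return the set of directed edges between distinct clusters.

    edges: iterable of (cause, effect) variable pairs
    cluster_of: dict mapping each variable name to its cluster name
    """
    c_edges = set()
    for cause, effect in edges:
        ca, cb = cluster_of[cause], cluster_of[effect]
        if ca != cb:  # within-cluster structure stays unspecified
            c_edges.add((ca, cb))
    return c_edges

# Example with made-up names: three gene-level edges collapse into a single
# Pathway1 -> Pathway2 edge at the cluster level.
edges = [("g1", "g3"), ("g2", "g3"), ("g1", "g2")]
clusters = {"g1": "Pathway1", "g2": "Pathway1", "g3": "Pathway2"}
print(cluster_diagram(edges, clusters))  # {('Pathway1', 'Pathway2')}
```

Working at the cluster level is what lets limited prior knowledge (which groups influence which) support valid causal inference without committing to a fully specified variable-level graph.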
arXiv Detail & Related papers (2022-02-22T21:27:31Z)
- Causal Discovery in Linear Structural Causal Models with Deterministic Relations [27.06618125828978]
We focus on the task of causal discovery from observational data.
We derive a set of necessary and sufficient conditions for unique identifiability of the causal structure.
arXiv Detail & Related papers (2021-10-30T21:32:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.