Do-calculus enables causal reasoning with latent variable models
- URL: http://arxiv.org/abs/2102.06626v1
- Date: Fri, 12 Feb 2021 17:12:53 GMT
- Title: Do-calculus enables causal reasoning with latent variable models
- Authors: Sara Mohammad-Taheri and Robert Ness and Jeremy Zucker and Olga Vitek
- Abstract summary: Latent variable models (LVMs) are probabilistic models where some of the variables are hidden during training.
We show that causal reasoning can enhance a broad class of LVMs long established in the probabilistic modeling community.
- Score: 2.294014185517203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Latent variable models (LVMs) are probabilistic models where some of the
variables are hidden during training. A broad class of LVMs have a directed
acyclic graphical structure. The directed structure suggests an intuitive
causal explanation of the data generating process. For example, a latent topic
model suggests that topics cause the occurrence of a token. Despite this
intuitive causal interpretation, a directed acyclic latent variable model
trained on data is generally insufficient for causal reasoning, as the required
model parameters may not be uniquely identified. In this manuscript we
demonstrate that an LVM can answer any causal query posed post-training,
provided that the query can be identified from the observed variables according
to the do-calculus rules. We show that causal reasoning can enhance a broad
class of LVMs long established in the probabilistic modeling community, and
demonstrate its effectiveness on several case studies. These include a machine
learning model with multiple causes where there exists a set of latent
confounders and a mediator between the causes and the outcome variable, a study
where the identifiable causal query cannot be estimated using the front-door or
back-door criterion, a case study that captures unobserved crosstalk between
two biological signaling pathways, and a COVID-19 expert system that identifies
multiple causal queries.
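The first case study in the abstract involves latent confounders and a mediator between the causes and the outcome. For a single binary cause, that structure is the classic front-door graph, where the interventional query is identified by do-calculus even though no back-door adjustment set is observed. A minimal sketch (not the paper's code; all variable names and probability values below are invented for illustration):

```python
from itertools import product

# Front-door graph: latent U confounds treatment X and outcome Y,
# and M mediates the effect of X on Y (U -> X, U -> Y, X -> M, M -> Y).
pU1 = 0.5
pX1_U = {0: 0.2, 1: 0.8}                 # P(X=1 | U=u)
pM1_X = {0: 0.3, 1: 0.7}                 # P(M=1 | X=x)
pY1_MU = {(0, 0): 0.1, (0, 1): 0.5,      # P(Y=1 | M=m, U=u)
          (1, 0): 0.4, (1, 1): 0.9}

def bern(p, v):
    """P(V=v) for a Bernoulli(p) variable with v in {0, 1}."""
    return p if v == 1 else 1 - p

# Full joint P(u, x, m, y); only (x, m, y) would be observed in data.
joint = {(u, x, m, y):
         bern(pU1, u) * bern(pX1_U[u], x)
         * bern(pM1_X[x], m) * bern(pY1_MU[(m, u)], y)
         for u, x, m, y in product((0, 1), repeat=4)}

def p_obs(x=None, m=None, y=None):
    """Observational marginal over (X, M, Y); the latent U is summed out."""
    return sum(pr for (u, xv, mv, yv), pr in joint.items()
               if (x is None or xv == x)
               and (m is None or mv == m)
               and (y is None or yv == y))

def front_door(x):
    """P(Y=1 | do(X=x)) = sum_m P(m|x) sum_x' P(Y=1|m,x') P(x')."""
    total = 0.0
    for m in (0, 1):
        pm_x = p_obs(x=x, m=m) / p_obs(x=x)
        for xp in (0, 1):
            total += (pm_x * p_obs(x=xp)
                      * p_obs(x=xp, m=m, y=1) / p_obs(x=xp, m=m))
    return total

def truth(x):
    """Ground truth from the SCM: intervene by cutting the U -> X edge."""
    return sum(bern(pU1, u) * bern(pM1_X[x], m) * pY1_MU[(m, u)]
               for u in (0, 1) for m in (0, 1))

print(front_door(1), truth(1))             # both ~0.545: query identified
print(p_obs(x=1, y=1) / p_obs(x=1))        # naive conditional ~0.686, confounded
```

The front-door estimate uses only observational quantities yet matches the interventional ground truth, while the naive conditional is biased by the latent confounder; this is exactly the kind of query that a trained LVM can answer when do-calculus certifies it as identifiable.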
Related papers
- Linear Causal Disentanglement via Interventions [8.444187296409051]
Causal disentanglement seeks a representation of data involving latent variables that relate to one another via a causal model.
We study observed variables that are a linear transformation of a linear latent causal model.
arXiv Detail & Related papers (2022-11-29T18:43:42Z)
- Causal Discovery in Linear Latent Variable Models Subject to Measurement Error [29.78435955758185]
We focus on causal discovery in the presence of measurement error in linear systems.
We demonstrate a surprising connection between this problem and causal discovery in the presence of unobserved parentless causes.
arXiv Detail & Related papers (2022-11-08T03:43:14Z)
- Causal Discovery in Linear Structural Causal Models with Deterministic Relations [27.06618125828978]
We focus on the task of causal discovery from observational data.
We derive a set of necessary and sufficient conditions for unique identifiability of the causal structure.
arXiv Detail & Related papers (2021-10-30T21:32:42Z)
- Typing assumptions improve identification in causal discovery [123.06886784834471]
Causal discovery from observational data is a challenging task for which an exact solution cannot always be identified.
We propose a new set of assumptions that constrain possible causal relationships based on the nature of the variables.
arXiv Detail & Related papers (2021-07-22T14:23:08Z)
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
- A Critical View of the Structural Causal Model [89.43277111586258]
We show that one can identify the cause and the effect without considering their interaction at all.
We propose a new adversarial training method that mimics the disentangled structure of the causal model.
Our multidimensional method outperforms the literature methods on both synthetic and real world datasets.
arXiv Detail & Related papers (2020-02-23T22:52:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.