Causal Autoregressive Flows
- URL: http://arxiv.org/abs/2011.02268v2
- Date: Wed, 24 Feb 2021 16:35:26 GMT
- Title: Causal Autoregressive Flows
- Authors: Ilyes Khemakhem, Ricardo Pio Monti, Robert Leech, Aapo Hyvärinen
- Abstract summary: We highlight an intrinsic correspondence between a simple family of autoregressive normalizing flows and identifiable causal models.
We exploit the fact that autoregressive flow architectures define an ordering over variables, analogous to a causal ordering, to show that they are well-suited to performing a range of causal inference tasks.
- Score: 4.731404257629232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Two apparently unrelated fields -- normalizing flows and causality -- have
recently received considerable attention in the machine learning community. In
this work, we highlight an intrinsic correspondence between a simple family of
autoregressive normalizing flows and identifiable causal models. We exploit the
fact that autoregressive flow architectures define an ordering over variables,
analogous to a causal ordering, to show that they are well-suited to performing
a range of causal inference tasks, ranging from causal discovery to making
interventional and counterfactual predictions. First, we show that causal
models derived from both affine and additive autoregressive flows with fixed
orderings over variables are identifiable, i.e. the true direction of causal
influence can be recovered. This provides a generalization of the additive
noise model well-known in causal discovery. Second, we derive a bivariate
measure of causal direction based on likelihood ratios, leveraging the fact
that flow models can estimate normalized log-densities of data. Third, we
demonstrate that flows naturally allow for direct evaluation of both
interventional and counterfactual queries, the latter case being possible due
to the invertible nature of flows. Finally, in a series of experiments on
synthetic and real data, the proposed method is shown to outperform current
approaches to causal discovery and to make accurate interventional and
counterfactual predictions.
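The likelihood-ratio measure of causal direction mentioned in the abstract lends itself to a compact illustration. The sketch below is a deliberately simplified stand-in, not the authors' implementation: it replaces the paper's neural affine flows with an additive model fitted by polynomial least squares and Gaussian base densities, and all names (`flow_loglik`, `causal_direction_score`) are illustrative assumptions. It shows only the core idea: fit a flow under each candidate ordering and compare the resulting log-likelihoods.

```python
# Minimal sketch of the likelihood-ratio test for causal direction described in
# the abstract, using a deliberately simplified additive autoregressive flow:
# effect = f(cause) + z, with f a cubic polynomial fitted by least squares and
# Gaussian base densities for both the cause and the residual noise.
# The paper's method uses neural affine flows instead; these function and
# variable names are illustrative assumptions, not the authors' API.
import numpy as np

def gaussian_logpdf(x, mu, sigma):
    """Elementwise log-density of N(mu, sigma^2)."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((x - mu) / sigma) ** 2

def flow_loglik(cause, effect, degree=3):
    """Total log-likelihood of (cause, effect) under the ordering cause -> effect.

    The simplified 'flow' is additive: effect = poly(cause) + z, so the Jacobian
    term vanishes and the log-density factorises as log p(cause) + log p(z).
    """
    coeffs = np.polyfit(cause, effect, degree)        # fit f by least squares
    residual = effect - np.polyval(coeffs, cause)      # z = effect - f(cause)
    ll_cause = gaussian_logpdf(cause, cause.mean(), cause.std()).sum()
    ll_noise = gaussian_logpdf(residual, 0.0, residual.std()).sum()
    return ll_cause + ll_noise

def causal_direction_score(x, y):
    """Positive score favours x -> y, negative favours y -> x."""
    return flow_loglik(x, y) - flow_loglik(y, x)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=2000)
    y = np.tanh(x) + 0.3 * rng.normal(size=2000)       # ground truth: x -> y
    print("direction score (should be > 0):", causal_direction_score(x, y))
```

Under the true ordering the residual noise is independent of the cause, so that direction typically attains the higher likelihood; this is the intuition behind the identifiability claim in the abstract, although the paper establishes it for affine and additive flows rather than this toy polynomial model.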
Related papers
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Identifiability Guarantees for Causal Disentanglement from Soft Interventions [26.435199501882806]
Causal disentanglement aims to uncover a representation of data using latent variables that are interrelated through a causal model.
In this paper, we focus on the scenario where unpaired observational and interventional data are available, with each intervention changing the mechanism of a latent variable.
When the causal variables are fully observed, statistically consistent algorithms have been developed to identify the causal model under faithfulness assumptions.
arXiv Detail & Related papers (2023-07-12T15:39:39Z)
- Causal normalizing flows: from theory to practice [10.733905678329675]
We use recent results on non-linear ICA to show that causal models are identifiable from observational data given a causal ordering.
Second, we analyze different design and learning choices for causal normalizing flows to capture the underlying causal data-generating process.
Third, we describe how to implement the do-operator in causal NFs, and thus, how to answer interventional and counterfactual questions (a hedged counterfactual sketch along these lines appears after this list).
arXiv Detail & Related papers (2023-06-08T17:58:05Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can learn spurious correlation during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z)
- Autoregressive flow-based causal discovery and inference [4.83420384410068]
Autoregressive flow models are well-suited to performing a range of causal inference tasks.
We exploit the fact that autoregressive architectures define an ordering over variables, analogous to a causal ordering.
We present examples over synthetic data where autoregressive flows, when trained under the correct causal ordering, are able to make accurate interventional and counterfactual predictions.
arXiv Detail & Related papers (2020-07-18T10:02:59Z)
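As a companion to the do-operator discussion in the "Causal normalizing flows" entry above, and to the abstract's point that counterfactuals are possible because flows are invertible, here is a minimal hedged sketch of the standard abduction-action-prediction recipe. It reuses the simplified additive flow from the earlier sketch; `fit_additive_flow` and `counterfactual_effect` are hypothetical names, not part of the paper's code.

```python
# Hedged sketch of how flow invertibility enables counterfactual queries.
# Assumes the same simplified additive flow effect = f(cause) + z as in the
# earlier sketch; names and setup are illustrative assumptions only.
import numpy as np

def fit_additive_flow(cause, effect, degree=3):
    """Fit the simplified flow effect = poly(cause) + z; return the polynomial coefficients."""
    return np.polyfit(cause, effect, degree)

def counterfactual_effect(coeffs, cause_obs, effect_obs, cause_cf):
    """Three-step counterfactual for one observation (cause_obs, effect_obs).

    1. Abduction: invert the flow to recover the exogenous noise z.
    2. Action: replace the observed cause with the counterfactual value.
    3. Prediction: push the same noise forward through the modified model.
    """
    z = effect_obs - np.polyval(coeffs, cause_obs)     # invert: z = y - f(x)
    return np.polyval(coeffs, cause_cf) + z            # y_cf = f(x') + z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.normal(size=2000)
    y = np.tanh(x) + 0.3 * rng.normal(size=2000)        # ground truth: x -> y
    coeffs = fit_additive_flow(x, y)
    # "What would y have been for the first sample, had x been 2.0 instead?"
    print(counterfactual_effect(coeffs, x[0], y[0], 2.0))
```

The key step is the abduction line: because the flow is invertible, the exogenous noise for a particular observation can be recovered exactly and then pushed back through the modified model, which is what distinguishes a counterfactual from a plain interventional prediction.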
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.