Autoregressive flow-based causal discovery and inference
- URL: http://arxiv.org/abs/2007.09390v2
- Date: Sun, 26 Jul 2020 21:34:07 GMT
- Title: Autoregressive flow-based causal discovery and inference
- Authors: Ricardo Pio Monti, Ilyes Khemakhem, Aapo Hyvarinen
- Abstract summary: Autoregressive flow models are well-suited to performing a range of causal inference tasks.
We exploit the fact that autoregressive architectures define an ordering over variables, analogous to a causal ordering.
We present examples over synthetic data where autoregressive flows, when trained under the correct causal ordering, are able to make accurate interventional and counterfactual predictions.
- Score: 4.83420384410068
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We posit that autoregressive flow models are well-suited to performing a
range of causal inference tasks - ranging from causal discovery to making
interventional and counterfactual predictions. In particular, we exploit the
fact that autoregressive architectures define an ordering over variables,
analogous to a causal ordering, in order to propose a single flow architecture
to perform all three aforementioned tasks. We first leverage the fact that flow
models estimate normalized log-densities of data to derive a bivariate measure
of causal direction based on likelihood ratios. Whilst traditional measures of
causal direction often require restrictive assumptions on the nature of causal
relationships (e.g., linearity), the flexibility of flow models allows for
arbitrary causal dependencies. Our approach compares favourably against
alternative methods on synthetic data as well as on the Cause-Effect Pairs
benchmark dataset. Subsequently, we demonstrate that the invertible nature of
flows naturally allows for direct evaluation of both interventional and
counterfactual predictions, which require marginalization and conditioning over
latent variables respectively. We present examples over synthetic data where
autoregressive flows, when trained under the correct causal ordering, are able
to make accurate interventional and counterfactual predictions.
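The abstract's two ideas, picking a causal direction by comparing flow log-likelihoods and computing counterfactuals by inverting the flow, can be sketched with a deliberately minimal affine autoregressive "flow": a Gaussian base distribution with a cubic-polynomial shift fit by least squares. This is an illustrative toy, not the paper's actual architecture; the data-generating mechanism, feature map, and helper names below are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                          # cause
y = x + 0.5 * x**3 + 0.3 * rng.normal(size=n)   # nonlinear effect (toy SCM)

def gaussian_loglik(resid, scale):
    """Pointwise log-density of residuals under N(0, scale^2)."""
    return -0.5 * np.log(2 * np.pi * scale**2) - resid**2 / (2 * scale**2)

def fit_direction(cause, effect):
    """Average log-likelihood of p(cause) * p(effect | cause) under a toy
    affine flow: Gaussian base, cubic-polynomial shift, constant scale."""
    ll = gaussian_loglik(cause - cause.mean(), cause.std())
    feats = np.stack([np.ones_like(cause), cause, cause**2, cause**3], axis=1)
    w, *_ = np.linalg.lstsq(feats, effect, rcond=None)
    resid = effect - feats @ w
    sigma = resid.std()
    return (ll + gaussian_loglik(resid, sigma)).mean(), (w, sigma)

# Likelihood-ratio measure of causal direction: fit both orderings, compare.
ll_xy, (w, sigma) = fit_direction(x, y)   # ordering x -> y
ll_yx, _ = fit_direction(y, x)            # ordering y -> x
direction = "x -> y" if ll_xy > ll_yx else "y -> x"
print(direction, ll_xy - ll_yx)

# Counterfactual via invertibility: abduct the latent, intervene, push forward.
mean_fn = lambda c: np.polyval(w[::-1], c)   # w holds coeffs for [1, c, c^2, c^3]
x0, y0 = x[0], y[0]
z = (y0 - mean_fn(x0)) / sigma               # abduction: invert the flow
y_cf = mean_fn(x0 + 1.0) + sigma * z         # "had x been x0 + 1"
print(y_cf)
```

The abduction/action/prediction pattern in the last three lines is what the paper's invertibility argument enables; swapping the cubic least-squares fit for a learned deep conditional flow recovers the paper's actual setting without changing that pattern. Note the linear-Gaussian case is deliberately avoided here, since the direction is unidentifiable from likelihoods in that setting.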
Related papers
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z)
- Causality-oriented robustness: exploiting general additive interventions [3.871660145364189]
In this paper, we focus on causality-oriented robustness and propose Distributional Robustness via Invariant Gradients (DRIG)
In a linear setting, we prove that DRIG yields predictions that are robust among a data-dependent class of distribution shifts.
We extend our approach to the semi-supervised domain adaptation setting to further improve prediction performance.
arXiv Detail & Related papers (2023-07-18T16:22:50Z)
- Identifiability Guarantees for Causal Disentanglement from Soft Interventions [26.435199501882806]
Causal disentanglement aims to uncover a representation of data using latent variables that are interrelated through a causal model.
In this paper, we focus on the scenario where unpaired observational and interventional data are available, with each intervention changing the mechanism of a latent variable.
When the causal variables are fully observed, statistically consistent algorithms have been developed to identify the causal model under faithfulness assumptions.
arXiv Detail & Related papers (2023-07-12T15:39:39Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Efficient Causal Inference from Combined Observational and Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z)
- Causal Autoregressive Flows [4.731404257629232]
We highlight an intrinsic correspondence between a simple family of autoregressive normalizing flows and identifiable causal models.
We exploit the fact that autoregressive flow architectures define an ordering over variables, analogous to a causal ordering, to show that they are well-suited to performing a range of causal inference tasks.
arXiv Detail & Related papers (2020-11-04T13:17:35Z)
- Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can learn spurious correlation during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.