Learning Generalized Gumbel-max Causal Mechanisms
- URL: http://arxiv.org/abs/2111.06888v1
- Date: Thu, 11 Nov 2021 22:02:20 GMT
- Title: Learning Generalized Gumbel-max Causal Mechanisms
- Authors: Guy Lorberbom, Daniel D. Johnson, Chris J. Maddison, Daniel Tarlow,
Tamir Hazan
- Abstract summary: We argue for choosing a causal mechanism that is best under a quantitative criterion, such as minimizing variance when estimating counterfactual treatment effects.
We show that they can be trained to minimize counterfactual effect variance and other losses on a distribution of queries of interest.
- Score: 31.64007831043909
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: To perform counterfactual reasoning in Structural Causal Models (SCMs), one
needs to know the causal mechanisms, which provide factorizations of
conditional distributions into noise sources and deterministic functions
mapping realizations of noise to samples. Unfortunately, the causal mechanism
is not uniquely identified by data that can be gathered by observing and
interacting with the world, so there remains the question of how to choose
causal mechanisms. In recent work, Oberst & Sontag (2019) propose Gumbel-max
SCMs, which use Gumbel-max reparameterizations as the causal mechanism due to
an intuitively appealing counterfactual stability property. In this work, we
instead argue for choosing a causal mechanism that is best under a quantitative
criterion, such as minimizing variance when estimating counterfactual treatment
effects. We propose a parameterized family of causal mechanisms that generalize
Gumbel-max. We show that they can be trained to minimize counterfactual effect
variance and other losses on a distribution of queries of interest, yielding
lower variance estimates of counterfactual treatment effect than fixed
alternatives, also generalizing to queries not seen at training time.
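The Gumbel-max mechanism that this work generalizes can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' code: the exogenous noise is a vector of i.i.d. Gumbel(0, 1) variables, the deterministic function is an argmax over logits plus noise, and reusing the same noise realization under intervened logits yields a counterfactual sample.

```python
# Illustrative sketch of a Gumbel-max causal mechanism (not the paper's
# implementation). Noise source: i.i.d. Gumbel(0, 1) variables.
# Deterministic function: (logits, noise) -> argmax(logits + noise).
import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_sample(logits, gumbels):
    """Deterministic mechanism: outcome = argmax(logits + Gumbel noise)."""
    return int(np.argmax(logits + gumbels))

# Factual distribution p and an intervened distribution q over 3 outcomes.
p_logits = np.log(np.array([0.7, 0.2, 0.1]))
q_logits = np.log(np.array([0.1, 0.2, 0.7]))

g = rng.gumbel(size=3)                           # shared noise realization
factual = gumbel_max_sample(p_logits, g)         # sample under p
counterfactual = gumbel_max_sample(q_logits, g)  # same noise, new logits
```

Marginally, `gumbel_max_sample(p_logits, g)` draws from p; many other noise couplings share this marginal, which is exactly the non-identifiability the abstract describes. The paper's proposal is to learn, within a parameterized family generalizing this argmax coupling, the mechanism that minimizes counterfactual effect variance.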
Related papers
- Learning Causally Disentangled Representations via the Principle of Independent Causal Mechanisms [17.074858228123706]
We propose a framework for learning causally disentangled representations supervised by causally related observed labels.
We show that our framework induces highly disentangled causal factors, improves interventional robustness, and is compatible with counterfactual generation.
arXiv Detail & Related papers (2023-06-02T00:28:48Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Distinguishing Cause from Effect on Categorical Data: The Uniform Channel Model [0.0]
Distinguishing cause from effect using observations of a pair of random variables is a core problem in causal discovery.
We propose a criterion to address the cause-effect problem with categorical variables.
We select as the most likely causal direction the one in which the conditional probability mass function is closer to a uniform channel (UC).
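The selection rule in this summary can be sketched as follows. This is a hedged illustration, not the paper's method: here a "uniform channel" is read, for demonstration only, as a conditional pmf whose off-mode mass is spread uniformly, and the direction whose estimated conditional pmf deviates least from that form is chosen.

```python
# Hedged sketch of a uniform-channel (UC) style cause-effect criterion for
# categorical data. The distance used here (deviation of each row's
# off-mode probabilities from uniformity) is an illustrative assumption;
# the paper's exact criterion may differ.
import numpy as np

def uc_distance(cond):
    """Total deviation of each row's off-mode probabilities from uniformity."""
    d = 0.0
    for row in cond:
        off = np.delete(row, np.argmax(row))
        d += np.abs(off - off.mean()).sum()
    return d

def conditional_pmf(a, b, k):
    """Estimate P(b | a) as a k x k row-stochastic matrix from samples."""
    joint = np.zeros((k, k))
    for ai, bi in zip(a, b):
        joint[ai, bi] += 1
    return joint / joint.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
k, n = 4, 5000
x = rng.choice(k, size=n, p=[0.5, 0.25, 0.15, 0.1])  # non-uniform cause
flip = rng.random(size=n) < 0.3
y = np.where(flip, rng.integers(0, k, size=n), x)    # uniform-channel effect

p_y_given_x = conditional_pmf(x, y, k)
p_x_given_y = conditional_pmf(y, x, k)
direction = "X->Y" if uc_distance(p_y_given_x) < uc_distance(p_x_given_y) else "Y->X"
```

In this synthetic example, P(Y | X) is a uniform channel by construction while the reversed conditional P(X | Y) inherits the non-uniform cause marginal, so the criterion favors the true direction.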
arXiv Detail & Related papers (2023-03-14T13:54:11Z)
- Cause-Effect Inference in Location-Scale Noise Models: Maximum Likelihood vs. Independence Testing [19.23479356810746]
A fundamental problem of causal discovery is cause-effect inference, learning the correct causal direction between two random variables.
Recently introduced heteroscedastic location-scale noise functional models (LSNMs) combine expressive power with identifiability guarantees.
We show that LSNM model selection based on maximizing likelihood achieves state-of-the-art accuracy, when the noise distributions are correctly specified.
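The likelihood-based model selection described in this summary can be sketched as follows, under the LSNM form Y = f(X) + g(X) * N. This is a rough illustration of the idea, not the paper's implementation: polynomial fits stand in for the location and scale functions, and the direction with the higher Gaussian log-likelihood is preferred.

```python
# Hedged sketch of likelihood-based cause-effect inference for a
# location-scale noise model Y = f(X) + g(X) * N. Polynomial fits and the
# log-variance regression are illustrative simplifications (the latter is
# biased but keeps the scale estimate positive); a real implementation
# would use proper heteroscedastic regression.
import numpy as np

def lsnm_loglik(cause, effect, deg=3):
    """Gaussian log-likelihood of effect | cause under fitted f and g."""
    f = np.polyfit(cause, effect, deg)
    resid = effect - np.polyval(f, cause)
    # Fit log squared residuals so the estimated variance stays positive.
    log_var = np.polyval(np.polyfit(cause, np.log(resid**2 + 1e-8), deg), cause)
    var = np.exp(log_var)
    return -0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=2000)
y = np.tanh(2 * x) + (0.1 + 0.1 * x**2) * rng.normal(size=2000)  # true: X -> Y

direction = "X->Y" if lsnm_loglik(x, y) > lsnm_loglik(y, x) else "Y->X"
```

As the summary notes, this style of selection relies on the noise model being well specified; with a misspecified noise distribution, the likelihood comparison can favor the wrong direction.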
arXiv Detail & Related papers (2023-01-26T20:48:32Z)
- Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph.
arXiv Detail & Related papers (2022-06-04T22:38:57Z)
- Causal Discovery in Heterogeneous Environments Under the Sparse Mechanism Shift Hypothesis [7.895866278697778]
Machine learning approaches commonly rely on the assumption of independent and identically distributed (i.i.d.) data.
In reality, this assumption is almost always violated due to distribution shifts between environments.
We propose the Mechanism Shift Score (MSS), a score-based approach amenable to various empirical estimators.
arXiv Detail & Related papers (2022-06-04T15:39:30Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
- Efficient Causal Inference from Combined Observational and Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.