VLUCI: Variational Learning of Unobserved Confounders for Counterfactual
Inference
- URL: http://arxiv.org/abs/2308.00904v2
- Date: Thu, 7 Sep 2023 12:01:57 GMT
- Title: VLUCI: Variational Learning of Unobserved Confounders for Counterfactual
Inference
- Authors: Yonghe Zhao, Qiang Huang, Siwei Wu, Yun Peng, Huiyan Sun
- Abstract summary: Causal inference plays a vital role in diverse domains like epidemiology, healthcare, and economics.
De-confounding and counterfactual prediction in observational data have emerged as prominent concerns in causal inference research.
We propose a novel variational learning model of unobserved confounders for counterfactual inference.
- Score: 11.191748173380539
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal inference plays a vital role in diverse domains like epidemiology,
healthcare, and economics. De-confounding and counterfactual prediction in
observational data have emerged as prominent concerns in causal inference
research. While existing models tackle observed confounders, the presence of
unobserved confounders remains a significant challenge, distorting causal
inference and impacting counterfactual outcome accuracy. To address this, we
propose a novel variational learning model of unobserved confounders for
counterfactual inference (VLUCI), which generates the posterior distribution of
unobserved confounders. VLUCI relaxes the unconfoundedness assumption often
overlooked by most causal inference methods. By disentangling observed and
unobserved confounders, VLUCI constructs a doubly variational inference model
to approximate the distribution of unobserved confounders, which are used for
inferring more accurate counterfactual outcomes. Extensive experiments on
synthetic and semi-synthetic datasets demonstrate VLUCI's superior performance
in inferring unobserved confounders. It is compatible with state-of-the-art
counterfactual inference models, significantly improving inference accuracy at
both group and individual levels. Additionally, VLUCI provides confidence
intervals for counterfactual outcomes, aiding decision-making in risk-sensitive
domains. Using the public IHDP dataset as an example, we further clarify the
considerations when applying VLUCI to cases where unobserved confounders do not
strictly conform to our model assumptions, highlighting the practical
advantages of VLUCI.
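The abstract describes VLUCI's mechanism only at a high level. The Python (PyTorch) sketch below is a rough, hypothetical illustration of the general idea, not the authors' implementation: an encoder infers a Gaussian posterior over an assumed unobserved confounder z from observed covariates x and treatment t, and two outcome heads predict the potential outcomes; Monte-Carlo sampling of z then yields interval estimates, loosely mirroring the confidence intervals mentioned above. All class names, dimensions, and the single-encoder structure are assumptions; VLUCI's actual doubly variational architecture and training objective are defined in the paper.

import torch
import torch.nn as nn

class ConfounderEncoder(nn.Module):
    # Approximate posterior q(z | x, t): returns mean and log-variance of a
    # diagonal Gaussian over the assumed unobserved confounder z.
    def __init__(self, x_dim, z_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim + 1, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)

    def forward(self, x, t):
        h = self.body(torch.cat([x, t], dim=-1))
        return self.mu(h), self.logvar(h)

class OutcomeHead(nn.Module):
    # Predicts the outcome from (x, z) for one fixed treatment arm.
    def __init__(self, x_dim, z_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1))

def counterfactual_interval(x, t, encoder, y0_head, y1_head, n_samples=200):
    # Monte-Carlo sample z ~ q(z | x, t) and report the mean and a rough
    # 95% band for both potential outcomes (a schematic stand-in for the
    # confidence intervals discussed in the abstract).
    mu, logvar = encoder(x, t)
    std = torch.exp(0.5 * logvar)
    y0_draws, y1_draws = [], []
    for _ in range(n_samples):
        z = mu + std * torch.randn_like(std)  # reparameterized Gaussian sample
        y0_draws.append(y0_head(x, z))
        y1_draws.append(y1_head(x, z))
    y0_draws, y1_draws = torch.stack(y0_draws), torch.stack(y1_draws)
    qs = torch.tensor([0.025, 0.975])
    return {"y0_mean": y0_draws.mean(0), "y0_ci": y0_draws.quantile(qs, dim=0),
            "y1_mean": y1_draws.mean(0), "y1_ci": y1_draws.quantile(qs, dim=0)}

# Example usage (shapes only, with untrained modules and random data):
# x = torch.randn(8, 10)                   # 8 units, 10 observed covariates
# t = torch.randint(0, 2, (8, 1)).float()  # binary treatment as a float column
# encoder = ConfounderEncoder(x_dim=10, z_dim=4)
# stats = counterfactual_interval(x, t, encoder, OutcomeHead(10, 4), OutcomeHead(10, 4))

The point of the sketch is that drawing z from its posterior, rather than plugging in a point estimate, is what turns each counterfactual prediction into an interval; how that posterior is learned (the variational objective and the disentanglement of observed from unobserved confounders) is where the paper's actual contribution lies.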
Related papers
- Counterfactual Generative Modeling with Variational Causal Inference [1.9287470458589586]
We present a novel variational Bayesian causal inference framework to handle counterfactual generative modeling tasks.
In experiments, we demonstrate the advantage of our framework compared to state-of-the-art models in counterfactual generative modeling.
arXiv Detail & Related papers (2024-10-16T16:44:12Z) - Self-Distilled Disentangled Learning for Counterfactual Prediction [49.84163147971955]
We propose the Self-Distilled Disentanglement framework, known as $SD^2$.
Grounded in information theory, it ensures theoretically sound independent disentangled representations without intricate mutual information estimator designs.
Our experiments, conducted on both synthetic and real-world datasets, confirm the effectiveness of our approach.
arXiv Detail & Related papers (2024-06-09T16:58:19Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate the biases of learning models against certain subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z) - Nonparametric Identifiability of Causal Representations from Unknown
Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - Causal Effect Estimation with Variational AutoEncoder and the Front Door
Criterion [23.20371860838245]
When applying the front-door criterion, it is often difficult to identify from data the set of variables used for front-door adjustment.
By leveraging the ability of deep generative models in representation learning, we propose FDVAE to learn the representation of a Front-Door adjustment set with a Variational AutoEncoder.
arXiv Detail & Related papers (2023-04-24T10:04:28Z) - Augmentation by Counterfactual Explanation -- Fixing an Overconfident
Classifier [11.233334009240947]
A highly accurate but overconfident model is ill-suited for deployment in critical applications such as healthcare and autonomous driving.
This paper proposes an application of counterfactual explanations in fixing an over-confident classifier.
arXiv Detail & Related papers (2022-10-21T18:53:16Z) - Causal Inference via Nonlinear Variable Decorrelation for Healthcare
Applications [60.26261850082012]
We introduce a novel method with a variable decorrelation regularizer to handle both linear and nonlinear confounding.
We employ association rules, mined from the original features, as new representations to increase model interpretability.
arXiv Detail & Related papers (2022-09-29T17:44:14Z) - Efficient Causal Inference from Combined Observational and
Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders with a single latent confounder.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.