De-confounding Representation Learning for Counterfactual Inference on
Continuous Treatment via Generative Adversarial Network
- URL: http://arxiv.org/abs/2307.12625v1
- Date: Mon, 24 Jul 2023 08:56:25 GMT
- Title: De-confounding Representation Learning for Counterfactual Inference on
Continuous Treatment via Generative Adversarial Network
- Authors: Yonghe Zhao, Qiang Huang, Haolong Zeng, Yun Peng, Huiyan Sun
- Abstract summary: Counterfactual inference for continuous rather than binary treatment variables is more common in real-world causal inference tasks.
We propose a de-confounding representation learning (DRL) framework for counterfactual outcome estimation of continuous treatment.
We show that the DRL model learns effective de-confounding representations and outperforms state-of-the-art counterfactual inference models for continuous treatment variables.
- Score: 5.465397606401007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Counterfactual inference for continuous rather than binary treatment
variables is more common in real-world causal inference tasks. While some
sample reweighting methods based on Marginal Structural Models exist for
eliminating confounding bias, they generally focus only on removing the
treatment's linear dependence on confounders and rely on the accuracy of the
assumed parametric models, which are usually unverifiable. In this paper, we
propose a de-confounding representation learning (DRL) framework for
counterfactual outcome estimation under continuous treatment that generates
representations of covariates disentangled from the treatment variables. DRL
is a non-parametric model that eliminates both linear and nonlinear dependence
between treatment and covariates. Specifically, we adversarially train the
correlation between the de-confounded representations and the treatment
variables against the correlation between the covariate representations and
the treatment variables to eliminate confounding bias. Further, a
counterfactual inference network is embedded into the framework so that the
learned representations serve both de-confounding and reliable inference.
Extensive experiments on synthetic datasets show that the DRL model learns
effective de-confounding representations and outperforms state-of-the-art
counterfactual inference models for continuous treatment variables. In
addition, we apply the DRL model to the real-world medical dataset MIMIC and
demonstrate a detailed causal relationship between red cell distribution width
and mortality.
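To make the adversarial de-confounding idea in the abstract concrete, below is a minimal PyTorch-style sketch of one way such a framework could be wired together. It is an illustration only, not the authors' architecture or released code: the class names (Encoder, TreatmentCritic, OutcomeHead), layer sizes, the squared-error "correlation" proxy, and the hyperparameter lam are all assumptions introduced here.

```python
# Hypothetical sketch (not the paper's actual model): an encoder produces a
# covariate representation z, an adversarial critic tries to recover the
# continuous treatment t from z, and an outcome head predicts y from (z, t).
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps covariates x to a representation z meant to carry no information about t."""
    def __init__(self, x_dim, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))

    def forward(self, x):
        return self.net(x)


class TreatmentCritic(nn.Module):
    """Adversary: regresses the continuous treatment t from the representation z."""
    def __init__(self, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, z):
        return self.net(z)


class OutcomeHead(nn.Module):
    """Counterfactual inference network: predicts the outcome y from (z, t)."""
    def __init__(self, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + 1, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, z, t):
        return self.net(torch.cat([z, t], dim=-1))


def train_step(x, t, y, enc, critic, head, opt_critic, opt_enc, lam=1.0):
    """One alternating update; x: (n, x_dim), t and y: (n, 1)."""
    # 1) Critic step: get better at predicting t from a frozen representation.
    z = enc(x).detach()
    critic_loss = ((critic(z) - t) ** 2).mean()
    opt_critic.zero_grad()
    critic_loss.backward()
    opt_critic.step()

    # 2) Encoder/outcome step: fit the observed outcomes while *increasing* the
    #    critic's error, pushing z toward independence from t (the adversarial
    #    de-confounding term).
    z = enc(x)
    fit_loss = ((head(z, t) - y) ** 2).mean()
    adv_loss = -((critic(z) - t) ** 2).mean()
    opt_enc.zero_grad()
    (fit_loss + lam * adv_loss).backward()
    opt_enc.step()
    return critic_loss.item(), fit_loss.item()
```

In this sketch, opt_critic is assumed to optimize only the critic's parameters, while opt_enc optimizes both the encoder and the outcome head (e.g. torch.optim.Adam(list(enc.parameters()) + list(head.parameters()))); after training, a counterfactual outcome for a new dose t' would be read off as head(enc(x), t').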
Related papers
- Contrastive Balancing Representation Learning for Heterogeneous Dose-Response Curves Estimation [34.20279432270329]
Estimating the individuals' potential response to varying treatment doses is crucial for decision-making in areas such as precision medicine and management science.
We propose a novel Contrastive balancing Representation learning Network using a partial distance measure, called CRNet, for estimating the heterogeneous dose-response curves.
arXiv Detail & Related papers (2024-03-21T08:41:53Z) - Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z) - Rethinking Radiology Report Generation via Causal Inspired Counterfactual Augmentation [11.266364967223556]
Radiology Report Generation (RRG) draws attention as a vision-and-language interaction of biomedical fields.
Previous works inherited the ideology of traditional language generation tasks, aiming to generate paragraphs with high readability as reports.
Despite significant progress, the independence between diseases-a specific property of RRG-was neglected, yielding the models being confused by the co-occurrence of diseases brought on by the biased data distribution.
arXiv Detail & Related papers (2023-11-22T10:55:36Z) - A Causal Ordering Prior for Unsupervised Representation Learning [27.18951912984905]
Causal representation learning argues that factors of variation in a dataset are, in fact, causally related.
We propose a fully unsupervised representation learning method that considers a data generation process with a latent additive noise model.
arXiv Detail & Related papers (2023-07-11T18:12:05Z) - Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z) - Nonparametric Identifiability of Causal Representations from Unknown
Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - Continuous-Time Modeling of Counterfactual Outcomes Using Neural
Controlled Differential Equations [84.42837346400151]
Estimating counterfactual outcomes over time has the potential to unlock personalized healthcare.
Existing causal inference approaches consider regular, discrete-time intervals between observations and treatment decisions.
We propose a controllable simulation environment based on a model of tumor growth for a range of scenarios.
arXiv Detail & Related papers (2022-06-16T17:15:15Z) - Dimension-Free Average Treatment Effect Inference with Deep Neural
Networks [6.704751710867747]
This paper investigates the estimation and inference of the average treatment effect (ATE) using deep neural networks (DNNs) in the potential outcomes framework.
We show that both DNN estimates of ATE are consistent with dimension-free consistency rates under some assumptions on the underlying true mean regression model.
arXiv Detail & Related papers (2021-12-02T19:28:37Z) - Harmonization with Flow-based Causal Inference [12.739380441313022]
This paper presents a normalizing-flow-based method to perform counterfactual inference upon a structural causal model (SCM) to harmonize medical data.
We evaluate on multiple, large, real-world medical datasets to observe that this method leads to better cross-domain generalization compared to state-of-the-art algorithms.
arXiv Detail & Related papers (2021-06-12T19:57:35Z) - Efficient Causal Inference from Combined Observational and
Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z) - On Disentangled Representations Learned From Correlated Data [59.41587388303554]
We bridge the gap to real-world scenarios by analyzing the behavior of the most prominent disentanglement approaches on correlated data.
We show that systematically induced correlations in the dataset are being learned and reflected in the latent representations.
We also demonstrate how to resolve these latent correlations, either using weak supervision during training or by post-hoc correcting a pre-trained model with a small number of labels.
arXiv Detail & Related papers (2020-06-14T12:47:34Z)