Cycle-Balanced Representation Learning For Counterfactual Inference
- URL: http://arxiv.org/abs/2110.15484v1
- Date: Fri, 29 Oct 2021 01:15:16 GMT
- Title: Cycle-Balanced Representation Learning For Counterfactual Inference
- Authors: Guanglin Zhou and Lina Yao and Xiwei Xu and Chen Wang and Liming Zhu
- Abstract summary: We propose a novel framework based on Cycle-Balanced REpresentation learning for counterfactual inference (CBRE).
Specifically, we realize a robust balanced representation for different groups using adversarial training, and meanwhile construct an information loop that preserves original data properties cyclically.
Results on three real-world datasets demonstrate that CBRE matches or outperforms state-of-the-art methods and has great potential to be applied to counterfactual inference.
- Score: 42.229586802733806
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the widespread accumulation of observational data, researchers have a
new direction for learning counterfactual effects in many domains (e.g., health
care and computational advertising) without Randomized Controlled Trials (RCTs).
However, observational data suffer from inherently missing counterfactual
outcomes and from distribution discrepancy between treatment and control groups
due to behaviour preference. Motivated by recent advances in representation
learning in the field of domain adaptation, we propose a novel framework based
on Cycle-Balanced REpresentation learning for counterfactual inference (CBRE)
to solve the above problems. Specifically, we realize a robust balanced
representation for different groups using adversarial training, and meanwhile
construct an information loop that preserves original data properties
cyclically, which reduces information loss when transforming data into the
latent representation space. Experimental results on three real-world datasets
demonstrate that CBRE matches or outperforms state-of-the-art methods and has
great potential to be applied to counterfactual inference.
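The abstract combines two ingredients: a balancing term that aligns treated and control representations, and a cycle-style information loop that penalizes reconstruction loss. The following is a minimal NumPy sketch of how such a composite objective could be assembled; the linear encoder/decoder weights, the mean-difference surrogate standing in for CBRE's adversarial discriminator, and the trade-off weights are all hypothetical illustrations, not the paper's actual networks or losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observational data: covariates x, binary treatment t, outcome y.
n, d, k = 100, 5, 3
x = rng.normal(size=(n, d))
t = rng.integers(0, 2, size=n)
y = x @ rng.normal(size=d) + 0.5 * t + rng.normal(scale=0.1, size=n)

# Hypothetical linear encoder/decoder standing in for CBRE's networks.
W_enc = rng.normal(size=(d, k)) * 0.1   # x -> latent representation phi
W_dec = rng.normal(size=(k, d)) * 0.1   # phi -> reconstructed x

phi = x @ W_enc
x_hat = phi @ W_dec

# Cycle ("information loop") term: penalize information lost in the
# encode-decode round trip.
cycle_loss = float(np.mean((x - x_hat) ** 2))

# Balancing term: CBRE uses adversarial training; here a simple
# mean-difference surrogate between treated and control latents.
balance_loss = float(np.sum(
    (phi[t == 1].mean(axis=0) - phi[t == 0].mean(axis=0)) ** 2))

# Factual outcome term with a shared linear head plus a treatment shift.
w_out = rng.normal(size=k) * 0.1
y_hat = phi @ w_out + 0.5 * t
factual_loss = float(np.mean((y - y_hat) ** 2))

alpha, beta = 1.0, 0.1  # hypothetical trade-off weights
total_loss = factual_loss + alpha * balance_loss + beta * cycle_loss
print(total_loss)
```

In the actual framework the three terms would be minimized jointly over network parameters (with the balancing term driven by a discriminator); the sketch only shows how the losses compose.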
Related papers
- Deriving Causal Order from Single-Variable Interventions: Guarantees & Algorithm [14.980926991441345]
We show that the causal order can be effectively extracted from datasets containing interventional data under realistic assumptions about the data distribution.
We introduce interventional faithfulness, which relies on comparisons between the marginal distributions of each variable across observational and interventional settings.
We also introduce Intersort, an algorithm designed to infer the causal order from datasets containing large numbers of single-variable interventions.
arXiv Detail & Related papers (2024-05-28T16:07:17Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Moderately-Balanced Representation Learning for Treatment Effects with Orthogonality Information [14.040918087553177]
Estimating the average treatment effect (ATE) from observational data is challenging due to selection bias.
We propose a moderately-balanced representation learning framework.
This framework protects the representation from being over-balanced via multi-task learning.
arXiv Detail & Related papers (2022-09-05T13:20:12Z)
- Enhancing Counterfactual Classification via Self-Training [9.484178349784264]
We propose a self-training algorithm which imputes outcomes with categorical values for finite unseen actions in observational data to simulate a randomized trial through pseudolabeling.
We demonstrate the effectiveness of the proposed algorithms on both synthetic and real datasets.
arXiv Detail & Related papers (2021-12-08T18:42:58Z)
- Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective [72.55093886515824]
We introduce a causal formalism of motion forecasting, which casts the problem as a dynamic process with three groups of latent variables.
We devise a modular architecture that factorizes the representations of invariant mechanisms and style confounders to approximate a causal graph.
Experiment results on synthetic and real datasets show that our three proposed components significantly improve the robustness and reusability of the learned motion representations.
arXiv Detail & Related papers (2021-11-29T18:59:09Z)
- Group-disentangled Representation Learning with Weakly-Supervised Regularization [13.311886256230814]
GroupVAE is a simple yet effective Kullback-Leibler divergence-based regularization to enforce consistent and disentangled representations.
We demonstrate that learning group-disentangled representations improve upon downstream tasks, including fair classification and 3D shape-related tasks such as reconstruction, classification, and transfer learning.
arXiv Detail & Related papers (2021-10-23T10:01:05Z)
- Fair Representation Learning using Interpolation Enabled Disentanglement [9.043741281011304]
We propose a novel method to address two key questions: (a) Can we simultaneously learn fair disentangled representations while ensuring the utility of the learned representation for downstream tasks? (b) Can we provide theoretical insights into when the proposed approach will be both fair and accurate?
To address the former, we propose the method FRIED, Fair Representation learning using Interpolation Enabled Disentanglement.
arXiv Detail & Related papers (2021-07-31T17:32:12Z)
- Provably Efficient Causal Reinforcement Learning with Confounded Observational Data [135.64775986546505]
We study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs)
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss which performs better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss can provide significant improvement on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
- Generalization Bounds and Representation Learning for Estimation of Potential Outcomes and Causal Effects [61.03579766573421]
We study estimation of individual-level causal effects, such as a single patient's response to alternative medication.
We devise representation learning algorithms that minimize our bound, by regularizing the representation's induced treatment group distance.
We extend these algorithms to simultaneously learn a weighted representation to further reduce treatment group distances.
arXiv Detail & Related papers (2020-01-21T10:16:33Z)
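The last entry's regularizer, a penalty on the representation's induced distance between treatment groups, can be sketched with a simple linear-kernel maximum mean discrepancy surrogate. This is an illustrative NumPy sketch only: the paper works with general integral probability metrics, and the sample sizes, `alpha`, and placeholder factual loss here are invented for the example.

```python
import numpy as np

def linear_mmd(a, b):
    """Squared linear-kernel MMD between two sample sets (rows = samples)."""
    return float(np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2))

rng = np.random.default_rng(1)
phi_treated = rng.normal(loc=0.3, size=(40, 4))  # latent reps of treated units
phi_control = rng.normal(loc=0.0, size=(60, 4))  # latent reps of controls

dist = linear_mmd(phi_treated, phi_control)

# Regularized risk of the form: factual loss + alpha * group distance,
# minimized over the encoder that produces the representations.
alpha = 0.5                 # hypothetical trade-off weight
factual_loss = 1.0          # placeholder for the factual prediction loss
objective = factual_loss + alpha * dist
print(dist, objective)
```

Driving `dist` toward zero makes the treated and control representations indistinguishable in means, which is the mechanism behind the bound-minimizing algorithms the entry describes.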
This list is automatically generated from the titles and abstracts of the papers in this site.