Revisiting Counterfactual Regression through the Lens of Gromov-Wasserstein Information Bottleneck
- URL: http://arxiv.org/abs/2405.15505v1
- Date: Fri, 24 May 2024 12:48:24 GMT
- Title: Revisiting Counterfactual Regression through the Lens of Gromov-Wasserstein Information Bottleneck
- Authors: Hao Yang, Zexu Sun, Hongteng Xu, Xu Chen
- Abstract summary: We propose a novel learning paradigm called Gromov-Wasserstein information bottleneck (GWIB).
GWIB effectively learns the CFR model through alternating optimization, suppressing selection bias while avoiding trivial latent distributions.
Experiments on ITE estimation tasks show that GWIB consistently outperforms state-of-the-art CFR methods.
- Score: 30.31832773408621
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As a promising individualized treatment effect (ITE) estimation method, counterfactual regression (CFR) maps individuals' covariates to a latent space and predicts their counterfactual outcomes. However, the selection bias between control and treatment groups often imbalances the two groups' latent distributions and negatively impacts this method's performance. In this study, we revisit counterfactual regression through the lens of information bottleneck and propose a novel learning paradigm called Gromov-Wasserstein information bottleneck (GWIB). In this paradigm, we learn CFR by maximizing the mutual information between covariates' latent representations and outcomes while penalizing the kernelized mutual information between the latent representations and the covariates. We demonstrate that the upper bound of the penalty term can be implemented as a new regularizer consisting of $i)$ the fused Gromov-Wasserstein distance between the latent representations of different groups and $ii)$ the gap between the transport cost generated by the model and the cross-group Gromov-Wasserstein distance between the latent representations and the covariates. GWIB effectively learns the CFR model through alternating optimization, suppressing selection bias while avoiding trivial latent distributions. Experiments on ITE estimation tasks show that GWIB consistently outperforms state-of-the-art CFR methods. To benefit the research community, we release our project at https://github.com/peteryang1031/Causal-GWIB.
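The balancing regularizer described in the abstract can be made concrete. Below is a minimal sketch of term $i)$, the fused Gromov-Wasserstein distance between the latent representations of the treated and control groups, written against the POT library; the function name `fgw_balance_penalty`, the toy latents, and the squared-Euclidean cost choice are assumptions for illustration, and the full GWIB objective (the kernelized mutual-information bound and the alternating optimization) lives in the authors' repository.

```python
# Minimal sketch: a fused Gromov-Wasserstein balancing penalty between the
# latent representations of treated and control units, using the POT library.
# This illustrates only regularizer i) from the abstract, not the full GWIB
# objective or its alternating optimization.
import numpy as np
import ot  # pip install pot

def fgw_balance_penalty(z_treated, z_control, alpha=0.5):
    """FGW distance between two latent groups (numpy arrays of shape [n, d])."""
    # Feature cost: cross-group pairwise squared-Euclidean distances.
    M = ot.dist(z_treated, z_control)
    # Structure costs: intra-group pairwise distances.
    C1 = ot.dist(z_treated, z_treated)
    C2 = ot.dist(z_control, z_control)
    p = ot.unif(len(z_treated))
    q = ot.unif(len(z_control))
    # alpha trades off the feature (Wasserstein) and structure (GW) terms.
    return ot.gromov.fused_gromov_wasserstein2(M, C1, C2, p, q, alpha=alpha)

rng = np.random.default_rng(0)
z_t = rng.normal(size=(32, 8))           # hypothetical treated latents
z_c = rng.normal(loc=0.5, size=(48, 8))  # hypothetical control latents
print(fgw_balance_penalty(z_t, z_c))
```

In a CFR training loop, this penalty would be added to the factual-outcome loss to pull the two groups' latent distributions together while the outcome head is fit.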
Related papers
- Rectified Diffusion Guidance for Conditional Generation [62.00207951161297]
We revisit the theory behind CFG and rigorously confirm that the improper configuration of the combination coefficients (i.e., the widely used summing-to-one version) brings about an expectation shift of the generative distribution.
We propose ReCFG with a relaxation on the guidance coefficients such that denoising with ReCFG strictly aligns with the diffusion theory.
That way the rectified coefficients can be readily pre-computed via traversing the observed data, leaving the sampling speed barely affected.
arXiv Detail & Related papers (2024-10-24T13:41:32Z)
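To make the coefficient relaxation above concrete, here is a hedged sketch contrasting conventional classifier-free guidance, whose coefficients (1 + w) and -w sum to one, with a ReCFG-style variant in which the two coefficients are decoupled; the names `gamma_c` and `gamma_u` are hypothetical, and in the paper the rectified coefficients are pre-computed from observed data rather than hand-set as here.

```python
import numpy as np

def cfg_standard(eps_cond, eps_uncond, w):
    # Conventional CFG: coefficients (1 + w) and -w sum to one.
    return (1 + w) * eps_cond - w * eps_uncond

def cfg_relaxed(eps_cond, eps_uncond, gamma_c, gamma_u):
    # ReCFG-style relaxation (sketch): the coefficients are decoupled and
    # need not sum to one; gamma_c and gamma_u are hypothetical names.
    return gamma_c * eps_cond - gamma_u * eps_uncond

eps_c = np.zeros(4)  # placeholder conditional noise prediction
eps_u = np.ones(4)   # placeholder unconditional noise prediction
print(cfg_standard(eps_c, eps_u, w=2.0))
print(cfg_relaxed(eps_c, eps_u, gamma_c=3.0, gamma_u=1.8))
```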
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over meta-analysis-based methods as heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
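For reference, the single-source inverse propensity weighting (IPW) baseline that the collaborative estimator above builds on looks as follows; this sketch shows only the classical estimator and does not reproduce the paper's collaborative, multi-source extension.

```python
import numpy as np

def ipw_ate(y, t, e):
    """Classical IPW estimate of the average treatment effect.

    y: observed outcomes, t: binary treatment indicators,
    e: estimated propensity scores P(T=1 | X).
    """
    return np.mean(t * y / e - (1 - t) * y / (1 - e))

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
e = 1 / (1 + np.exp(-x))                 # true propensity, used as the estimate
t = rng.binomial(1, e)
y = 2.0 * t + x + rng.normal(size=1000)  # true ATE is 2.0
print(ipw_ate(y, t, e))                  # should be close to 2.0
```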
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Delta-AI: Local objectives for amortized inference in sparse graphical models [64.5938437823851]
We present a new algorithm for amortized inference in sparse probabilistic graphical models (PGMs)
Our approach is based on the observation that when the sampling of variables in a PGM is seen as a sequence of actions taken by an agent, sparsity of the PGM enables local credit assignment in the agent's policy learning objective.
We illustrate $\Delta$-AI's effectiveness for sampling from synthetic PGMs and training latent variable models with sparse factor structure.
arXiv Detail & Related papers (2023-10-03T20:37:03Z)
- Prompt Tuning Pushes Farther, Contrastive Learning Pulls Closer: A Two-Stage Approach to Mitigate Social Biases [13.837927115198308]
We propose an adversarial-training-inspired two-stage debiasing model using contrastive learning and continuous prompt augmentation.
Our approach guides the model to achieve stronger debiasing performance by adding difficulty to the training process.
arXiv Detail & Related papers (2023-07-04T09:35:03Z)
- CEnt: An Entropy-based Model-agnostic Explainability Framework to Contrast Classifiers' Decisions [2.543865489517869]
We present a novel approach to locally contrast the prediction of any classifier.
Our Contrastive Entropy-based explanation method, CEnt, approximates a model locally by a decision tree to compute entropy information of different feature splits.
CEnt is the first non-gradient-based contrastive method generating diverse counterfactuals that do not necessarily exist in the training data while satisfying immutability (e.g., race) and semi-immutability (e.g., age can only change in an increasing direction).
arXiv Detail & Related papers (2023-01-19T08:23:34Z)
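The local decision-tree approximation at the heart of CEnt can be sketched as a LIME-style surrogate: perturb around an instance, label the perturbations with the black-box model, and fit a shallow entropy-criterion tree whose splits carry the local entropy information. This sketch, assuming scikit-learn, omits CEnt's counterfactual search and its immutability constraints.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def local_tree_surrogate(model_predict, x, n_samples=500, scale=0.1, seed=0):
    """Fit a shallow decision tree to a black-box model around instance x."""
    rng = np.random.default_rng(seed)
    X_local = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    y_local = model_predict(X_local)  # black-box labels for the perturbations
    return DecisionTreeClassifier(max_depth=3, criterion="entropy").fit(
        X_local, y_local)

# Toy black-box model standing in for any classifier.
black_box = lambda X: (X.sum(axis=1) > 0).astype(int)
tree = local_tree_surrogate(black_box, np.zeros(4))
# tree.tree_ now exposes the local splits and their entropy information.
```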
- Auto-Encoding Goodness of Fit [11.543670549371361]
We develop the Goodness of Fit Autoencoder (GoFAE), which incorporates hypothesis tests at two levels.
GoFAE achieves FID scores and mean squared errors comparable to those of competing deep generative models.
arXiv Detail & Related papers (2022-10-12T19:21:57Z)
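A minimal illustration of testing goodness of fit on autoencoder latents: encode a batch and run a per-dimension normality test against a Gaussian prior. The Shapiro-Wilk test here is a stand-in assumption; GoFAE integrates its hypothesis tests into training at two levels, which this diagnostic snippet does not show.

```python
import numpy as np
from scipy import stats

def latent_gof_pvalues(z):
    """z: encoded batch of shape [n, d]; one Shapiro-Wilk p-value per dim."""
    return np.array([stats.shapiro(z[:, j]).pvalue for j in range(z.shape[1])])

z = np.random.default_rng(0).normal(size=(256, 8))  # placeholder latents
print(latent_gof_pvalues(z))  # small p-values flag non-Gaussian dimensions
```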
- Bayes in Wonderland! Predictive supervised classification inference hits unpredictability [1.8814209805277506]
We show the convergence of the sBpc and mBpc under de Finetti-type exchangeability.
We also provide a parameter estimation of the generative model giving rise to the partition exchangeable sequence.
arXiv Detail & Related papers (2021-12-03T12:34:52Z)
- Examining and Combating Spurious Features under Distribution Shift [94.31956965507085]
We define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics.
We prove that even when there is only bias in the input distribution, models can still pick up spurious features from their training data.
Inspired by our analysis, we demonstrate that group DRO can fail when groups do not directly account for various spurious correlations.
arXiv Detail & Related papers (2021-06-14T05:39:09Z)
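For context on the group DRO failure mode mentioned above, here is a hedged sketch of the group DRO objective itself: minimize the worst-group loss rather than the average loss. The paper's observation is that this helps only when the group partition actually tracks the spurious correlations; the PyTorch code below is an illustrative assumption, not the paper's implementation.

```python
import torch

def group_dro_loss(losses, group_ids, n_groups):
    """Worst-group risk. losses: per-example losses [n];
    group_ids: ints in [0, n_groups)."""
    group_losses = []
    for g in range(n_groups):
        mask = group_ids == g
        if mask.any():  # skip groups absent from this batch
            group_losses.append(losses[mask].mean())
    return torch.stack(group_losses).max()  # optimize the worst group

losses = torch.tensor([0.2, 1.5, 0.3, 0.9])
groups = torch.tensor([0, 0, 1, 1])
print(group_dro_loss(losses, groups, n_groups=2))  # tensor(0.8500)
```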
- Preventing Posterior Collapse with Levenshtein Variational Autoencoder [61.30283661804425]
We propose to replace the evidence lower bound (ELBO) with a new objective which is simple to optimize and prevents posterior collapse.
We show that Levenshtein VAE produces more informative latent representations than alternative approaches to preventing posterior collapse.
arXiv Detail & Related papers (2020-04-30T13:27:26Z)
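For reference, the standard Gaussian-VAE ELBO that the entry above proposes to replace is sketched below; posterior collapse corresponds to the KL term being driven to zero, so the latents carry no information. The Levenshtein-based objective itself is not reproduced here.

```python
import torch

def elbo(recon_log_prob, mu, logvar):
    """Standard ELBO for a Gaussian VAE. recon_log_prob: log p(x|z) per
    example; mu, logvar parameterize the posterior q(z|x)."""
    # Closed-form KL(q(z|x) || N(0, I)); collapse <=> this term -> 0.
    kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar, dim=-1)
    return (recon_log_prob - kl).mean()  # maximized during training
```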
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.