Counterfactual Representation Learning with Balancing Weights
- URL: http://arxiv.org/abs/2010.12618v2
- Date: Wed, 24 Feb 2021 03:01:10 GMT
- Title: Counterfactual Representation Learning with Balancing Weights
- Authors: Serge Assaad, Shuxi Zeng, Chenyang Tao, Shounak Datta, Nikhil Mehta,
Ricardo Henao, Fan Li, Lawrence Carin
- Abstract summary: Key to causal inference with observational data is achieving balance in predictive features associated with each treatment type.
Recent literature has explored representation learning to achieve this goal.
We develop an algorithm for flexible, scalable and accurate estimation of causal effects.
- Score: 74.67296491574318
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A key to causal inference with observational data is achieving balance in
predictive features associated with each treatment type. Recent literature has
explored representation learning to achieve this goal. In this work, we discuss
the pitfalls of these strategies - such as a steep trade-off between achieving
balance and predictive power - and present a remedy via the integration of
balancing weights in causal learning. Specifically, we theoretically link
balance to the quality of propensity estimation, emphasize the importance of
identifying a proper target population, and elaborate on the complementary
roles of feature balancing and weight adjustments. Using these concepts, we
then develop an algorithm for flexible, scalable and accurate estimation of
causal effects. Finally, we show how the learned weighted representations may
serve to facilitate alternative causal learning procedures with appealing
statistical features. We conduct an extensive set of experiments on both
synthetic examples and standard benchmarks, and report encouraging results
relative to state-of-the-art baselines.
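The abstract's central idea, using balancing weights derived from estimated propensities to equalize feature distributions across treatment arms, can be illustrated with a classical inverse-propensity-weighting (IPW) sketch. This is not the paper's algorithm (which combines weighting with learned representations); it is a minimal, self-contained toy on synthetic data, with all names and the data-generating process invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Confounder x shifts both the treatment probability and the outcome.
x = rng.normal(size=n)
p_true = 1.0 / (1.0 + np.exp(-1.5 * x))      # true propensity e(x)
t = (rng.random(n) < p_true).astype(float)   # treatment assignment
y = 2.0 * t + 3.0 * x + rng.normal(size=n)   # true ATE = 2.0

# Fit a 1-D logistic-regression propensity model by gradient ascent.
w, b = 0.0, 0.0
for _ in range(500):
    e = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w += 0.5 * np.mean((t - e) * x)
    b += 0.5 * np.mean(t - e)
e_hat = 1.0 / (1.0 + np.exp(-(w * x + b)))
e_hat = np.clip(e_hat, 1e-3, 1 - 1e-3)       # stabilize extreme weights

# Naive difference in means vs. the IPW estimate of the ATE.
ate_naive = y[t == 1].mean() - y[t == 0].mean()
ate_ipw = np.mean(t * y / e_hat) - np.mean((1 - t) * y / (1 - e_hat))
print(f"naive={ate_naive:.2f}  ipw={ate_ipw:.2f}  (true ATE = 2.0)")
```

The naive contrast is biased upward because treated units have systematically larger x, while reweighting by the estimated propensities recovers an estimate near the true effect; the clipping step reflects the abstract's point that weight quality hinges on propensity estimation.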
Related papers
- Visual Data Diagnosis and Debiasing with Concept Graphs [50.84781894621378]
We present ConBias, a framework for diagnosing and mitigating Concept co-occurrence Biases in visual datasets.
We show that by employing a novel clique-based concept balancing strategy, we can mitigate these imbalances, leading to enhanced performance on downstream tasks.
arXiv Detail & Related papers (2024-09-26T16:59:01Z)
- Towards Representation Learning for Weighting Problems in Design-Based Causal Inference [1.1060425537315088]

We propose an end-to-end estimation procedure that learns a flexible representation, while retaining promising theoretical properties.
We show that this approach is competitive in a range of common causal inference tasks.
arXiv Detail & Related papers (2024-09-24T19:16:37Z)
- Learning Confidence Bounds for Classification with Imbalanced Data [42.690254618937196]
We propose a novel framework that leverages learning theory and concentration inequalities to overcome the shortcomings of traditional solutions.
Our method can effectively adapt to the varying degrees of imbalance across different classes, resulting in more robust and reliable classification outcomes.
arXiv Detail & Related papers (2024-07-16T16:02:27Z)
- Debiased Collaborative Filtering with Kernel-Based Causal Balancing [28.89858891537214]
We propose an algorithm that adaptively balances the kernel function and theoretically analyze the generalization error bound of our methods.
We conduct extensive experiments to demonstrate the effectiveness of our methods.
arXiv Detail & Related papers (2024-04-30T14:43:51Z)
- Survey on Imbalanced Data, Representation Learning and SEP Forecasting [0.9065034043031668]
Deep Learning methods have significantly advanced various data-driven tasks such as regression, classification, and forecasting.
Much of this progress has been predicated on the strong but often unrealistic assumption that training datasets are balanced with respect to the targets they contain.
This misalignment with real-world conditions, where data is frequently imbalanced, hampers the effectiveness of such models in practical applications.
We present deep learning works that step away from the balanced-data assumption, employing strategies like representation learning to better approximate real-world imbalances.
arXiv Detail & Related papers (2023-10-11T15:38:53Z)
- Towards Balanced Learning for Instance Recognition [149.76724446376977]
We propose Libra R-CNN, a framework towards balanced learning for instance recognition.
It integrates IoU-balanced sampling, a balanced feature pyramid, and objective re-weighting, reducing imbalance at the sample, feature, and objective levels respectively.
arXiv Detail & Related papers (2021-08-23T13:40:45Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
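The "double robust" property this entry refers to can be illustrated with the classical augmented IPW (AIPW) estimator, which stays consistent if either the outcome model or the propensity model is correct. The sketch below is not the paper's representation-learning method; it is an invented toy in which the outcome model is correct and the propensity model is deliberately misspecified (a constant), yet the estimate still recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
x = rng.normal(size=n)
e_true = 1.0 / (1.0 + np.exp(-1.5 * x))      # true propensity
t = (rng.random(n) < e_true).astype(float)
y = 2.0 * t + 3.0 * x + rng.normal(size=n)   # true ATE = 2.0

# Correct outcome models (linear in x), fit separately per arm.
b1 = np.polyfit(x[t == 1], y[t == 1], 1)
b0 = np.polyfit(x[t == 0], y[t == 0], 1)
m1, m0 = np.polyval(b1, x), np.polyval(b0, x)

# Deliberately misspecified propensity model: a constant 0.5.
e_hat = np.full(n, 0.5)

# AIPW combines both models; either one being right gives consistency.
ate_aipw = np.mean(m1 - m0
                   + t * (y - m1) / e_hat
                   - (1 - t) * (y - m0) / (1 - e_hat))
print(f"AIPW ATE = {ate_aipw:.2f}  (true ATE = 2.0)")
```

Because the fitted outcome models have mean-zero residuals within each arm, the weighted correction terms vanish on average and the estimate is driven by the (correct) outcome regression, which is the double-robustness guarantee in miniature.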
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- Precise Tradeoffs in Adversarial Training for Linear Regression [55.764306209771405]
We provide a precise and comprehensive understanding of the role of adversarial training in the context of linear regression with Gaussian features.
We precisely characterize the standard/robust accuracy and the corresponding tradeoff achieved by a contemporary mini-max adversarial training approach.
Our theory for adversarial training algorithms also facilitates the rigorous study of how a variety of factors (size and quality of training data, model overparametrization etc.) affect the tradeoff between these two competing accuracies.
arXiv Detail & Related papers (2020-02-24T19:01:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.