Treatment Effect Estimation using Invariant Risk Minimization
- URL: http://arxiv.org/abs/2103.07788v1
- Date: Sat, 13 Mar 2021 20:42:04 GMT
- Title: Treatment Effect Estimation using Invariant Risk Minimization
- Authors: Abhin Shah, Kartik Ahuja, Karthikeyan Shanmugam, Dennis Wei, Kush
Varshney, Amit Dhurandhar
- Abstract summary: In this work, we propose a new way to estimate the causal individual treatment effect (ITE) using the domain generalization framework of invariant risk minimization (IRM).
We propose an IRM-based ITE estimator aimed at tackling treatment assignment bias when there is little support overlap between the control group and the treatment group.
We show gains over classical regression approaches to ITE estimation in settings when support mismatch is more pronounced.
- Score: 32.9769365726994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inferring causal individual treatment effect (ITE) from observational data is
a challenging problem whose difficulty is exacerbated by the presence of
treatment assignment bias. In this work, we propose a new way to estimate the
ITE using the domain generalization framework of invariant risk minimization
(IRM). IRM uses data from multiple domains, learns predictors that do not
exploit spurious domain-dependent factors, and generalizes better to unseen
domains. We propose an IRM-based ITE estimator aimed at tackling treatment
assignment bias when there is little support overlap between the control group
and the treatment group. We accomplish this by creating diversity: given a
single dataset, we split the data into multiple domains artificially. These
diverse domains are then exploited by IRM to more effectively generalize
regression-based models to data regions that lack support overlap. We show
gains over classical regression approaches to ITE estimation in settings when
support mismatch is more pronounced.
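The abstract's core idea, artificially splitting one observational dataset into multiple domains and then applying an IRM-style objective, can be sketched as follows. This is a minimal illustration using the IRMv1 penalty for squared loss (from Arjovsky et al.'s IRM paper), not the authors' exact estimator; the covariate-threshold splitting rule, the linear model, and all variable names are illustrative assumptions.

```python
import numpy as np

def irm_penalty(pred, y):
    # IRMv1 penalty for squared loss: the squared gradient of the
    # per-environment risk w.r.t. a scalar dummy classifier w, at w=1.
    # R_e(w) = mean((w*pred - y)^2)  =>  dR_e/dw|_{w=1} = mean(2*(pred - y)*pred)
    grad = np.mean(2.0 * (pred - y) * pred)
    return grad ** 2

def irm_objective(theta, envs, lam=1.0):
    # envs: list of (X, y) pairs, one per artificial domain.
    # Sum of per-environment risk plus the invariance penalty.
    total = 0.0
    for X, y in envs:
        pred = X @ theta
        risk = np.mean((pred - y) ** 2)
        total += risk + lam * irm_penalty(pred, y)
    return total

# Artificially split a single dataset into two "domains" by thresholding
# one covariate -- a crude stand-in for the paper's splitting strategy.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -0.5, 0.2]) + 0.1 * rng.normal(size=200)
mask = X[:, 0] > 0.0
envs = [(X[mask], y[mask]), (X[~mask], y[~mask])]

print(irm_objective(np.zeros(3), envs, lam=1.0))
```

In an ITE setting, one such predictor would be fit per treatment arm, with the penalty discouraging solutions that exploit domain-specific (and hence assignment-biased) correlations.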
Related papers
- Enlarging Feature Support Overlap for Domain Generalization [9.227839292188346]
Invariant risk minimization (IRM) addresses this issue by learning invariant features and minimizing the risk across different domains.
We propose a novel method to enlarge feature support overlap for domain generalization.
Specifically, we introduce Bayesian random data augmentation to increase sample diversity and overcome the deficiency of IRM.
arXiv Detail & Related papers (2024-07-08T09:16:42Z)
- First-Order Manifold Data Augmentation for Regression Learning [4.910937238451485]
We introduce FOMA: a new data-driven domain-independent data augmentation method.
We evaluate FOMA on in-distribution generalization and out-of-distribution benchmarks, and we show that it improves the generalization of several neural architectures.
arXiv Detail & Related papers (2024-06-16T12:35:05Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- Domain Generalization with Adversarial Intensity Attack for Medical Image Segmentation [27.49427483473792]
In real-world scenarios, it is common for models to encounter data from new and different domains to which they were not exposed during training.
Domain generalization (DG) is a promising direction, as it enables models to handle data from previously unseen domains.
We introduce a novel DG method called Adversarial Intensity Attack (AdverIN), which leverages adversarial training to generate training data with an infinite number of styles.
arXiv Detail & Related papers (2023-04-05T19:40:51Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose two effective gap estimation methods: one guides the selection of a better hypothesis for the target; the other minimizes the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Self-balanced Learning For Domain Generalization [64.99791119112503]
Domain generalization aims to learn a prediction model on multi-domain source data such that the model can generalize to a target domain with unknown statistics.
Most existing approaches have been developed under the assumption that the source data is well-balanced in terms of both domain and class.
We propose a self-balanced domain generalization framework that adaptively learns the weights of losses to alleviate the bias caused by different distributions of the multi-domain source data.
arXiv Detail & Related papers (2021-08-31T03:17:54Z)
- An Online Learning Approach to Interpolation and Extrapolation in Domain Generalization [53.592597682854944]
We recast generalization over sub-groups as an online game between a player minimizing risk and an adversary presenting new test distributions.
We show that ERM is provably minimax-optimal for both tasks.
arXiv Detail & Related papers (2021-02-25T19:06:48Z)
- Harnessing Uncertainty in Domain Adaptation for MRI Prostate Lesion Segmentation [15.919637739630353]
We consider translating from mp-MRI to VERDICT, a richer MRI modality involving an acquisition optimized protocol for cancer characterization.
Our results show that this allows us to extract systematically better image representations for the target domain, when used in tandem with both simple and CycleGAN-based baselines.
arXiv Detail & Related papers (2020-10-14T21:30:27Z)
- The Risks of Invariant Risk Minimization [52.7137956951533]
Invariant Risk Minimization is an objective based on the idea of learning deep, invariant features of data.
We present the first analysis of classification under the IRM objective, as well as recently proposed alternatives, under a fairly natural and general model.
We show that IRM can fail catastrophically unless the test data are sufficiently similar to the training distribution; this is precisely the issue it was intended to solve.
arXiv Detail & Related papers (2020-10-12T14:54:32Z)
- Learning Overlapping Representations for the Estimation of Individualized Treatment Effects [97.42686600929211]
Estimating the likely outcome of alternatives from observational data is a challenging problem.
We show that algorithms that learn domain-invariant representations of inputs are often inappropriate.
We develop a deep kernel regression algorithm and posterior regularization framework that substantially outperforms the state-of-the-art on a variety of benchmark data sets.
arXiv Detail & Related papers (2020-01-14T12:56:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.