Variational Counterfactual Prediction under Runtime Domain Corruption
- URL: http://arxiv.org/abs/2306.13271v1
- Date: Fri, 23 Jun 2023 02:54:34 GMT
- Title: Variational Counterfactual Prediction under Runtime Domain Corruption
- Authors: Hechuan Wen, Tong Chen, Li Kheng Chai, Shazia Sadiq, Junbin Gao,
Hongzhi Yin
- Abstract summary: The co-occurrence of domain shift and inaccessible variables, termed runtime domain corruption, seriously impairs the generalizability of a trained counterfactual predictor.
We build an adversarially unified variational causal effect model, named VEGAN, with a novel two-stage adversarial domain adaptation scheme.
We demonstrate that VEGAN outperforms other state-of-the-art baselines on individual-level treatment effect estimation in the presence of runtime domain corruption.
- Score: 50.89405221574912
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: To date, various neural methods have been proposed for causal effect
estimation based on observational data, where a default assumption is the same
distribution and availability of variables at both training and inference
(i.e., runtime) stages. However, distribution shift (i.e., domain shift) could
happen during runtime, and bigger challenges arise from the impaired
accessibility of variables. This is commonly caused by increasing privacy and
ethical concerns, which can make arbitrary variables unavailable in the entire
runtime data and imputation impractical. We term the co-occurrence of domain
shift and inaccessible variables runtime domain corruption, which seriously
impairs the generalizability of a trained counterfactual predictor. To counter
runtime domain corruption, we subsume counterfactual prediction under the
notion of domain adaptation. Specifically, we upper-bound the error w.r.t. the
target domain (i.e., runtime covariates) by the sum of source domain error and
inter-domain distribution distance. In addition, we build an adversarially
unified variational causal effect model, named VEGAN, with a novel two-stage
adversarial domain adaptation scheme to reduce the latent distribution
disparity between treated and control groups first, and between training and
runtime variables afterwards. We demonstrate that VEGAN outperforms other
state-of-the-art baselines on individual-level treatment effect estimation in
the presence of runtime domain corruption on benchmark datasets.
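For intuition, the bound mentioned above follows the standard pattern of domain-adaptation theory: the error on the runtime (target) covariates is controlled by the error on the training (source) covariates plus an inter-domain distribution distance, up to an irreducible joint-error term. A generic statement of that form, in illustrative notation rather than the paper's own, is:

```latex
% Generic domain-adaptation bound (illustrative notation, not the paper's exact statement):
% for any hypothesis h, target-domain error <= source-domain error
%                        + inter-domain distribution distance + irreducible joint error.
\epsilon_{T}(h) \;\le\; \epsilon_{S}(h) \;+\; d\bigl(\mathcal{D}_{S},\, \mathcal{D}_{T}\bigr) \;+\; \lambda^{*}
```

Here \epsilon_S(h) and \epsilon_T(h) are the errors of a hypothesis h on the training and runtime covariate distributions D_S and D_T, d(·,·) is an inter-domain distribution distance, and \lambda^* absorbs the error of the best joint hypothesis on both domains.

The two-stage adversarial adaptation scheme can likewise be pictured as two discriminator-versus-encoder games on the latent representation: one discriminator tries to tell treated from control units, the other training from runtime covariates, and the encoder is trained to fool each in turn. The following is a minimal, hypothetical sketch of such a scheme, not the authors' code; VEGAN's variational components and outcome-prediction heads are omitted, and all class and variable names are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' released code): a two-stage adversarial
# domain-adaptation loop of the kind the abstract describes. An encoder is pushed,
# via gradient reversal, to make treated and control units indistinguishable in
# latent space (stage 1), and then training and runtime covariates
# indistinguishable (stage 2). All names here are illustrative assumptions.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)


class Encoder(nn.Module):
    def __init__(self, x_dim, z_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))

    def forward(self, x):
        return self.net(x)


def make_discriminator(z_dim=32):
    return nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, 1))


def adversarial_step(encoder, disc, x, labels, lam, opt):
    """One adversarial update: the discriminator learns to predict `labels`
    (treatment indicator in stage 1, train/runtime indicator in stage 2), while the
    reversed gradient pushes the encoder to make the two groups indistinguishable."""
    z = grad_reverse(encoder(x), lam)
    logits = disc(z).squeeze(-1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


if __name__ == "__main__":
    x_dim = 25
    enc = Encoder(x_dim)
    treat_disc, domain_disc = make_discriminator(), make_discriminator()
    params = list(enc.parameters()) + list(treat_disc.parameters()) + list(domain_disc.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)

    x = torch.randn(128, x_dim)               # toy covariates
    t = torch.randint(0, 2, (128,)).float()   # treatment indicator (treated vs. control)
    d = torch.randint(0, 2, (128,)).float()   # 0 = training batch, 1 = runtime batch

    # Stage 1: align treated vs. control latent representations.
    adversarial_step(enc, treat_disc, x, t, lam=1.0, opt=opt)
    # Stage 2: align training vs. runtime latent representations.
    adversarial_step(enc, domain_disc, x, d, lam=1.0, opt=opt)
```

A gradient-reversal layer is used here only because it lets a single optimizer drive both sides of each adversarial game; an alternating min-max update would illustrate the same idea.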
Related papers
- Optimal Aggregation of Prediction Intervals under Unsupervised Domain Shift [9.387706860375461]
A distribution shift occurs when the underlying data-generating process changes, leading to a deviation in model performance.
Prediction intervals serve as a crucial tool for characterizing the uncertainty induced by the underlying data distribution.
We propose methodologies for aggregating prediction intervals to obtain one with minimal width and adequate coverage on the target domain.
arXiv Detail & Related papers (2024-05-16T17:55:42Z)
- Proxy Methods for Domain Adaptation [78.03254010884783]
Proxy variables allow for adaptation to distribution shift without explicitly recovering or modeling latent variables.
We develop a two-stage kernel estimation approach to adapt to complex distribution shifts in both settings.
arXiv Detail & Related papers (2024-03-12T09:32:41Z)
- Domain Generalization via Selective Consistency Regularization for Time Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z)
- Controlled Generation of Unseen Faults for Partial and OpenSet&Partial Domain Adaptation [0.0]
New operating conditions can result in a performance drop of fault diagnostics models due to the domain gap between the training and the testing data distributions.
We propose a new framework based on a Wasserstein GAN for Partial and OpenSet&Partial domain adaptation.
The main contribution is the controlled fault data generation, which enables generating unobserved fault types and severity levels in the target domain.
arXiv Detail & Related papers (2022-04-29T13:05:25Z)
- Continual Test-Time Domain Adaptation [94.51284735268597]
Test-time domain adaptation aims to adapt a source pre-trained model to a target domain without using any source data.
CoTTA is easy to implement and can be readily incorporated in off-the-shelf pre-trained models.
arXiv Detail & Related papers (2022-03-25T11:42:02Z)
- Transferable Time-Series Forecasting under Causal Conditional Shift [28.059991304278572]
We propose an end-to-end model for the semi-supervised domain adaptation problem on time-series forecasting.
Our method can not only discover the Granger-causal structures among cross-domain data but also address the cross-domain time-series forecasting problem with accurate and interpretable predictions.
arXiv Detail & Related papers (2021-11-05T11:50:07Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- A Brief Review of Domain Adaptation [1.2043574473965317]
This paper focuses on unsupervised domain adaptation, where the labels are only available in the source domain.
It presents some successful shallow and deep approaches that aim to deal with domain adaptation problems.
arXiv Detail & Related papers (2020-10-07T07:05:32Z)
- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
arXiv Detail & Related papers (2020-07-06T17:59:30Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially the case where the class labels in the target domain are only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA3US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that BA3US surpasses the state of the art on partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)