Unsupervised Structural-Counterfactual Generation under Domain Shift
- URL: http://arxiv.org/abs/2502.12013v2
- Date: Sat, 22 Mar 2025 12:42:42 GMT
- Title: Unsupervised Structural-Counterfactual Generation under Domain Shift
- Authors: Krishn Vishwas Kher, Lokesh Venkata Siva Maruthi Badisa, Kusampudi Venkata Datta Sri Harsha, Chitneedi Geetha Sowmya, Saksham Mittal, SakethaNath Jagarlapudi
- Abstract summary: We present a novel generative modeling challenge: generating counterfactual samples in a target domain based on factual observations from a source domain. Our framework combines the posterior distribution of effect-intrinsic variables from the source domain with the prior distribution of domain-intrinsic variables from the target domain to synthesize the desired counterfactuals.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by the burgeoning interest in cross-domain learning, we present a novel generative modeling challenge: generating counterfactual samples in a target domain based on factual observations from a source domain. Our approach operates within an unsupervised paradigm devoid of parallel or joint datasets, relying exclusively on distinct observational samples and causal graphs for each domain. This setting presents challenges that surpass those of conventional counterfactual generation. Central to our methodology is the disambiguation of exogenous causes into effect-intrinsic and domain-intrinsic categories. This differentiation facilitates the integration of domain-specific causal graphs into a unified joint causal graph via shared effect-intrinsic exogenous variables. We propose leveraging Neural Causal models within this joint framework to enable accurate counterfactual generation under standard identifiability assumptions. Furthermore, we introduce a novel loss function that effectively segregates effect-intrinsic from domain-intrinsic variables during model training. Given a factual observation, our framework combines the posterior distribution of effect-intrinsic variables from the source domain with the prior distribution of domain-intrinsic variables from the target domain to synthesize the desired counterfactuals, adhering to Pearl's causal hierarchy. Intriguingly, when domain shifts are restricted to alterations in causal mechanisms without accompanying covariate shifts, our training regimen parallels the resolution of a conditional optimal transport problem. Empirical evaluations on a synthetic dataset show that our framework generates counterfactuals in the target domain that very closely resemble the ground truth.
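As a concrete illustration of the generation recipe described in the abstract, the toy sketch below abduces the effect-intrinsic exogenous noise from a factual source observation, samples the domain-intrinsic noise from the target prior, and pushes both through the target-domain mechanism. The linear-Gaussian mechanisms, variable names, and closed-form posterior are illustrative assumptions chosen to keep the example minimal and runnable; the paper itself trains Neural Causal models with a dedicated loss to separate the two kinds of exogenous variables.
```python
# Minimal sketch (not the authors' implementation): cross-domain counterfactual
# generation with a toy linear-Gaussian SCM. The split of exogenous noise into
# effect-intrinsic u_e (shared across domains) and domain-intrinsic u_d
# (domain-specific) follows the abstract; everything else is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

# Toy structural equation x = A * u_e + B * u_d; the two domains differ only
# in their causal mechanism (the weight A), i.e. no covariate shift.
A_SRC, B_SRC = 1.0, 0.5   # source-domain mechanism
A_TGT, B_TGT = 2.0, 0.5   # target-domain mechanism

def sample_factual(n, a, b):
    """Generate factual observations from one domain's SCM."""
    u_e = rng.normal(size=n)   # effect-intrinsic exogenous noise
    u_d = rng.normal(size=n)   # domain-intrinsic exogenous noise
    return a * u_e + b * u_d

def posterior_u_e(x_src, a=A_SRC, b=B_SRC):
    """Abduction: Gaussian posterior of u_e given the factual source observation.
    Closed-form here; the paper approximates this with a neural causal model."""
    var_x = a**2 + b**2                 # unit-variance priors on u_e and u_d
    mean = (a / var_x) * x_src
    var = 1.0 - a**2 / var_x
    return mean, var

def counterfactual_in_target(x_src, n_samples=1000):
    """Combine the source posterior of u_e with the target prior of u_d and
    decode through the *target* mechanism (abduction -> action -> prediction)."""
    mean, var = posterior_u_e(x_src)
    u_e = rng.normal(mean, np.sqrt(var), size=n_samples)   # source-domain posterior
    u_d = rng.normal(size=n_samples)                        # target-domain prior
    return A_TGT * u_e + B_TGT * u_d

x_factual = sample_factual(1, A_SRC, B_SRC).item()
x_cf = counterfactual_in_target(x_factual)
print(f"factual (source): {x_factual:.3f}, counterfactual mean (target): {x_cf.mean():.3f}")
```
Because the two toy domains above differ only in the mechanism weight and not in the exogenous noise distributions, this is exactly the regime in which, per the abstract, training the model parallels solving a conditional optimal transport problem.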
Related papers
- Partial Transportability for Domain Generalization [56.37032680901525]
Building on the theory of partial identification and transportability, this paper introduces new results for bounding the value of a functional of the target distribution.
Our contribution is to provide the first general estimation technique for transportability problems.
We propose a gradient-based optimization scheme for making scalable inferences in practice.
arXiv Detail & Related papers (2025-03-30T22:06:37Z) - Time Series Domain Adaptation via Latent Invariant Causal Mechanism [28.329164754662354]
Time series domain adaptation aims to transfer the complex temporal dependence from the labeled source domain to the unlabeled target domain.
Recent advances leverage the stable causal mechanism over observed variables to model the domain-invariant temporal dependence.
However, modeling precise causal structures in high-dimensional data, such as videos, remains challenging.
arXiv Detail & Related papers (2025-02-23T16:25:58Z) - Optimal Aggregation of Prediction Intervals under Unsupervised Domain Shift [9.387706860375461]
A distribution shift occurs when the underlying data-generating process changes, leading to a deviation in the model's performance.
Prediction intervals serve as a crucial tool for characterizing the uncertainty induced by the underlying data distribution.
We propose methodologies for aggregating prediction intervals to obtain one with minimal width and adequate coverage on the target domain.
arXiv Detail & Related papers (2024-05-16T17:55:42Z) - Domain Generalization via Causal Adjustment for Cross-Domain Sentiment
Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations (the standard backdoor adjustment formula is sketched after this list for reference).
A series of experiments demonstrates the strong performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z) - Algorithmic Fairness Generalization under Covariate and Dependence Shifts Simultaneously [28.24666589680547]
We introduce a simple but effective approach that aims to learn a fair and invariant classifier.
By augmenting various synthetic data domains through the model, we learn a fair and invariant classifier in source domains.
This classifier can then be generalized to unknown target domains, maintaining both model prediction and fairness concerns.
arXiv Detail & Related papers (2023-11-23T05:52:00Z) - Cross Contrasting Feature Perturbation for Domain Generalization [11.863319505696184]
Domain generalization aims to learn a robust model from source domains that generalize well on unseen target domains.
Recent studies focus on generating novel domain samples or features to diversify distributions complementary to source domains.
We propose an online one-stage Cross Contrasting Feature Perturbation framework to simulate domain shift.
arXiv Detail & Related papers (2023-07-24T03:27:41Z) - Variational Counterfactual Prediction under Runtime Domain Corruption [50.89405221574912]
The co-occurrence of domain shift and inaccessible variables, termed runtime domain corruption, seriously impairs the generalizability of a trained counterfactual predictor.
We build an adversarially unified variational causal effect model, named VEGAN, with a novel two-stage adversarial domain adaptation scheme.
We demonstrate that VEGAN outperforms other state-of-the-art baselines on individual-level treatment effect estimation in the presence of runtime domain corruption.
arXiv Detail & Related papers (2023-06-23T02:54:34Z) - Demystifying Disagreement-on-the-Line in High Dimensions [34.103373453782744]
We develop a theoretical foundation for analyzing disagreement in high-dimensional random features regression.
Experiments on CIFAR-10-C, Tiny ImageNet-C, and Camelyon17 are consistent with our theory and support the universality of the theoretical findings.
arXiv Detail & Related papers (2023-01-31T02:31:18Z) - Relation Matters: Foreground-aware Graph-based Relational Reasoning for
Domain Adaptive Object Detection [81.07378219410182]
We propose a new and general framework for domain adaptive object detection, named Foreground-aware Graph-based Relational Reasoning (FGRR).
FGRR incorporates graph structures into the detection pipeline to explicitly model the intra- and inter-domain foreground object relations.
Empirical results demonstrate that the proposed FGRR exceeds the state of the art on four domain adaptive object detection benchmarks.
arXiv Detail & Related papers (2022-06-06T05:12:48Z) - Which Invariance Should We Transfer? A Causal Minimax Learning Approach [18.71316951734806]
We present a comprehensive minimax analysis from a causal perspective.
We propose an efficient algorithm to search for the subset with minimal worst-case risk.
The effectiveness and efficiency of our methods are demonstrated on synthetic data and the diagnosis of Alzheimer's disease.
arXiv Detail & Related papers (2021-07-05T09:07:29Z) - Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z) - Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation supports the unlabeled target domain by relying on well-established source-domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space end up destroying the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
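For reference, the backdoor adjustment mentioned in the cross-domain sentiment analysis entry above is Pearl's standard adjustment formula; reading the adjustment set Z as the domain-specific factor is an illustrative interpretation of that abstract, not a quotation from the paper.
```latex
% Backdoor adjustment: if Z satisfies the backdoor criterion relative to (X, Y),
% the interventional distribution is identified from purely observational quantities.
\[
  P\bigl(Y \mid \mathrm{do}(X = x)\bigr)
  \;=\; \sum_{z} P\bigl(Y \mid X = x,\, Z = z\bigr)\, P(Z = z)
\]
```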
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.