Contrastive ACE: Domain Generalization Through Alignment of Causal
Mechanisms
- URL: http://arxiv.org/abs/2106.00925v1
- Date: Wed, 2 Jun 2021 04:01:22 GMT
- Title: Contrastive ACE: Domain Generalization Through Alignment of Causal
Mechanisms
- Authors: Yunqi Wang, Furui Liu, Zhitang Chen, Qing Lian, Shoubo Hu, Jianye Hao,
Yik-Chung Wu
- Abstract summary: Domain generalization aims to learn knowledge invariant across different distributions.
We consider the causal invariance of the average causal effect of the features to the labels.
- Score: 34.99779761100095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization aims to learn, from multiple source domains,
knowledge that is invariant across different distributions yet semantically
meaningful for downstream tasks, so as to improve the model's generalization
ability on unseen target domains. The fundamental objective is to understand
the underlying "invariance"
behind these observational distributions and such invariance has been shown to
have a close connection to causality. While many existing approaches make use
of the property that causal features are invariant across domains, we consider
the causal invariance of the average causal effect of the features to the
labels. This invariance regularizes our training approach in which
interventions are performed on features to enforce stability of the causal
prediction by the classifier across domains. Our work thus sheds some light on
the domain generalization problem by introducing invariance of the mechanisms
into the learning process. Experiments on several benchmark datasets
demonstrate the performance of the proposed method against state-of-the-art
approaches.
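The core idea above, intervening on features and requiring the resulting average causal effect (ACE) on the classifier's output to match across domains, can be sketched as a toy penalty. Everything below (the function name, the additive-shift intervention, and the squared-gap penalty) is an illustrative assumption, not the paper's actual training objective:

```python
import numpy as np

def ace_invariance_penalty(classify, x_a, x_b, delta=1.0):
    """Toy penalty encouraging equal average causal effects across two domains.

    For each feature j, intervene by shifting that feature by `delta`
    (a crude stand-in for do(x_j := x_j + delta)) and measure the average
    change in the classifier's output. The penalty is the squared gap
    between the per-feature effects estimated in domain A and domain B.
    """
    n_features = x_a.shape[1]
    penalty = 0.0
    for j in range(n_features):
        effects = []
        for x in (x_a, x_b):
            x_do = x.copy()
            x_do[:, j] += delta  # intervention on feature j
            # average causal effect of the intervention on the prediction
            effects.append(np.mean(classify(x_do) - classify(x)))
        penalty += (effects[0] - effects[1]) ** 2  # cross-domain mismatch
    return float(penalty)
```

For a linear classifier the effect of a shift intervention is the same in every domain, so the penalty vanishes; for a nonlinear classifier evaluated on domains with different feature distributions, the effects diverge and the penalty grows.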
Related papers
- Causal Representation-Based Domain Generalization on Gaze Estimation [10.283904882611463]
We propose the Causal Representation-Based Domain Generalization on Gaze Estimation framework.
We employ adversarial training and an additional penalizing term to extract domain-invariant features.
By leveraging these modules, CauGE ensures that the neural networks learn from representations that meet the causal mechanisms' general principles.
arXiv Detail & Related papers (2024-08-30T01:45:22Z) - Causality-inspired Latent Feature Augmentation for Single Domain Generalization [13.735443005394773]
Single domain generalization (Single-DG) intends to develop a generalizable model with only a single training domain that performs well on other unknown target domains.
Under the domain-hungry configuration, how to expand the coverage of source domain and find intrinsic causal features across different distributions is the key to enhancing the models' generalization ability.
We propose a novel causality-inspired latent feature augmentation method for Single-DG by learning the meta-knowledge of feature-level transformation based on causal learning and interventions.
arXiv Detail & Related papers (2024-06-10T02:42:25Z) - Algorithmic Fairness Generalization under Covariate and Dependence Shifts Simultaneously [28.24666589680547]
We introduce a simple but effective approach that aims to learn a fair and invariant classifier.
By augmenting various synthetic data domains through the model, we learn a fair and invariant classifier in source domains.
This classifier can then be generalized to unknown target domains, maintaining both model prediction and fairness concerns.
arXiv Detail & Related papers (2023-11-23T05:52:00Z) - Multi-Domain Causal Representation Learning via Weak Distributional
Invariances [27.72497122405241]
Causal representation learning has emerged as the center of action in causal machine learning research.
We show that autoencoders that incorporate such invariances can provably identify the stable set of latents from the rest across different settings.
arXiv Detail & Related papers (2023-10-04T14:41:41Z) - Instrumental Variable-Driven Domain Generalization with Unobserved
Confounders [53.735614014067394]
Domain generalization (DG) aims to learn from multiple source domains a model that can generalize well on unseen target domains.
We propose an instrumental variable-driven DG method (IV-DG) by removing the bias of the unobserved confounders with two-stage learning.
In the first stage, it learns the conditional distribution of the input features of one domain given input features of another domain.
In the second stage, it estimates the relationship by predicting labels with the learned conditional distribution.
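The two-stage recipe above echoes classic instrumental-variable estimation. As a minimal numerical illustration (plain two-stage least squares on synthetic data, not the neural IV-DG procedure itself), the instrument plays the role that the other domain's input features play in IV-DG:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # unobserved confounder
x = 2.0 * z + u + rng.normal(size=n)          # treatment, confounded by u
y = 3.0 * x + 5.0 * u + rng.normal(size=n)    # outcome; true causal effect of x is 3

# Naive least squares is biased upward by the confounder u
naive = x @ y / (x @ x)

# Stage 1: regress the treatment on the instrument
b1 = z @ x / (z @ z)
x_hat = b1 * z

# Stage 2: regress the outcome on the stage-1 prediction;
# the confounding through u is removed because x_hat depends only on z
iv = x_hat @ y / (x_hat @ x_hat)
```

The stage-two estimate recovers the true coefficient of 3 up to sampling noise, while the naive regression overshoots it because of the confounder.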
arXiv Detail & Related papers (2021-10-04T13:32:57Z) - Variational Disentanglement for Domain Generalization [68.85458536180437]
We propose to tackle the problem of domain generalization with an effective framework named the Variational Disentanglement Network (VDN).
VDN is capable of disentangling the domain-specific features and task-specific features, where the task-specific features are expected to be better generalized to unseen but related test data.
arXiv Detail & Related papers (2021-09-13T09:55:32Z) - Self-balanced Learning For Domain Generalization [64.99791119112503]
Domain generalization aims to learn a prediction model on multi-domain source data such that the model can generalize to a target domain with unknown statistics.
Most existing approaches have been developed under the assumption that the source data is well-balanced in terms of both domain and class.
We propose a self-balanced domain generalization framework that adaptively learns the weights of losses to alleviate the bias caused by different distributions of the multi-domain source data.
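The paper learns its loss weights adaptively during training; for intuition, a static baseline that counters the same kind of imbalance is inverse-frequency weighting over domains (or classes). The helper below is a hypothetical illustration, not the paper's method:

```python
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Weight each group (a domain or a class) inversely to its sample
    count, normalized so the weights sum to one. Rare groups thus
    contribute more per sample to the total loss."""
    counts = Counter(group_labels)
    raw = {g: 1.0 / c for g, c in counts.items()}
    total = sum(raw.values())
    return {g: w / total for g, w in raw.items()}
```

For example, with three "sketch" samples and one "photo" sample, the underrepresented "photo" domain receives three times the per-sample weight of "sketch".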
arXiv Detail & Related papers (2021-08-31T03:17:54Z) - Interventional Domain Adaptation [81.0692660794765]
Domain adaptation (DA) aims to transfer discriminative features learned from the source domain to the target domain.
Standard domain-invariance learning suffers from spurious correlations and incorrectly transfers the source-specifics.
We create counterfactual features that distinguish the domain-specifics from domain-sharable part.
arXiv Detail & Related papers (2020-11-07T09:53:13Z) - Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z) - Few-shot Domain Adaptation by Causal Mechanism Transfer [107.08605582020866]
We study few-shot supervised domain adaptation (DA) for regression problems, where only a few labeled target domain data and many labeled source domain data are available.
Many of the current DA methods base their transfer assumptions on either parametrized distribution shift or apparent distribution similarities.
We propose mechanism transfer, a meta-distributional scenario in which a data generating mechanism is invariant among domains.
arXiv Detail & Related papers (2020-02-10T02:16:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.