A Unified Causal View of Domain Invariant Representation Learning
- URL: http://arxiv.org/abs/2208.06987v2
- Date: Tue, 16 Aug 2022 02:17:12 GMT
- Title: A Unified Causal View of Domain Invariant Representation Learning
- Authors: Zihao Wang and Victor Veitch
- Abstract summary: Machine learning methods can be unreliable when deployed in domains that differ from the domains on which they were trained.
This paper shows how the different methods relate to each other and clarifies the real-world circumstances under which each is expected to succeed.
- Score: 19.197022592928164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning methods can be unreliable when deployed in domains that
differ from the domains on which they were trained. To address this, we may
wish to learn representations of data that are domain-invariant in the sense
that we preserve data structure that is stable across domains, but throw out
spuriously-varying parts. There are many representation-learning approaches of
this type, including methods based on data augmentation, distributional
invariances, and risk invariance. Unfortunately, when faced with any particular
real-world domain shift, it is unclear which, if any, of these methods might be
expected to work. The purpose of this paper is to show how the different
methods relate to each other, and clarify the real-world circumstances under
which each is expected to succeed. The key tool is a new notion of domain shift
relying on the idea that causal relationships are invariant, but non-causal
relationships (e.g., due to confounding) may vary.
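
The paper itself is analytic rather than algorithmic, but one concrete member of the risk-invariance family it surveys is Invariant Risk Minimization. Below is a minimal PyTorch sketch of the IRMv1 penalty of Arjovsky et al. (2019), purely to illustrate what a risk-invariance objective looks like; the toy data, dimensions, and penalty weight are assumptions, not anything this paper specifies.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    """IRMv1 penalty: squared gradient of the per-domain risk with respect
    to a frozen dummy scale w = 1.0 multiplying the classifier output."""
    scale = torch.ones(1, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

# Toy multi-domain setup; dimensions, data, and the weight 1.0 are illustrative.
featurizer = torch.nn.Linear(10, 1)
optimizer = torch.optim.Adam(featurizer.parameters(), lr=1e-3)
domains = [(torch.randn(32, 10), torch.randint(0, 2, (32, 1)).float())
           for _ in range(3)]

for step in range(100):
    objective = torch.zeros(())
    for x, y in domains:  # sum risk + invariance penalty over training domains
        logits = featurizer(x)
        risk = F.binary_cross_entropy_with_logits(logits, y)
        objective = objective + risk + 1.0 * irm_penalty(logits, y)
    optimizer.zero_grad()
    objective.backward()
    optimizer.step()
```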
Related papers
- Cross-Domain Policy Adaptation by Capturing Representation Mismatch [53.087413751430255]
In reinforcement learning (RL), it is vital to learn effective policies that can be transferred to domains with different dynamics.
In this paper, we consider dynamics adaptation settings where a dynamics mismatch exists between the source domain and the target domain.
We perform representation learning only in the target domain and measure the representation deviations on transitions from the source domain.
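
A plausible minimal rendering of that idea: fit a latent dynamics model on target-domain transitions only, then use its prediction error on source transitions as a penalty on the source reward. All names, shapes, and the weight `beta` below are illustrative assumptions, not the paper's code.

```python
import torch

state_dim, action_dim, rep_dim, beta = 8, 2, 16, 0.1
encoder = torch.nn.Linear(state_dim, rep_dim)              # phi(s)
dynamics = torch.nn.Linear(rep_dim + action_dim, rep_dim)  # predicts phi(s')

def deviation(s, a, s_next):
    """Representation deviation of a transition under the target-trained model."""
    pred = dynamics(torch.cat([encoder(s), a], dim=-1))
    return ((pred - encoder(s_next)) ** 2).mean(dim=-1)

# 1) Train encoder and dynamics on TARGET transitions only (random stand-ins here;
#    a real implementation would also guard against representation collapse).
tgt_s, tgt_a, tgt_s2 = (torch.randn(64, state_dim), torch.randn(64, action_dim),
                        torch.randn(64, state_dim))
opt = torch.optim.Adam(list(encoder.parameters()) + list(dynamics.parameters()),
                       lr=1e-3)
for _ in range(200):
    loss = deviation(tgt_s, tgt_a, tgt_s2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# 2) Penalize SOURCE rewards by the measured deviation before policy updates.
src_s, src_a, src_s2 = (torch.randn(64, state_dim), torch.randn(64, action_dim),
                        torch.randn(64, state_dim))
src_reward = torch.randn(64)
adjusted_reward = src_reward - beta * deviation(src_s, src_a, src_s2).detach()
```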
arXiv Detail & Related papers (2024-05-24T09:06:12Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method outperforms the current state of the art in both real-to-real and synthetic-to-real scenarios.
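
The loss structure can be sketched as follows: a point-feature backbone shared by both domains, a segmentation head supervised on source labels only, and a self-supervised implicit-surface head applied to source and target points alike. The architecture, the surface pseudo-targets, and `lam` are illustrative assumptions, not SALUDA's actual implementation.

```python
import torch
import torch.nn.functional as F

backbone = torch.nn.Linear(3, 64)    # per-point features from xyz (assumed)
seg_head = torch.nn.Linear(64, 10)   # assumed 10 semantic classes
surf_head = torch.nn.Linear(64, 1)   # implicit surface value per point
lam = 0.5

def training_loss(src_xyz, src_labels, src_surf, tgt_xyz, tgt_surf):
    f_src, f_tgt = backbone(src_xyz), backbone(tgt_xyz)
    seg_loss = F.cross_entropy(seg_head(f_src), src_labels)
    # The same auxiliary objective on both domains pushes them into a shared
    # latent space, so the model must absorb cross-domain discrepancies.
    surf_loss = (F.mse_loss(surf_head(f_src), src_surf) +
                 F.mse_loss(surf_head(f_tgt), tgt_surf))
    return seg_loss + lam * surf_loss

loss = training_loss(torch.randn(100, 3), torch.randint(0, 10, (100,)),
                     torch.randn(100, 1), torch.randn(120, 3),
                     torch.randn(120, 1))
```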
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- Multi-Domain Long-Tailed Learning by Augmenting Disentangled Representations [80.76164484820818]
There is an inescapable long-tailed class-imbalance issue in many real-world classification problems.
We study this multi-domain long-tailed learning problem and aim to produce a model that generalizes well across all classes and domains.
Built on its proposed selective balanced sampling strategy, TALLY achieves this by mixing the semantic representation of one example with the domain-associated nuisances of another.
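
The mixing step can be pictured as below; the disentangled feature split, the decoder, and the dimensions are illustrative assumptions rather than TALLY's actual architecture.

```python
import torch

# Combine example a's semantic features with example b's domain nuisances to
# synthesize new training points (labels follow the semantic source, a).
sem_dim, nui_dim = 32, 32
decoder = torch.nn.Linear(sem_dim + nui_dim, 64)  # maps mixed features onward

def mix(sem_a, nui_b):
    """Pair one example's semantics with another's domain nuisances."""
    return decoder(torch.cat([sem_a, nui_b], dim=-1))

# Example: cross two mini-batches drawn (via balanced sampling) from
# different domains.
sem_a, nui_a = torch.randn(16, sem_dim), torch.randn(16, nui_dim)
sem_b, nui_b = torch.randn(16, sem_dim), torch.randn(16, nui_dim)
augmented = mix(sem_a, nui_b)
```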
arXiv Detail & Related papers (2022-10-25T21:54:26Z)
- Invariant and Transportable Representations for Anti-Causal Domain Shifts [18.530198688722752]
We show how to leverage the shared causal structure of the domains to learn a representation that both admits an invariant predictor and that allows fast adaptation in new domains.
Experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed learning algorithm.
arXiv Detail & Related papers (2022-07-04T17:36:49Z)
- FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach in which a shared server model learns by aggregating parameter updates computed locally on the training data of spatially distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to fields such as healthcare, computer vision, and the Internet of Things (IoT).
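
The "weighted geometric mean" suggests an aggregation rule along these lines: average per-silo gradient magnitudes in log space and keep only coordinates on which all silos agree in sign, dampening directions that are inconsistent across silos. This is a sketch of that general idea in the spirit of the invariant-learning-consistency literature, not FedILC's exact formulation; the weights and the sign rule are assumptions.

```python
import torch

def weighted_geometric_mean(grads, weights, eps=1e-12):
    """Aggregate per-silo gradients with a weighted geometric mean of
    magnitudes, zeroing coordinates where silos disagree in sign."""
    g = torch.stack(grads)                                 # (num_silos, dim)
    w = torch.tensor(weights, dtype=g.dtype).view(-1, 1)
    w = w / w.sum()
    log_mag = (w * (g.abs() + eps).log()).sum(dim=0)       # log-space average
    same_sign = (torch.sign(g) == torch.sign(g[0])).all(dim=0)
    return torch.where(same_sign, torch.sign(g[0]) * log_mag.exp(),
                       torch.zeros_like(log_mag))

# Three silos, toy 4-parameter model; gradients and weights are illustrative.
silo_grads = [torch.tensor([0.2, -0.1, 0.30, 0.05]),
              torch.tensor([0.4,  0.1, 0.20, 0.03]),
              torch.tensor([0.1, -0.2, 0.25, 0.04])]
update = weighted_geometric_mean(silo_grads, weights=[1.0, 1.0, 1.0])
# Coordinate 1 is zeroed: the silos disagree on its sign.
```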
arXiv Detail & Related papers (2022-05-19T03:32:03Z)
- Against Adversarial Learning: Naturally Distinguish Known and Unknown in Open Set Domain Adaptation [17.819949636876018]
Open set domain adaptation refers to the scenario in which the target domain contains categories that do not exist in the source domain.
We propose an "against adversarial learning" method that naturally distinguishes unknown target data from known data.
Experimental results show that the proposed method significantly improves performance compared with several state-of-the-art methods.
arXiv Detail & Related papers (2020-11-04T10:30:43Z)
- Respecting Domain Relations: Hypothesis Invariance for Domain Generalization [30.14312814723027]
In domain generalization, multiple labeled non-independent and non-identically distributed source domains are available during training.
Currently, learning so-called domain invariant representations (DIRs) is the prevalent approach to domain generalization.
arXiv Detail & Related papers (2020-10-15T08:26:08Z)
- Domain Agnostic Learning for Unbiased Authentication [47.85174796247398]
We propose a domain-agnostic method that eliminates domain differences without domain labels.
Latent domains are discovered by learning the heterogeneous predictive relationships between inputs and outputs.
We extend our method to a meta-learning framework to pursue more thorough elimination of domain differences.
arXiv Detail & Related papers (2020-10-11T14:05:16Z)
- Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches show that semantic-level alignment helps tackle the domain shift issue.
We propose to improve semantic-level alignment with different strategies for stuff regions and for things (instances).
We further show that our method helps ease this issue by minimizing the distance between the most similar stuff and instance features across the source and target domains.
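
The "most similar features" criterion can be read as a nearest-neighbor alignment term, sketched below; the feature shapes and the loss form are assumptions, not the paper's exact method.

```python
import torch

def nearest_feature_alignment(src_feats, tgt_feats):
    """Mean distance from each target feature to its closest source feature,
    so only the most similar cross-domain pairs are pulled together."""
    d = torch.cdist(tgt_feats, src_feats)   # (n_tgt, n_src) pairwise L2
    return d.min(dim=1).values.mean()

# Separate alignment terms for stuff regions and thing instances
# (128-d region features are an illustrative assumption).
src_stuff, tgt_stuff = torch.randn(20, 128), torch.randn(15, 128)    # e.g. road, sky
src_things, tgt_things = torch.randn(30, 128), torch.randn(25, 128)  # e.g. cars, people
align_loss = (nearest_feature_alignment(src_stuff, tgt_stuff) +
              nearest_feature_alignment(src_things, tgt_things))
```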
arXiv Detail & Related papers (2020-03-18T04:43:25Z)