Representation via Representations: Domain Generalization via
Adversarially Learned Invariant Representations
- URL: http://arxiv.org/abs/2006.11478v1
- Date: Sat, 20 Jun 2020 02:35:03 GMT
- Title: Representation via Representations: Domain Generalization via
Adversarially Learned Invariant Representations
- Authors: Zhun Deng, Frances Ding, Cynthia Dwork, Rachel Hong, Giovanni
Parmigiani, Prasad Patil, Pragya Sur
- Abstract summary: We examine adversarial censoring techniques for learning invariant representations from multiple "studies" (or domains).
In many contexts, such as medical forecasting, domain generalization from studies in populous areas to geographically remote populations provides fairness of a different flavor, not anticipated in previous work on algorithmic fairness.
- Score: 14.751829773340537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the power of censoring techniques, first developed for
learning {\em fair representations}, to address domain generalization. We
examine {\em adversarial} censoring techniques for learning invariant
representations from multiple "studies" (or domains), where each study is drawn
according to a distribution on domains. The mapping is used at test time to
classify instances from a new domain. In many contexts, such as medical
forecasting, domain generalization from studies in populous areas (where data
are plentiful), to geographically remote populations (for which no training
data exist) provides fairness of a different flavor, not anticipated in
previous work on algorithmic fairness.
We study an adversarial loss function for $k$ domains and precisely
characterize its limiting behavior as $k$ grows, formalizing and proving the
intuition, backed by experiments, that observing data from a larger number of
domains helps. The limiting results are accompanied by non-asymptotic
learning-theoretic bounds. Furthermore, we obtain sufficient conditions for
good worst-case prediction performance of our algorithm on previously unseen
domains. Finally, we decompose our mappings into two components and provide a
complete characterization of invariance in terms of this decomposition. To our
knowledge, our results provide the first formal guarantees of these kinds for
adversarial invariant domain generalization.
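The adversarial censoring idea in the abstract can be illustrated with a minimal sketch: an encoder is trained to fit the labels while a domain discriminator tries to recover which of the $k$ domains each example came from, and the encoder's objective penalizes the discriminator's success. The function names and the simple min-max form below are illustrative assumptions, not the paper's exact loss.

```python
import math

def cross_entropy(probs, label):
    """Negative log-likelihood of the true label under predicted probabilities."""
    return -math.log(max(probs[label], 1e-12))

def adversarial_objective(label_probs, labels, domain_probs, domains, lam=1.0):
    """Encoder objective (illustrative): fit the labels while fooling the
    domain discriminator.

    label_probs / domain_probs: per-example probability vectors produced by
    the label predictor and the domain discriminator, respectively.
    """
    # Task loss: average cross-entropy of the label predictor.
    task = sum(cross_entropy(p, y) for p, y in zip(label_probs, labels)) / len(labels)
    # Adversary loss: average cross-entropy of the domain discriminator.
    adv = sum(cross_entropy(p, d) for p, d in zip(domain_probs, domains)) / len(domains)
    # The encoder minimizes the task loss while maximizing the adversary's
    # loss, pushing the representation toward domain invariance.
    return task - lam * adv

# Two toy settings with identical task loss, differing only in how much
# domain information leaks through the representation (k = 2 domains).
labels = [0, 1]
label_probs = [[0.8, 0.2], [0.3, 0.7]]
domains = [0, 1]
invariant = [[0.5, 0.5], [0.5, 0.5]]   # discriminator at chance: invariant
leaky = [[0.9, 0.1], [0.1, 0.9]]       # discriminator confident: domain leaks

obj_invariant = adversarial_objective(label_probs, labels, invariant, domains)
obj_leaky = adversarial_objective(label_probs, labels, leaky, domains)
```

Under this objective, the representation that keeps the discriminator at chance attains the lower (better) encoder objective, which is the intuition the paper formalizes in the limit of growing $k$.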
Related papers
- Causally Inspired Regularization Enables Domain General Representations [14.036422506623383]
Given a causal graph representing the data-generating process shared across different domains/distributions, enforcing sufficient graph-implied conditional independencies can identify domain-general (non-spurious) feature representations.
We propose a novel framework with regularizations, which we demonstrate are sufficient for identifying domain-general feature representations without a priori knowledge (or proxies) of the spurious features.
Our proposed method is effective for both (semi) synthetic and real-world data, outperforming other state-of-the-art methods in average and worst-domain transfer accuracy.
arXiv Detail & Related papers (2024-04-25T01:33:55Z)
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations.
A series of experiments demonstrates the strong performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z)
- Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D$3$G to learn domain-specific models.
Our results show that D$3$G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z)
- Failure Modes of Domain Generalization Algorithms [29.772370301145543]
We propose an evaluation framework for domain generalization algorithms.
We show that the largest contributor to the generalization error varies across methods, datasets, regularization strengths and even training lengths.
arXiv Detail & Related papers (2021-11-26T20:04:19Z)
- Discriminative Domain-Invariant Adversarial Network for Deep Domain Generalization [33.84004077585957]
We propose a discriminative domain-invariant adversarial network (DDIAN) for domain generalization.
DDIAN achieves better prediction on unseen target data during training compared to state-of-the-art domain generalization approaches.
arXiv Detail & Related papers (2021-08-20T04:24:12Z)
- Domain-Class Correlation Decomposition for Generalizable Person Re-Identification [34.813965300584776]
In person re-identification, the domain and class are correlated.
We show that domain adversarial learning loses certain class-relevant information due to this domain-class correlation.
Our model outperforms the state-of-the-art methods on the large-scale domain generalization Re-ID benchmark.
arXiv Detail & Related papers (2021-06-29T09:45:03Z)
- A Bit More Bayesian: Domain-Invariant Learning with Uncertainty [111.22588110362705]
Domain generalization is challenging due to the domain shift and the uncertainty caused by the inaccessibility of target domain data.
In this paper, we address both challenges with a probabilistic framework based on variational Bayesian inference.
We derive domain-invariant representations and classifiers, which are jointly established in a two-layer Bayesian neural network.
arXiv Detail & Related papers (2021-05-09T21:33:27Z)
- Heuristic Domain Adaptation [105.59792285047536]
Heuristic Domain Adaptation Network (HDAN) explicitly learns domain-invariant and domain-specific representations.
HDAN exceeds the state of the art on unsupervised DA, multi-source DA, and semi-supervised DA.
arXiv Detail & Related papers (2020-11-30T04:21:35Z)
- Batch Normalization Embeddings for Deep Domain Generalization [50.51405390150066]
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains.
We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks.
arXiv Detail & Related papers (2020-11-25T12:02:57Z)
- Learning to Learn with Variational Information Bottleneck for Domain Generalization [128.90691697063616]
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift.
We introduce a probabilistic meta-learning model for domain generalization, in which parameters shared across domains are modeled as distributions.
To deal with domain shift, we learn domain-invariant representations via a proposed meta variational information bottleneck principle, which we call MetaVIB.
arXiv Detail & Related papers (2020-07-15T12:05:52Z)
- In Search of Lost Domain Generalization [25.43757332883202]
We implement DomainBed, a testbed for domain generalization.
We conduct extensive experiments using DomainBed and find that, when carefully implemented, empirical risk minimization shows state-of-the-art performance.
arXiv Detail & Related papers (2020-07-02T23:08:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.