Adaptive Methods for Aggregated Domain Generalization
- URL: http://arxiv.org/abs/2112.04766v1
- Date: Thu, 9 Dec 2021 08:57:01 GMT
- Title: Adaptive Methods for Aggregated Domain Generalization
- Authors: Xavier Thomas, Dhruv Mahajan, Alex Pentland, Abhimanyu Dubey
- Abstract summary: In many settings, privacy concerns prohibit obtaining domain labels for the training data samples.
We propose a domain-adaptive approach to this problem, which operates in two steps.
Our approach achieves state-of-the-art performance on a variety of domain generalization benchmarks without using domain labels.
- Score: 26.215904177457997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization involves learning a classifier from a heterogeneous
collection of training sources such that it generalizes to data drawn from
similar unknown target domains, with applications in large-scale learning and
personalized inference. In many settings, privacy concerns prohibit obtaining
domain labels for the training data samples, and only an aggregated collection
of training points is available. Existing approaches that utilize
domain labels to create domain-invariant feature representations are
inapplicable in this setting, requiring alternative approaches to learn
generalizable classifiers. In this paper, we propose a domain-adaptive approach
to this problem, which operates in two steps: (a) we cluster training data
within a carefully chosen feature space to create pseudo-domains, and (b) using
these pseudo-domains we learn a domain-adaptive classifier that makes
predictions using information about both the input and the pseudo-domain it
belongs to. Our approach achieves state-of-the-art performance on a variety of
domain generalization benchmarks without using any domain labels.
Furthermore, we provide novel theoretical guarantees on domain generalization
using cluster information. Our approach is amenable to ensemble-based methods
and provides substantial gains even on large-scale benchmark datasets. The code
can be found at: https://github.com/xavierohan/AdaClust_DomainBed
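The two-step recipe from the abstract can be sketched with off-the-shelf tools. The following is a minimal illustrative stand-in, not the authors' implementation (AdaClust clusters deep features within the DomainBed framework); k-means for step (a) and a logistic-regression classifier with a one-hot pseudo-domain feature for step (b) are assumptions made for this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy aggregated training set: samples drawn from two latent "domains",
# but with no domain labels available (the paper's privacy setting).
X = np.vstack([rng.normal(0.0, 1.0, (100, 8)),
               rng.normal(3.0, 1.0, (100, 8))])
y = rng.integers(0, 2, 200)  # class labels, independent of domain here

# Step (a): cluster in feature space to create pseudo-domains.
n_pseudo = 2
km = KMeans(n_clusters=n_pseudo, n_init=10, random_state=0).fit(X)
pseudo_domain = km.labels_

# Step (b): a domain-adaptive classifier that sees both the input and
# its pseudo-domain (here: one-hot pseudo-domain appended to features).
X_aug = np.hstack([X, np.eye(n_pseudo)[pseudo_domain]])
clf = LogisticRegression(max_iter=1000).fit(X_aug, y)

# At test time, each unseen sample is assigned its nearest pseudo-domain
# before prediction.
X_test = rng.normal(0.0, 1.0, (5, 8))
X_test_aug = np.hstack([X_test, np.eye(n_pseudo)[km.predict(X_test)]])
pred = clf.predict(X_test_aug)
```

In the paper the feature space for clustering is "carefully chosen" (learned deep features); the Gaussian toy features above merely make the pseudo-domain structure recoverable by clustering.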
Related papers
- Domain Generalization via Selective Consistency Regularization for Time Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation [85.6961770631173]
In semi-supervised domain adaptation, a few labeled samples per class in the target domain guide features of the remaining target samples to aggregate around them.
We propose a novel approach called Cross-domain Adaptive Clustering to address this problem.
arXiv Detail & Related papers (2021-04-19T16:07:32Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Adaptive Methods for Real-World Domain Generalization [32.030688845421594]
In our work, we investigate whether it is possible to leverage domain information from unseen test samples themselves.
We propose a domain-adaptive approach consisting of two steps: a) we first learn a discriminative domain embedding from unsupervised training examples, and b) use this domain embedding as supplementary information to build a domain-adaptive model.
Our approach achieves state-of-the-art performance on various domain generalization benchmarks.
arXiv Detail & Related papers (2021-03-29T17:44:35Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Batch Normalization Embeddings for Deep Domain Generalization [50.51405390150066]
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains.
We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks.
arXiv Detail & Related papers (2020-11-25T12:02:57Z)
- Universal Domain Adaptation through Self-Supervision [75.04598763659969]
Unsupervised domain adaptation methods assume that all source categories are present in the target domain.
We propose Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE) to handle arbitrary category shift.
We show through extensive experiments that DANCE outperforms baselines across open-set, open-partial and partial domain adaptation settings.
arXiv Detail & Related papers (2020-02-19T01:26:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.