Domain-Generalizable Multiple-Domain Clustering
- URL: http://arxiv.org/abs/2301.13530v2
- Date: Wed, 31 Jan 2024 17:29:26 GMT
- Title: Domain-Generalizable Multiple-Domain Clustering
- Authors: Amit Rozner, Barak Battash, Lior Wolf, Ofir Lindenbaum
- Abstract summary: This work generalizes the problem of unsupervised domain generalization to the case in which no labeled samples are available (completely unsupervised).
We are given unlabeled samples from multiple source domains, and we aim to learn a shared predictor that assigns examples to semantically related clusters.
Evaluation is done by predicting cluster assignments in previously unseen domains.
- Score: 55.295300263404265
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work generalizes the problem of unsupervised domain generalization to the case in which no labeled samples are available (completely unsupervised). We are given unlabeled samples from multiple source domains, and we aim to learn a shared predictor that assigns examples to semantically related clusters. Evaluation is done by predicting cluster assignments in previously unseen domains. Towards this goal, we propose a two-stage training framework: (1) self-supervised pre-training for extracting domain-invariant semantic features; (2) multi-head cluster prediction with pseudo-labels, which rely on both the feature space and the cluster-head predictions, further leveraging a novel prediction-based label smoothing scheme. We demonstrate empirically that our model is more accurate than baselines that require fine-tuning with samples from the target domain or some level of supervision. Our code is available at https://github.com/AmitRozner/domain-generalizable-multiple-domain-clustering.
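As a rough illustration of stage (2), the sketch below shows what multi-head cluster prediction with prediction-based label smoothing could look like in PyTorch. Everything here is an assumption made for illustration (the function names, the mixing weight eps, and the exact smoothing form), not the authors' implementation, which is available in the repository above.

    # Minimal PyTorch sketch of stage (2): multi-head cluster prediction with
    # pseudo-labels and a prediction-based label smoothing scheme. Names,
    # shapes, and the smoothing form are illustrative assumptions only.
    import torch
    import torch.nn.functional as F

    def smoothed_targets(logits, pseudo_labels, eps=0.1):
        # Instead of smoothing toward a uniform distribution, mix the one-hot
        # pseudo-label with the model's own (detached) predicted distribution.
        one_hot = F.one_hot(pseudo_labels, logits.size(1)).float()
        pred = logits.softmax(dim=1).detach()
        return (1.0 - eps) * one_hot + eps * pred

    def multi_head_loss(features, heads, pseudo_labels):
        # Average the smoothed cross-entropy over all clustering heads.
        losses = []
        for head in heads:
            logits = head(features)                       # (N, num_clusters)
            targets = smoothed_targets(logits, pseudo_labels)
            losses.append((-(targets * logits.log_softmax(dim=1)).sum(dim=1)).mean())
        return torch.stack(losses).mean()

Here, heads would be, e.g., a torch.nn.ModuleList of linear cluster heads on top of the stage-(1) features, and pseudo_labels the integer cluster assignments derived from the feature space.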
Related papers
- Adaptive Methods for Aggregated Domain Generalization [26.215904177457997] (2021-12-09)
In many settings, privacy concerns prohibit obtaining domain labels for the training data samples.
We propose a domain-adaptive approach to this problem, which operates in two steps.
Our approach achieves state-of-the-art performance on a variety of domain generalization benchmarks without using domain labels.
- Multi-Level Features Contrastive Networks for Unsupervised Domain Adaptation [6.934905764152813] (2021-09-14)
Unsupervised domain adaptation aims to train a model from the labeled source domain to make predictions on the unlabeled target domain.
Existing methods tend to align the two domains directly at the domain level, or perform class-level domain alignment based on deep features.
In this paper, we build on the class-level alignment approach.
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984] (2021-06-10)
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
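An InfoNCE-style loss is the standard building block behind this kind of contrastive alignment. The generic sketch below illustrates it; the paper's specific contribution, how positive pairs are formed across domains, is not reproduced here.

    # Generic InfoNCE contrastive loss: row i of `positive` is the positive
    # pair for row i of `anchor`; all other rows act as negatives.
    import torch
    import torch.nn.functional as F

    def info_nce(anchor, positive, temperature=0.07):
        anchor = F.normalize(anchor, dim=1)               # (N, d)
        positive = F.normalize(positive, dim=1)           # (N, d)
        logits = anchor @ positive.t() / temperature      # (N, N) similarities
        targets = torch.arange(anchor.size(0), device=anchor.device)
        return F.cross_entropy(logits, targets)           # diagonal = positives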
- Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation [85.6961770631173] (2021-04-19)
In semi-supervised domain adaptation, a few labeled samples per class in the target domain guide features of the remaining target samples to aggregate around them.
We propose a novel approach called Cross-domain Adaptive Clustering to address this problem.
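One simple way labeled target samples can guide the remaining ones, as described above, is to pull each unlabeled feature toward class prototypes built from the labeled samples. The sketch below illustrates that idea only; the paper's actual adaptive clustering objective differs.

    # Illustrative prototype-attraction loss (not the paper's exact objective).
    # Assumes every class has at least one labeled target sample.
    import torch
    import torch.nn.functional as F

    def prototype_pull_loss(unlabeled, labeled, labels, num_classes, tau=0.1):
        protos = torch.stack([labeled[labels == c].mean(dim=0)
                              for c in range(num_classes)])        # (C, d)
        sims = F.normalize(unlabeled, dim=1) @ F.normalize(protos, dim=1).t()
        nearest = sims.argmax(dim=1)           # hard assignment to a prototype
        return F.cross_entropy(sims / tau, nearest)  # sharpen toward it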
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation [74.3349233035632] (2020-12-15)
Existing techniques for adapting semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or within each estimated category.
We introduce a learnable clustering module and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts adaptation performance in semantic segmentation, outperforming the state of the art in various domain adaptation settings.
- Inductive Unsupervised Domain Adaptation for Few-Shot Classification via Clustering [16.39667909141402] (2020-06-23)
Few-shot classification tends to struggle when it needs to adapt to diverse domains.
We introduce a framework, DaFeC, to improve Domain adaptation performance for Few-shot classification via Clustering.
Our approach outperforms previous work with absolute gains in classification accuracy of 4.95%, 9.55%, 3.99%, and 11.62% across four few-shot settings.
- Exploring Category-Agnostic Clusters for Open-Set Domain Adaptation [138.29273453811945] (2020-06-11)
We present Self-Ensembling with Category-agnostic Clusters (SE-CC) -- a novel architecture that steers domain adaptation with category-agnostic clusters in the target domain.
Clustering is performed over all the unlabeled target samples to obtain category-agnostic clusters, which reveal the underlying data-space structure peculiar to the target domain.
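As a stand-in for the clustering step just described, plain k-means over the unlabeled target features already produces category-agnostic clusters; the paper's actual clustering routine may differ, so treat this only as a sketch.

    # Plain k-means over unlabeled target features as a stand-in for the
    # category-agnostic clustering step (illustrative only).
    import torch

    def kmeans(feats, k, iters=20):
        centers = feats[torch.randperm(feats.size(0))[:k]].clone()  # random init
        for _ in range(iters):
            assign = torch.cdist(feats, centers).argmin(dim=1)      # (N,)
            for j in range(k):
                mask = assign == j
                if mask.any():                    # keep empty clusters in place
                    centers[j] = feats[mask].mean(dim=0)
        return assign, centers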
- Universal Domain Adaptation through Self Supervision [75.04598763659969] (2020-02-19)
Unsupervised domain adaptation methods assume that all source categories are present in the target domain.
We propose Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE) to handle arbitrary category shift.
We show through extensive experiments that DANCE outperforms baselines across open-set, open-partial and partial domain adaptation settings.
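The entropy optimization in DANCE can be illustrated with an entropy-separation criterion: push each sample's prediction entropy away from a threshold so that confident samples sharpen (likely known classes) while uncertain ones stay near uniform (likely unknown). The threshold rho, the margin, and the exact form below are assumptions for illustration.

    # Sketch of an entropy-separation criterion in the spirit of DANCE
    # (threshold rho, margin, and exact form are illustrative assumptions).
    import torch

    def entropy_separation(logits, rho=0.5, margin=0.1):
        p = logits.softmax(dim=1)
        ent = -(p * (p + 1e-8).log()).sum(dim=1)   # per-sample entropy, (N,)
        gap = (ent - rho).abs()
        active = (gap > margin).float()            # ignore the ambiguous band
        return -(gap * active).mean()              # maximize |entropy - rho|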
This list is automatically generated from the titles and abstracts of the papers on this site.