Universal Domain Adaptation through Self Supervision
- URL: http://arxiv.org/abs/2002.07953v3
- Date: Tue, 6 Oct 2020 03:30:01 GMT
- Title: Universal Domain Adaptation through Self Supervision
- Authors: Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Kate Saenko
- Abstract summary: Unsupervised domain adaptation methods assume that all source categories are present in the target domain.
We propose Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE) to handle arbitrary category shift.
We show through extensive experiments that DANCE outperforms baselines across open-set, open-partial and partial domain adaptation settings.
- Score: 75.04598763659969
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation methods traditionally assume that all source
categories are present in the target domain. In practice, little may be known
about the category overlap between the two domains. While some methods address
target settings with either partial or open-set categories, they assume that
the particular setting is known a priori. We propose a more universally
applicable domain adaptation framework that can handle arbitrary category
shift, called Domain Adaptative Neighborhood Clustering via Entropy
optimization (DANCE). DANCE combines two novel ideas: First, as we cannot fully
rely on source categories to learn features discriminative for the target, we
propose a novel neighborhood clustering technique to learn the structure of the
target domain in a self-supervised way. Second, we use entropy-based feature
alignment and rejection to align target features with the source, or reject
them as unknown categories based on their entropy. We show through extensive
experiments that DANCE outperforms baselines across open-set, open-partial and
partial domain adaptation settings. Implementation is available at
https://github.com/VisionLearningGroup/DANCE.
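The abstract describes two training signals: a self-supervised neighborhood-clustering loss on the unlabeled target data, and an entropy-based criterion that either aligns a target sample with the source classes or rejects it as "unknown". Below is a minimal PyTorch sketch of how such losses might look; the memory bank, temperature, threshold, and margin are illustrative assumptions, not the paper's exact formulation (see the linked implementation for the authors' version).

```python
# Hedged sketch of the two ideas described in the abstract.
# All hyperparameters and the memory-bank design are illustrative assumptions.
import torch
import torch.nn.functional as F

def neighborhood_clustering_loss(target_feats, memory_bank, temperature=0.05):
    """Self-supervised clustering of target features.

    Each target feature is compared to a bank of target features; minimizing
    the entropy of the resulting similarity distribution encourages each
    sample to move toward its nearby neighbors in feature space.
    """
    feats = F.normalize(target_feats, dim=1)
    bank = F.normalize(memory_bank, dim=1)
    sims = feats @ bank.t() / temperature                     # (B, N) similarity logits
    probs = F.softmax(sims, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)
    return entropy.mean()

def entropy_alignment_rejection_loss(target_logits, threshold, margin=0.5):
    """Entropy-based alignment vs. rejection of target samples.

    Confident (low-entropy) samples are pushed toward the source classes by
    minimizing entropy; uncertain (high-entropy) samples are treated as
    candidate "unknown" categories by maximizing entropy. Samples inside the
    margin band around the threshold are left untouched.
    """
    probs = F.softmax(target_logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)
    gap = entropy - threshold
    loss = torch.zeros_like(entropy)
    loss[gap < -margin] = entropy[gap < -margin]    # align: minimize entropy
    loss[gap > margin] = -entropy[gap > margin]     # reject: maximize entropy
    return loss.mean()
```

In practice these terms would be combined with a standard cross-entropy loss on the labeled source data; the exact weighting and schedule are given in the paper and the repository above.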
Related papers
- Discriminative Radial Domain Adaptation [62.22362756424971]
We propose Discriminative Radial Domain Adaptation (DRDR) which bridges source and target domains via a shared radial structure.
We show that transferring such an inherently discriminative structure enhances feature transferability and discriminability simultaneously.
Our method is shown to consistently outperform state-of-the-art approaches on varied tasks.
arXiv Detail & Related papers (2023-01-01T10:56:31Z)
- Making the Best of Both Worlds: A Domain-Oriented Transformer for Unsupervised Domain Adaptation [31.150256154504696]
Unsupervised Domain Adaptation (UDA) has propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains.
Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction.
We propose to simultaneously conduct feature alignment in two individual spaces focusing on different domains, and create for each space a domain-oriented classifier.
arXiv Detail & Related papers (2022-08-02T01:38:37Z)
- Adaptive Methods for Aggregated Domain Generalization [26.215904177457997]
In many settings, privacy concerns prohibit obtaining domain labels for the training data samples.
We propose a domain-adaptive approach to this problem, which operates in two steps.
Our approach achieves state-of-the-art performance on a variety of domain generalization benchmarks without using domain labels.
arXiv Detail & Related papers (2021-12-09T08:57:01Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets (see the sketch after this list).
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Vicinal and categorical domain adaptation [43.707303372718336]
We propose novel adversarial training losses at both the domain and category levels.
We introduce the concept of vicinal domains, whose instances are produced by convex combinations of pairs of instances drawn from the two domains.
arXiv Detail & Related papers (2021-03-05T03:47:24Z)
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks between source and target domains with deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or within each estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts adaptation performance in semantic segmentation, outperforming state-of-the-art methods in various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- Exploring Category-Agnostic Clusters for Open-Set Domain Adaptation [138.29273453811945]
We present Self-Ensembling with Category-agnostic Clusters (SE-CC) -- a novel architecture that steers domain adaptation with category-agnostic clusters in the target domain.
Clustering is performed over all unlabeled target samples to obtain the category-agnostic clusters, which reveal the underlying data-space structure peculiar to the target domain.
arXiv Detail & Related papers (2020-06-11T16:19:02Z)
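Two of the related papers above ("Cross-domain Contrastive Learning for Unsupervised Domain Adaptation" and "Instance Level Affinity-Based Transfer") align domains with contrastive losses over cross-domain sample pairs. The following is a hedged sketch of that general idea only; the pseudo-labeling of target samples and the temperature are illustrative assumptions, not any specific paper's recipe.

```python
# Hedged sketch of a cross-domain contrastive alignment term.
# Positive pairs are formed by matching target pseudo-labels to source labels;
# this pairing rule is an illustrative assumption.
import torch
import torch.nn.functional as F

def cross_domain_contrastive_loss(src_feats, src_labels,
                                  tgt_feats, tgt_pseudo_labels,
                                  temperature=0.1):
    """Pull each target feature toward source features sharing its pseudo-label
    and push it away from source features of other classes (InfoNCE-style)."""
    src = F.normalize(src_feats, dim=1)
    tgt = F.normalize(tgt_feats, dim=1)
    logits = tgt @ src.t() / temperature                               # (Bt, Bs)
    log_probs = F.log_softmax(logits, dim=1)
    pos_mask = (tgt_pseudo_labels.unsqueeze(1) == src_labels.unsqueeze(0)).float()
    pos_per_row = pos_mask.sum(dim=1)
    valid = pos_per_row > 0                    # skip target samples with no positives
    loss = -(log_probs * pos_mask).sum(dim=1)[valid] / pos_per_row[valid]
    return loss.mean() if valid.any() else logits.new_zeros(())
```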
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.