Your Classifier can Secretly Suffice Multi-Source Domain Adaptation
- URL: http://arxiv.org/abs/2103.11169v1
- Date: Sat, 20 Mar 2021 12:44:13 GMT
- Title: Your Classifier can Secretly Suffice Multi-Source Domain Adaptation
- Authors: Naveen Venkat, Jogendra Nath Kundu, Durgesh Kumar Singh, Ambareesh
Revanur, R. Venkatesh Babu
- Abstract summary: Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective on MSDA wherein deep models are observed to implicitly align the domains under label supervision.
- Score: 72.47706604261992
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Multi-Source Domain Adaptation (MSDA) deals with the transfer of task
knowledge from multiple labeled source domains to an unlabeled target domain,
under a domain-shift. Existing methods aim to minimize this domain-shift using
auxiliary distribution alignment objectives. In this work, we present a
different perspective on MSDA wherein deep models are observed to implicitly
align the domains under label supervision. Thus, we aim to utilize implicit
alignment without additional training objectives to perform adaptation. To this
end, we use pseudo-labeled target samples and enforce a classifier agreement on
the pseudo-labels, a process called Self-supervised Implicit Alignment
(SImpAl). We find that SImpAl readily works even under category-shift among the
source domains. Further, we propose classifier agreement as a cue to determine
the training convergence, resulting in a simple training algorithm. We provide
a thorough evaluation of our approach on five benchmarks, along with detailed
insights into each component of our approach.
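To make the mechanism concrete, below is a minimal PyTorch sketch of the classifier-agreement idea: a shared backbone with one classifier head per source domain, target samples pseudo-labeled only when every head agrees, and the agreement rate reused as a convergence cue. This is an illustrative sketch, not the authors' released code; all module names and hyperparameters are assumptions.
```python
# Minimal sketch of agreement-based pseudo-labeling (SImpAl-style).
# Assumed setup: a shared backbone with one classifier head per source
# domain; names and hyperparameters are illustrative, not the authors' code.
import torch
import torch.nn as nn

class MultiHeadModel(nn.Module):
    def __init__(self, in_dim, feat_dim, num_classes, num_sources):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, num_classes) for _ in range(num_sources))

    def forward(self, x):
        z = self.backbone(x)
        return [head(z) for head in self.heads]  # one logit tensor per head

@torch.no_grad()
def agreement_pseudo_labels(model, x_tgt):
    """Keep target samples on which all heads predict the same class.

    The mean of `mask` (the agreement rate) can double as the convergence
    cue described in the abstract: stop training once it saturates.
    """
    preds = torch.stack([l.argmax(dim=1) for l in model(x_tgt)])  # [H, B]
    mask = (preds == preds[0]).all(dim=0)                         # [B]
    return mask, preds[0]

# Toy usage: classifier agreement is then *enforced* by training every
# head on the shared pseudo-labels of the agreeing samples.
model = MultiHeadModel(in_dim=32, feat_dim=64, num_classes=5, num_sources=3)
ce = nn.CrossEntropyLoss()
x_tgt = torch.randn(16, 32)
mask, y_pseudo = agreement_pseudo_labels(model, x_tgt)
if mask.any():
    logits = model(x_tgt[mask])
    loss_tgt = sum(ce(l, y_pseudo[mask]) for l in logits) / len(logits)
print(f"agreement rate: {mask.float().mean():.2f}")
```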
Related papers
- Domain Adaptation Using Pseudo Labels [16.79672078512152]
In the absence of labeled target data, unsupervised domain adaptation approaches seek to align the marginal distributions of the source and target domains.
We deploy a pretrained network to determine accurate labels for the target domain using a multi-stage pseudo-label refinement procedure.
Our results on multiple datasets demonstrate the effectiveness of our simple procedure in comparison with complex state-of-the-art techniques.
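A rough sketch of such a multi-stage refinement loop, assuming a simple confidence-threshold schedule (the paper's actual refinement stages may differ):
```python
# Sketch of a multi-stage pseudo-label refinement loop (illustrative, not
# the paper's exact procedure): keep only confident predictions from a
# pretrained network, fine-tune on them, then relax the threshold and repeat.
import torch
import torch.nn as nn

def refine_pseudo_labels(model, x_tgt, thresholds=(0.95, 0.9, 0.8), steps=20):
    ce = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    for tau in thresholds:                       # one refinement stage each
        with torch.no_grad():
            conf, labels = model(x_tgt).softmax(dim=1).max(dim=1)
            keep = conf >= tau                   # confident pseudo-labels only
        for _ in range(steps if keep.any() else 0):
            opt.zero_grad()
            loss = ce(model(x_tgt[keep]), labels[keep])
            loss.backward()
            opt.step()
    return model

# Toy usage: a linear probe stands in for the pretrained network.
refine_pseudo_labels(nn.Linear(32, 5), torch.randn(64, 32))
```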
arXiv Detail & Related papers (2024-02-09T22:15:11Z) - Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
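A minimal sketch of the inter-domain mixup ingredient, assuming labeled source samples are convexly combined with the few labeled target samples; the pairing scheme and Beta parameter are assumptions, not IDMNE's exact recipe:
```python
# Sketch of inter-domain mixup for SSDA (illustrative): convexly combine
# labeled source samples with the few labeled target samples, interpolating
# inputs and one-hot labels alike. Pairing and Beta(0.2, 0.2) are assumptions.
import torch
import torch.nn.functional as F

def inter_domain_mixup(x_src, y_src, x_tgt, y_tgt, num_classes, alpha=0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    n = min(len(x_src), len(x_tgt))              # pair up to the smaller batch
    x_mix = lam * x_src[:n] + (1 - lam) * x_tgt[:n]
    y_mix = (lam * F.one_hot(y_src[:n], num_classes).float()
             + (1 - lam) * F.one_hot(y_tgt[:n], num_classes).float())
    return x_mix, y_mix                          # soft cross-domain labels

x_mix, y_mix = inter_domain_mixup(
    torch.randn(8, 32), torch.randint(0, 5, (8,)),
    torch.randn(4, 32), torch.randint(0, 5, (4,)), num_classes=5)
# Train with soft-label CE: -(y_mix * logits.log_softmax(1)).sum(1).mean()
```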
arXiv Detail & Related papers (2024-01-21T10:20:46Z) - CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation [1.2691047660244335]
Unsupervised Domain Adaptation (UDA) aims to align the labeled source distribution with the unlabeled target distribution to obtain domain-invariant predictive models.
We propose a Contrastive Learning framework for semi-supervised Domain Adaptation (CLDA) that attempts to bridge the intra-domain gap.
CLDA achieves state-of-the-art results on all evaluated datasets.
arXiv Detail & Related papers (2021-06-30T20:23:19Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
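A minimal sketch of one plausible form of such a cross-domain contrastive objective, where same-(pseudo-)class source and target features act as positives; the temperature and positive-set definition are assumptions:
```python
# Sketch of a cross-domain, class-aware contrastive loss (illustrative):
# target features and source features sharing a (pseudo-)label are pulled
# together; other cross-domain pairs act as negatives. Temperature assumed.
import torch
import torch.nn.functional as F

def cross_domain_contrastive(f_src, y_src, f_tgt, y_tgt_pseudo, tau=0.1):
    sim = F.normalize(f_tgt, dim=1) @ F.normalize(f_src, dim=1).t() / tau
    pos = (y_tgt_pseudo[:, None] == y_src[None, :]).float()   # [Nt, Ns]
    log_prob = sim.log_softmax(dim=1)
    has_pos = pos.sum(dim=1) > 0                 # anchors with >=1 positive
    loss = -(pos * log_prob).sum(dim=1)[has_pos] / pos.sum(dim=1)[has_pos]
    return loss.mean()

loss = cross_domain_contrastive(
    torch.randn(16, 64), torch.randint(0, 5, (16,)),
    torch.randn(8, 64), torch.randint(0, 5, (8,)))
```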
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
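A rough sketch of the pair-mining step, assuming cosine-similarity cutoffs stand in for the paper's affinity criterion (the actual ILA-DA criterion differs):
```python
# Sketch of affinity-based pair mining across domains (illustrative):
# cross-domain pairs are marked similar/dissimilar by cosine-similarity
# cutoffs; ILA-DA's actual criterion differs, and the cutoffs are assumed.
import torch
import torch.nn.functional as F

def mine_pairs(f_src, f_tgt, hi=0.8, lo=0.2):
    sim = F.normalize(f_src, dim=1) @ F.normalize(f_tgt, dim=1).t()
    similar = (sim >= hi).nonzero(as_tuple=False)     # candidate positives
    dissimilar = (sim <= lo).nonzero(as_tuple=False)  # candidate negatives
    return similar, dissimilar

pos_pairs, neg_pairs = mine_pairs(torch.randn(16, 64), torch.randn(8, 64))
# The mined pairs would then feed the multi-sample contrastive loss.
```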
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation [78.28390172958643]
We identify two key aspects that can help alleviate multiple domain-shifts in multi-target domain adaptation (MTDA).
We propose Curriculum Graph Co-Teaching (CGCT) that uses a dual classifier head, with one of them being a graph convolutional network (GCN) which aggregates features from similar samples across the domains.
When the domain labels are available, we propose Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts on the easier target domains, followed by the harder ones.
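A minimal sketch of the GCN-head idea, assuming a dense batch-level affinity graph; the graph construction and normalization here are illustrative, not CGCT's exact design:
```python
# Sketch of the GCN classifier-head idea (illustrative): build a dense
# affinity graph over a mixed-domain batch and classify features aggregated
# from similar neighbors. Graph construction details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNHead(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.proj = nn.Linear(feat_dim, feat_dim)
        self.cls = nn.Linear(feat_dim, num_classes)

    def forward(self, feats):
        # Row-softmax the affinities so each sample aggregates a convex
        # combination of its (possibly cross-domain) neighbors.
        f = F.normalize(feats, dim=1)
        adj = (f @ f.t()).softmax(dim=1)
        return self.cls(F.relu(adj @ self.proj(feats)))  # one message pass

logits = GCNHead(feat_dim=64, num_classes=5)(torch.randn(32, 64))
# Trained alongside a plain MLP head, this yields the dual-classifier
# co-teaching setup described above.
```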
arXiv Detail & Related papers (2021-04-01T23:41:41Z) - Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
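One common ingredient for this setting, sketched below, is to down-weight source-only classes using averaged target predictions; this is a generic partial-DA heuristic, not necessarily this paper's mechanism:
```python
# Sketch of class reweighting for partial DA (a generic heuristic, not
# necessarily this paper's mechanism): averaged target predictions estimate
# which source classes the target contains; source-only classes get ~0 weight.
import torch
import torch.nn as nn

@torch.no_grad()
def estimate_class_weights(model, x_tgt):
    w = model(x_tgt).softmax(dim=1).mean(dim=0)  # near zero for absent classes
    return w / w.max()                           # normalize to (0, 1]

model = nn.Linear(32, 10)                        # 10 source classes
w = estimate_class_weights(model, torch.randn(64, 32))
ce = nn.CrossEntropyLoss(weight=w)               # down-weights outlier classes
loss = ce(model(torch.randn(8, 32)), torch.randint(0, 10, (8,)))
```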
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.