Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation
- URL: http://arxiv.org/abs/2003.03787v2
- Date: Tue, 10 Mar 2020 09:15:32 GMT
- Title: Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation
- Authors: Dongliang Chang, Aneeshan Sain, Zhanyu Ma, Yi-Zhe Song, Jun Guo
- Abstract summary: Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
- Score: 65.38975706997088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation aims to leverage labeled data from a source
domain to learn a classifier for an unlabeled target domain. Among its many
variants, open set domain adaptation (OSDA) is perhaps the most challenging, as
it further assumes the presence of unknown classes in the target domain. In
this paper, we study OSDA with a particular focus on enriching its ability to
traverse across larger domain gaps. Firstly, we show that existing
state-of-the-art methods suffer a considerable performance drop in the presence
of larger domain gaps, especially on a new dataset (PACS) that we re-purposed
for OSDA. We then propose a novel framework to specifically address the larger
domain gaps. The key insight lies in how we exploit the mutually beneficial
information between two networks: (a) to separate samples of known and unknown
classes, and (b) to maximize domain confusion between the source and target
domains without the influence of unknown samples. The two objectives mutually
supervise each other and alternate until convergence. Extensive
experiments are conducted on Office-31, Office-Home, and PACS datasets,
demonstrating the superiority of our method over other state-of-the-art
approaches. Code is available at
https://github.com/dongliangchang/Mutual-to-Separate/
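For intuition, here is a minimal PyTorch-style sketch of the alternating scheme described in the abstract, assuming a shared feature extractor, a classifier head with an extra "unknown" logit for objective (a), and a gradient-reversal domain discriminator for objective (b). The module names, the known-ness weighting, and the loss balance are illustrative assumptions, not the paper's exact implementation (see the linked repository for that).

```python
# Minimal sketch (assumptions labeled): a shared feature extractor, a
# (K+1)-way classifier whose extra logit models "unknown" (objective (a)),
# and a domain discriminator trained through a gradient reversal layer
# (objective (b)). The known-ness weighting is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward
    pass, so the feature extractor learns to confuse the discriminator."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

num_known = 10                                   # placeholder class count
feat = nn.Sequential(nn.Linear(2048, 256), nn.ReLU())
clf = nn.Linear(256, num_known + 1)              # known classes + "unknown"
disc = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

def step(x_src, y_src, x_tgt, lambd=1.0):
    f_s, f_t = feat(x_src), feat(x_tgt)
    # (a) separation: supervised on source; the extra logit absorbs unknowns
    loss_cls = F.cross_entropy(clf(f_s), y_src)
    # (a) feeds (b): estimate per-sample "known-ness" of target data
    w_known = 1.0 - F.softmax(clf(f_t), dim=1)[:, -1].detach()
    # (b) domain confusion, down-weighting likely-unknown target samples
    d_s = disc(GradReverse.apply(f_s, lambd))
    d_t = disc(GradReverse.apply(f_t, lambd))
    loss_dom = F.binary_cross_entropy_with_logits(d_s, torch.ones_like(d_s))
    bce_t = F.binary_cross_entropy_with_logits(
        d_t, torch.zeros_like(d_t), reduction="none").squeeze(1)
    loss_dom = loss_dom + (w_known * bce_t).mean()
    return loss_cls + loss_dom                   # alternated over training
```

In the paper's framework the two objectives alternate and supervise each other; in this sketch that coupling is reduced to the detached known-ness weight flowing from (a) into (b).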
Related papers
- Reducing Source-Private Bias in Extreme Universal Domain Adaptation [11.875619863954238]
Universal Domain Adaptation (UniDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We show that state-of-the-art methods struggle when the source domain has significantly more non-overlapping classes than overlapping ones.
We propose using self-supervised learning to preserve the structure of the target data.
arXiv Detail & Related papers (2024-10-15T04:51:37Z)
- Open-Set Domain Adaptation for Semantic Segmentation [6.3951361316638815]
We introduce Open-Set Domain Adaptation for Semantic Segmentation (OSDA-SS) for the first time, where the target domain includes unknown classes.
To address the challenges this poses, we propose Boundary and Unknown Shape-Aware open-set domain adaptation, coined BUS.
Our BUS can accurately discern the boundaries between known and unknown classes in a contrastive manner using a novel dilation-erosion-based contrastive loss (a generic sketch of the dilation-erosion idea follows this entry).
arXiv Detail & Related papers (2024-05-30T09:55:19Z)
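As a rough illustration of the dilation-erosion idea named in the entry above (not BUS's actual loss), the following sketch extracts a boundary band from a binary mask by using max-pooling as morphological dilation; the kernel size and the toy mask are assumptions.

```python
# Illustrative only: generic morphological dilation/erosion on a binary
# mask via max-pooling, and the boundary band between them.
import torch
import torch.nn.functional as F

def dilate(mask, k=3):
    # max-pooling a {0,1} mask acts as binary dilation
    return F.max_pool2d(mask, k, stride=1, padding=k // 2)

def erode(mask, k=3):
    # erosion is dilation of the complement
    return 1.0 - F.max_pool2d(1.0 - mask, k, stride=1, padding=k // 2)

mask = (torch.rand(1, 1, 64, 64) > 0.5).float()  # toy predicted-region mask
boundary = dilate(mask) - erode(mask)            # 1 inside the boundary band
# A contrastive loss could then focus on features inside `boundary`, the
# hard region where known and unknown pixels must be pulled apart.
```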
- Domain Adaptive Few-Shot Open-Set Learning [36.39622440120531]
We propose Domain Adaptive Few-Shot Open Set Recognition (DA-FSOS) and introduce a meta-learning-based architecture named DAFOS-NET.
Our training approach ensures that DAFOS-NET can generalize well to new scenarios in the target domain.
We present three benchmarks for DA-FSOS based on the Office-Home, mini-ImageNet/CUB, and DomainNet datasets.
arXiv Detail & Related papers (2023-09-22T12:04:47Z)
- Make the U in UDA Matter: Invariant Consistency Learning for Unsupervised Domain Adaptation [86.61336696914447]
We propose to make the U in Unsupervised DA matter by giving equal status to the two domains.
We dub our approach "Invariant CONsistency learning" (ICON).
ICON achieves state-of-the-art performance on the classic UDA benchmarks Office-Home and VisDA-2017, and outperforms all conventional methods on the challenging WILDS 2.0 benchmark.
arXiv Detail & Related papers (2023-09-22T09:43:32Z)
- Discover, Hallucinate, and Adapt: Open Compound Domain Adaptation for Semantic Segmentation [91.30558794056056]
Unsupervised domain adaptation (UDA) for semantic segmentation has been attracting attention recently.
We present a novel framework based on three main design principles: discover, hallucinate, and adapt.
We evaluate our solution on the standard GTA-to-C-Driving benchmark and achieve new state-of-the-art results.
arXiv Detail & Related papers (2021-10-08T13:20:09Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets (a generic contrastive-loss sketch follows this entry).
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
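For reference, here is a generic InfoNCE-style contrastive loss of the kind such cross-domain alignment typically builds on; the pairing strategy (e.g., matching source features to pseudo-labeled target features of the same class) and the temperature are illustrative assumptions, not this paper's exact formulation.

```python
# Generic InfoNCE: each anchor must identify its positive among all
# positives in the batch; all other rows act as negatives.
import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.07):
    """anchors[i] and positives[i] form a positive pair, e.g., a source
    feature and a same-class (pseudo-labeled) target feature."""
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    logits = a @ p.t() / temperature            # cosine similarities
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)      # match anchor i to pair i

# usage sketch: loss = info_nce(source_feats, matched_target_feats)
```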
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods (a generic latent-domain clustering sketch follows this entry).
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
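As a hedged sketch of the general latent-domain idea (this paper infers domains inside the network; plain k-means over backbone features is an assumed stand-in, not its architecture):

```python
# Discover latent domains by clustering image features with k-means.
import torch

def kmeans_domains(feats, k=3, iters=20):
    # feats: (N, D) tensor of backbone features
    centers = feats[torch.randperm(feats.size(0))[:k]]
    for _ in range(iters):
        assign = torch.cdist(feats, centers).argmin(dim=1)  # nearest center
        centers = torch.stack([
            feats[assign == j].mean(dim=0) if (assign == j).any()
            else centers[j]                                 # keep empty cluster
            for j in range(k)])
    return assign  # latent-domain label per sample

domains = kmeans_domains(torch.randn(500, 256))  # toy usage
```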
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore domain-wise convolutional channel activation for deep DA networks (a generic channel-attention sketch follows this entry).
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
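In the spirit of the description above, here is a sketch of a squeeze-and-excitation-style channel attention gate conditioned on a domain indicator; the module name, the embedding-based conditioning, and all hyperparameters are assumptions for illustration, not DCAN's actual design.

```python
# Domain-conditioned channel attention: per-channel gates whose
# excitation depends on whether the sample is source or target.
import torch
import torch.nn as nn

class DomainConditionedChannelAttention(nn.Module):
    def __init__(self, channels, num_domains=2, reduction=16):
        super().__init__()
        self.domain_embed = nn.Embedding(num_domains, channels)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x, domain_id):
        # squeeze: global average pool to a per-channel descriptor
        s = x.mean(dim=(2, 3))
        # condition the descriptor on each sample's domain
        s = s + self.domain_embed(domain_id)
        # excite: per-channel gates, broadcast back over H x W
        return x * self.fc(s).unsqueeze(-1).unsqueeze(-1)

attn = DomainConditionedChannelAttention(64)
x = torch.randn(8, 64, 32, 32)
dom = torch.randint(0, 2, (8,))        # 0 = source, 1 = target
y = attn(x, dom)                       # same shape, channels reweighted
```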