Adversarial Network with Multiple Classifiers for Open Set Domain
Adaptation
- URL: http://arxiv.org/abs/2007.00384v3
- Date: Fri, 7 Aug 2020 10:20:22 GMT
- Title: Adversarial Network with Multiple Classifiers for Open Set Domain
Adaptation
- Authors: Tasfia Shermin, Guojun Lu, Shyh Wei Teng, Manzur Murshed, Ferdous
Sohel
- Abstract summary: This paper focuses on the type of open set domain adaptation setting where the target domain has both private ('unknown classes') label space and the shared ('known classes') label space.
Prevalent distribution-matching domain adaptation methods are inadequate in such a setting.
We propose a novel adversarial domain adaptation model with multiple auxiliary classifiers.
- Score: 9.251407403582501
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation aims to transfer knowledge from a domain with adequate
labeled samples to a domain with scarce labeled samples. Prior research has
introduced various open set domain adaptation settings in the literature to
extend the applications of domain adaptation methods in real-world scenarios.
This paper focuses on the type of open set domain adaptation setting where the
target domain has both private ('unknown classes') label space and the shared
('known classes') label space. However, the source domain only has the 'known
classes' label space. Prevalent distribution-matching domain adaptation methods
are inadequate in such a setting, which demands adaptation from a smaller source
domain to a larger and more diverse target domain with more classes. To address
this specific open set domain adaptation setting, prior research introduced a
domain adversarial model that uses a fixed threshold to distinguish known
from unknown target samples and fails to handle negative transfer. We extend
their adversarial model and propose a novel adversarial domain adaptation model
with multiple auxiliary classifiers. The proposed multi-classifier structure
introduces a weighting module that evaluates distinctive domain characteristics
to assign each target sample a weight that better reflects how likely it is to
belong to the known or the unknown classes. These weights encourage positive
transfer during adversarial training while simultaneously reducing the domain
gap between the shared classes of the source and target domains. A
thorough experimental investigation shows that our proposed method outperforms
existing domain adaptation methods on a number of domain adaptation datasets.
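The weighting idea described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the names `AuxClassifier` and `known_weight` are hypothetical, and the specific aggregation (averaging the known-class probability mass across auxiliary heads) is one plausible reading of "weights that reflect whether a target sample belongs to the known classes".

```python
# Hypothetical sketch: score each target sample by the probability mass the
# auxiliary classifiers place on the known classes, averaged across heads.
# Likely-known samples get weights near 1 and drive adversarial alignment;
# likely-unknown samples are down-weighted to limit negative transfer.
import torch
import torch.nn as nn


class AuxClassifier(nn.Module):
    """One auxiliary head over C known classes plus one 'unknown' class."""

    def __init__(self, feat_dim: int, num_known: int):
        super().__init__()
        self.head = nn.Linear(feat_dim, num_known + 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Class-probability distribution per sample, shape (B, C + 1).
        return torch.softmax(self.head(feats), dim=1)


def known_weight(feats: torch.Tensor, heads: list) -> torch.Tensor:
    """Per-sample weight = mean known-class probability mass across heads."""
    probs = torch.stack([h(feats) for h in heads])  # (H, B, C + 1)
    known_mass = probs[..., :-1].sum(dim=-1)        # drop the 'unknown' column
    return known_mass.mean(dim=0)                   # (B,) weights in [0, 1]


if __name__ == "__main__":
    torch.manual_seed(0)
    heads = [AuxClassifier(feat_dim=16, num_known=5) for _ in range(3)]
    target_feats = torch.randn(8, 16)
    w = known_weight(target_feats, heads)
    print(w.shape)  # weights for 8 target samples
```

In a full adversarial pipeline these weights would multiply the per-sample domain-discriminator loss for target samples, so alignment pressure concentrates on samples the ensemble agrees belong to the shared label space.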
Related papers
- Self-Paced Learning for Open-Set Domain Adaptation [50.620824701934]
Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
arXiv Detail & Related papers (2023-03-10T14:11:09Z)
- Discovering Domain Disentanglement for Generalized Multi-source Domain Adaptation [48.02978226737235]
A typical multi-source domain adaptation (MSDA) approach aims to transfer knowledge learned from a set of labeled source domains, to an unlabeled target domain.
We propose a variational domain disentanglement (VDD) framework, which decomposes the domain representations and semantic features for each instance by encouraging dimension-wise independence.
arXiv Detail & Related papers (2022-07-11T04:33:08Z)
- From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation targets at knowledge acquisition and dissemination from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that deep pre-trained models of large scale endow rich knowledge to tackle diverse downstream tasks of small scale.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical class space assumption, assuming instead that the source class space subsumes the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Open Set Domain Adaptation by Extreme Value Theory [22.826118321715455]
We tackle the open set domain adaptation problem under the assumption that the source and the target label spaces only partially overlap.
We propose an instance-level reweighting strategy for domain adaptation where the weights indicate the likelihood of a sample belonging to known classes.
Experiments on conventional domain adaptation datasets show that the proposed method outperforms the state-of-the-art models.
arXiv Detail & Related papers (2020-12-22T19:31:32Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts performance of target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.