Dual Adversarial Domain Adaptation
- URL: http://arxiv.org/abs/2001.00153v1
- Date: Wed, 1 Jan 2020 07:10:09 GMT
- Title: Dual Adversarial Domain Adaptation
- Authors: Yuntao Du, Zhiwen Tan, Qian Chen, Xiaowen Zhang, Yirong Yao, Chongjun
Wang
- Abstract summary: Unsupervised domain adaptation aims at transferring knowledge from the labeled source domain to the unlabeled target domain.
Recent experiments have shown that when the discriminator is provided with domain information in both domains, it is able to preserve the complex multimodal information.
We adopt a discriminator with $2K$-dimensional output to perform both domain-level and class-level alignments simultaneously in a single discriminator.
- Score: 6.69797982848003
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Unsupervised domain adaptation aims at transferring knowledge from the
labeled source domain to the unlabeled target domain. Previous adversarial
domain adaptation methods mostly adopt a discriminator with binary or
$K$-dimensional output to perform marginal or conditional alignment
independently. Recent experiments have shown that when the discriminator is
provided with domain information in both domains and label information in the
source domain, it is able to preserve the complex multimodal information and
high semantic information in both domains. Following this idea, we adopt a
discriminator with $2K$-dimensional output to perform both domain-level and
class-level alignments simultaneously in a single discriminator. However, a
single discriminator cannot capture all the useful information across domains,
and the relationships between the examples and the decision boundary have
rarely been explored. Inspired by multi-view learning and recent advances in
domain adaptation, in addition to the adversarial process between the
discriminator and the feature extractor, we design a novel mechanism that pits
two discriminators against each other, so that they provide diverse information
for each other and avoid generating target features outside the support of the
source domain. To the best of our knowledge, this is the first work to explore
a dual adversarial strategy in domain adaptation. Moreover, we apply
semi-supervised learning regularization to make the representations more
discriminative. Comprehensive experiments on two real-world datasets verify
that our method outperforms several state-of-the-art domain adaptation methods.
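For concreteness, below is a minimal PyTorch-style sketch of the joint alignment idea described in the abstract: a single discriminator with $2K$ outputs assigns source samples of class $k$ to slot $k$ and target samples (pseudo-labeled by the task classifier) to slot $K+k$, and the feature extractor is trained to confuse the domain half while preserving the class slot. The layer sizes, the pseudo-labeling step, and the class-preserving confusion loss are illustrative assumptions rather than the authors' released implementation; the dual-discriminator term and the semi-supervised regularizer are only indicated in comments.

```python
# Minimal sketch of a 2K-way discriminator for joint domain-level and
# class-level alignment (illustrative; not the authors' reference code).
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 31                                               # number of classes (assumed)
G = nn.Sequential(nn.Linear(2048, 256), nn.ReLU())   # feature extractor (backbone omitted)
C = nn.Linear(256, K)                                # task classifier
D1 = nn.Linear(256, 2 * K)                           # first 2K-way discriminator
D2 = nn.Linear(256, 2 * K)                           # second 2K-way discriminator

def disc_targets(labels, is_source):
    # Slot y for a source sample of class y, slot K + y for a target sample,
    # so a single softmax jointly encodes domain membership and class.
    return labels if is_source else labels + K

def disc_loss(D, f_s, y_s, f_t, y_t_pseudo):
    # The discriminator learns to place each feature in its (domain, class) slot.
    return (F.cross_entropy(D(f_s), disc_targets(y_s, True)) +
            F.cross_entropy(D(f_t), disc_targets(y_t_pseudo, False)))

def confusion_loss(D, f_s, y_s, f_t, y_t_pseudo):
    # The feature extractor is trained to swap the domain half while keeping
    # the class slot, i.e. class-preserving domain confusion (an assumption).
    return (F.cross_entropy(D(f_s), disc_targets(y_s, False)) +
            F.cross_entropy(D(f_t), disc_targets(y_t_pseudo, True)))

# One illustrative step with random stand-in batches.
x_s, y_s = torch.randn(8, 2048), torch.randint(0, K, (8,))
x_t = torch.randn(8, 2048)
f_s, f_t = G(x_s), G(x_t)
y_t_pseudo = C(f_t).argmax(dim=1)                    # pseudo labels for target samples

cls_loss = F.cross_entropy(C(f_s), y_s)              # supervised source loss
d_loss = disc_loss(D1, f_s.detach(), y_s, f_t.detach(), y_t_pseudo)   # update D1 (likewise D2)
g_loss = cls_loss + confusion_loss(D1, f_s, y_s, f_t, y_t_pseudo)     # update G and C

# The dual-adversarial term that pits D1 against D2 (e.g. a discrepancy on
# target features) and the semi-supervised regularizer are extra losses in
# the paper and are omitted from this sketch.
```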
Related papers
- Context-aware Domain Adaptation for Time Series Anomaly Detection [69.3488037353497]
Time series anomaly detection is a challenging task with a wide range of real-world applications.
Recent efforts have been devoted to time series domain adaptation to leverage knowledge from similar domains.
We propose a framework that combines context sampling and anomaly detection into a joint learning procedure.
arXiv Detail & Related papers (2023-04-15T02:28:58Z)
- Making the Best of Both Worlds: A Domain-Oriented Transformer for Unsupervised Domain Adaptation [31.150256154504696]
Unsupervised Domain Adaptation (UDA) has propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains.
Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction.
We propose to simultaneously conduct feature alignment in two individual spaces focusing on different domains, and create for each space a domain-oriented classifier.
arXiv Detail & Related papers (2022-08-02T01:38:37Z)
- Label Distribution Learning for Generalizable Multi-source Person Re-identification [48.77206888171507]
Person re-identification (Re-ID) is a critical technique in the video surveillance system.
It is difficult to directly apply the supervised model to arbitrary unseen domains.
We propose a novel label distribution learning (LDL) method to address the generalizable multi-source person Re-ID task.
arXiv Detail & Related papers (2022-04-12T15:59:10Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
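As a rough illustration of this idea, a class-aware cross-domain contrastive term can pull target features toward source features that share their (pseudo) class in an InfoNCE-like form. The positive construction, temperature, and pseudo-labeling below are assumptions for illustration and may differ from the paper's actual loss.

```python
# Hedged sketch of a class-aware cross-domain contrastive loss (illustrative).
import torch
import torch.nn.functional as F

def cross_domain_contrastive(f_s, y_s, f_t, y_t_pseudo, tau=0.1):
    f_s = F.normalize(f_s, dim=1)
    f_t = F.normalize(f_t, dim=1)
    logits = f_t @ f_s.t() / tau                     # target-vs-source similarities
    # A source feature counts as a positive if it shares the target's pseudo class.
    pos = (y_t_pseudo.unsqueeze(1) == y_s.unsqueeze(0)).float()
    log_prob = F.log_softmax(logits, dim=1)
    # Average log-probability over positives per target anchor (anchors with no
    # positives in the batch contribute zero).
    loss = -(pos * log_prob).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return loss.mean()

# Toy usage with random 128-d features and 10 classes.
f_s, y_s = torch.randn(32, 128), torch.randint(0, 10, (32,))
f_t, y_t_pseudo = torch.randn(32, 128), torch.randint(0, 10, (32,))
loss = cross_domain_contrastive(f_s, y_s, f_t, y_t_pseudo)
```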
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Select, Label, and Mix: Learning Discriminative Invariant Feature Representations for Partial Domain Adaptation [55.73722120043086]
We develop a "Select, Label, and Mix" (SLM) framework to learn discriminative invariant feature representations for partial domain adaptation.
First, we present a simple yet efficient "select" module that automatically filters out outlier source samples to avoid negative transfer.
Second, the "label" module iteratively trains the classifier using both the labeled source domain data and the generated pseudo-labels for the target domain to enhance the discriminability of the latent space.
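A hedged sketch of what the "select" and "label" steps could look like in code follows: source samples are filtered by an illustrative per-class score, and confident target predictions become pseudo labels for the next round. The scoring rule, keep ratio, and confidence threshold are assumptions rather than the criteria used in the SLM paper, and the "mix" step is omitted.

```python
# Illustrative "select" and "label" steps (not the SLM paper's exact criteria).
import torch
import torch.nn.functional as F

def select_source(x_s, y_s, target_class_weights, keep_ratio=0.8):
    # Keep source samples whose classes appear to be shared with the target
    # label space, scored here by averaged target prediction mass per class.
    scores = target_class_weights[y_s]               # higher = class likely shared
    keep = scores.topk(int(keep_ratio * len(x_s))).indices
    return x_s[keep], y_s[keep]

def pseudo_label_target(classifier, x_t, threshold=0.9):
    # Keep only confident target predictions as pseudo labels.
    probs = F.softmax(classifier(x_t), dim=1)
    conf, y_hat = probs.max(dim=1)
    mask = conf > threshold
    return x_t[mask], y_hat[mask]

# Toy usage over 256-d features and 10 classes.
classifier = torch.nn.Linear(256, 10)
x_s, y_s = torch.randn(100, 256), torch.randint(0, 10, (100,))
x_t = torch.randn(100, 256)
target_class_weights = F.softmax(classifier(x_t), dim=1).mean(dim=0)
x_s_kept, y_s_kept = select_source(x_s, y_s, target_class_weights)
x_t_pl, y_t_pl = pseudo_label_target(classifier, x_t)
```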
arXiv Detail & Related papers (2020-12-06T19:29:32Z)
- Interventional Domain Adaptation [81.0692660794765]
Domain adaptation (DA) aims to transfer discriminative features learned from source domain to target domain.
Standard domain-invariance learning suffers from spurious correlations and incorrectly transfers the source-specifics.
We create counterfactual features that distinguish the domain-specifics from the domain-sharable part.
arXiv Detail & Related papers (2020-11-07T09:53:13Z)
- Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation [67.83872616307008]
Unsupervised Domain Adaptation (UDA) attempts to recognize unlabeled target samples by building a learning model from a differently-distributed labeled source domain.
In this paper, we propose a novel Adversarial Dual Distinct Classifiers Network (AD$^2$CN) to align the source and target data distributions simultaneously while matching task-specific category boundaries.
To be specific, a domain-invariant feature generator is exploited to embed the source and target data into a latent common space with the guidance of discriminative cross-domain alignment.
arXiv Detail & Related papers (2020-08-27T01:29:10Z)
- Unsupervised Domain Adaptation via Discriminative Manifold Propagation [26.23123292060868]
Unsupervised domain adaptation is effective in leveraging rich information from a labeled source domain to an unlabeled target domain.
The proposed method can be used to tackle a series of variants of domain adaptation problems, including both vanilla and partial settings.
arXiv Detail & Related papers (2020-08-23T12:31:37Z)
- Adversarial Training Based Multi-Source Unsupervised Domain Adaptation for Sentiment Analysis [19.05317868659781]
We propose two transfer learning frameworks based on the multi-source domain adaptation methodology for sentiment analysis.
The first framework is a novel Weighting Scheme based Unsupervised Domain Adaptation framework (WS-UDA), which combines the source classifiers to acquire pseudo labels for target instances.
The second framework is a Two-Stage Training based Unsupervised Domain Adaptation framework (2ST-UDA), which further exploits these pseudo labels to train a target private extractor.
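The weighted pseudo-labeling step can be pictured roughly as below: predictions from several source-domain classifiers are combined with per-source weights to label target instances, and the confident labels would then supervise a target private extractor in the second stage. The uniform weights and the confidence filter are illustrative assumptions, not the exact WS-UDA weighting scheme.

```python
# Illustrative weighted combination of source classifiers for target pseudo labels.
import torch
import torch.nn.functional as F

def combine_source_classifiers(classifiers, weights, x_t, threshold=0.8):
    # Weighted average of softmax outputs from each source-domain classifier.
    probs = sum(w * F.softmax(clf(x_t), dim=1) for clf, w in zip(classifiers, weights))
    conf, pseudo = probs.max(dim=1)
    mask = conf > threshold                          # keep only confident pseudo labels
    return x_t[mask], pseudo[mask]

# Toy usage: three source classifiers over 256-d features, equal weights.
classifiers = [torch.nn.Linear(256, 5) for _ in range(3)]
weights = [1 / 3, 1 / 3, 1 / 3]
x_t = torch.randn(64, 256)
x_t_conf, y_t_pseudo = combine_source_classifiers(classifiers, weights, x_t)
# In 2ST-UDA these pseudo labels would then train a target private extractor.
```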
arXiv Detail & Related papers (2020-06-10T01:41:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.