Re-energizing Domain Discriminator with Sample Relabeling for
Adversarial Domain Adaptation
- URL: http://arxiv.org/abs/2103.11661v1
- Date: Mon, 22 Mar 2021 08:32:55 GMT
- Title: Re-energizing Domain Discriminator with Sample Relabeling for
Adversarial Domain Adaptation
- Authors: Xin Jin, Cuiling Lan, Wenjun Zeng, Zhibo Chen
- Abstract summary: Unsupervised domain adaptation (UDA) methods exploit domain adversarial training to align the features to reduce domain gap.
In this work, we propose an efficient optimization strategy named Re-enforceable Adversarial Domain Adaptation (RADA).
RADA aims to re-energize the domain discriminator during training by using dynamic domain labels.
- Score: 88.86865069583149
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Many unsupervised domain adaptation (UDA) methods exploit domain adversarial
training to align the features to reduce domain gap, where a feature extractor
is trained to fool a domain discriminator in order to have aligned feature
distributions. The discrimination capability of the domain classifier with
respect to the increasingly aligned feature distributions deteriorates as
training goes on, and thus can no longer effectively drive the training of the
feature extractor. In
this work, we propose an efficient optimization strategy named Re-enforceable
Adversarial Domain Adaptation (RADA) which aims to re-energize the domain
discriminator during the training by using dynamic domain labels. Particularly,
we relabel the well aligned target domain samples as source domain samples on
the fly. Such relabeling makes the less separable distributions more separable,
and thus leads to a more powerful domain classifier w.r.t. the new data
distributions, which in turn further drives feature alignment. Extensive
experiments on multiple UDA benchmarks demonstrate the effectiveness and
superiority of our RADA.
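The relabeling idea can be illustrated as a small change to the standard domain-discriminator loss. The snippet below is a minimal, hypothetical PyTorch sketch (not the authors' code): the function name, the sigmoid-probability criterion for "well aligned", and the 0.5 threshold are assumptions, since the abstract only states that well-aligned target samples are relabeled as source on the fly.

```python
import torch
import torch.nn.functional as F

def discriminator_loss_with_relabeling(d_out_src, d_out_tgt, align_threshold=0.5):
    """Hypothetical sketch of dynamic domain labels in the RADA spirit.

    d_out_src, d_out_tgt: discriminator logits for source/target features,
                          shape [N, 1]; source is labeled 0, target 1.
    align_threshold:      assumed cutoff below which a target sample counts
                          as "well aligned" (criterion not given in the abstract).
    """
    src_labels = torch.zeros_like(d_out_src)        # source samples keep label 0
    tgt_prob = torch.sigmoid(d_out_tgt)             # P(domain = target)

    # Relabel well-aligned target samples as source on the fly: if the
    # discriminator already sees them as source-like (low target probability),
    # assign them the source label 0; otherwise keep the target label 1.
    tgt_labels = (tgt_prob >= align_threshold).float()

    logits = torch.cat([d_out_src, d_out_tgt], dim=0)
    labels = torch.cat([src_labels, tgt_labels], dim=0)

    # The relabeled distributions are more separable, so the discriminator
    # stays informative and keeps driving feature alignment.
    return F.binary_cross_entropy_with_logits(logits, labels)
```

In an adversarial setup, this loss would stand in for the usual fixed-label discriminator objective, while the feature extractor is still trained to fool the (re-energized) discriminator.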
Related papers
- Contrastive Adversarial Training for Unsupervised Domain Adaptation [2.432037584128226]
Domain adversarial training has been successfully adopted for various domain adaptation tasks.
Large models make adversarial training easily biased towards the source domain and hardly adapted to the target domain.
We propose contrastive adversarial training (CAT) approach that leverages the labeled source domain samples to reinforce and regulate the feature generation for target domain.
arXiv Detail & Related papers (2024-07-17T17:59:21Z)
- Deep Feature Registration for Unsupervised Domain Adaptation [15.246480756974963]
We propose a deep feature registration (DFR) model to generate registered features that maintain domain invariant features.
We also employ a pseudo label refinement process to improve the quality of pseudo labels in the target domain.
arXiv Detail & Related papers (2023-10-24T18:04:53Z)
- AVATAR: Adversarial self-superVised domain Adaptation network for TARget domain [11.764601181046496]
This paper presents an unsupervised domain adaptation (UDA) method for predicting unlabeled target domain data.
We propose the Adversarial self-superVised domain Adaptation network for the TARget domain (AVATAR) algorithm.
Our proposed model significantly outperforms state-of-the-art methods on three UDA benchmarks.
arXiv Detail & Related papers (2023-04-28T20:31:56Z)
- AIR-DA: Adversarial Image Reconstruction for Unsupervised Domain Adaptive Object Detection [28.22783703278792]
Adversarial Image Reconstruction (AIR) is proposed as a regularizer to facilitate the adversarial training of the feature extractor.
Our evaluations across several datasets of challenging domain shifts demonstrate that the proposed method outperforms all previous methods.
arXiv Detail & Related papers (2023-03-27T16:51:51Z)
- CA-UDA: Class-Aware Unsupervised Domain Adaptation with Optimal Assignment and Pseudo-Label Refinement [84.10513481953583]
Unsupervised domain adaptation (UDA) focuses on the selection of good pseudo-labels as surrogates for the missing labels in the target data.
Source domain bias that deteriorates the pseudo-labels can still exist, since the network shared by the source and target domains is typically used for pseudo-label selection.
We propose CA-UDA to improve the quality of the pseudo-labels and UDA results with optimal assignment, a pseudo-label refinement strategy and class-aware domain alignment.
arXiv Detail & Related papers (2022-05-26T18:45:04Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Disentanglement-based Cross-Domain Feature Augmentation for Effective Unsupervised Domain Adaptive Person Re-identification [87.72851934197936]
Unsupervised domain adaptive (UDA) person re-identification (ReID) aims to transfer the knowledge from the labeled source domain to the unlabeled target domain for person matching.
One challenge is how to generate target domain samples with reliable labels for training.
We propose a Disentanglement-based Cross-Domain Feature Augmentation strategy.
arXiv Detail & Related papers (2021-03-25T15:28:41Z)
- Effective Label Propagation for Discriminative Semi-Supervised Domain Adaptation [76.41664929948607]
Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method to tackle this problem by using effective inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
arXiv Detail & Related papers (2020-12-04T14:28:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.