AVATAR: Adversarial self-superVised domain Adaptation network for TARget domain
- URL: http://arxiv.org/abs/2305.00082v2
- Date: Mon, 8 May 2023 03:35:14 GMT
- Title: AVATAR: Adversarial self-superVised domain Adaptation network for TARget domain
- Authors: Jun Kataoka and Hyunsoo Yoon
- Abstract summary: This paper presents an unsupervised domain adaptation (UDA) method for predicting unlabeled target domain data.
We propose the Adversarial self-superVised domain Adaptation network for the TARget domain (AVATAR) algorithm.
Our proposed model significantly outperforms state-of-the-art methods on three UDA benchmarks.
- Score: 11.764601181046496
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents an unsupervised domain adaptation (UDA) method for
predicting unlabeled target domain data, specific to complex UDA tasks where
the domain gap is significant. Mainstream UDA models aim to learn from both
domains and improve target discrimination by utilizing labeled source domain
data. However, the performance boost may be limited when the discrepancy
between the source and target domains is large or the target domain contains
outliers. To explicitly address this issue, we propose the Adversarial
self-superVised domain Adaptation network for the TARget domain (AVATAR)
algorithm. It outperforms state-of-the-art UDA models by concurrently reducing
domain discrepancy while enhancing discrimination through domain adversarial
learning, self-supervised learning, and a sample selection strategy for the
target domain, all guided by deep clustering. Our proposed model significantly
outperforms state-of-the-art methods on three UDA benchmarks, and extensive
ablation studies and experiments demonstrate the effectiveness of our approach
for addressing complex UDA tasks.
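The abstract names domain adversarial learning as one of AVATAR's three ingredients but gives no implementation details. The standard building block for this technique is a gradient reversal layer, sketched below as a minimal, framework-free illustration; the class name and `lam` parameter are illustrative, not taken from the paper:

```python
# Minimal sketch of a gradient reversal layer (GRL), the standard trick
# behind domain adversarial learning. Hypothetical, framework-free example.
class GradientReversal:
    """Identity in the forward pass; flips and scales gradients in backward."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between task loss and domain confusion

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad):
        # Gradients from the domain discriminator are reversed before they
        # reach the feature extractor, pushing features toward domain
        # invariance while the discriminator still learns to separate domains.
        return [-self.lam * g for g in grad]


grl = GradientReversal(lam=0.5)
feats = [0.2, -1.3, 0.7]
assert grl.forward(feats) == feats                # identity forward
assert grl.backward([1.0, -2.0]) == [-0.5, 1.0]   # reversed, scaled gradient
```

In an autodiff framework the same effect is achieved with a custom backward pass; the toy class above only makes the sign flip explicit.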
Related papers
- Overcoming Negative Transfer by Online Selection: Distant Domain Adaptation for Fault Diagnosis [42.85741244467877]
The term 'distant domain adaptation' describes the challenge of adapting from a labeled source domain to a significantly disparate unlabeled target domain.
This problem exhibits the risk of negative transfer, where extraneous knowledge from the source domain adversely affects the target domain performance.
In response to this challenge, we propose a novel Online Selective Adversarial Alignment (OSAA) approach.
arXiv Detail & Related papers (2024-05-25T07:17:47Z)
- Style Adaptation for Domain-adaptive Semantic Segmentation [2.1365683052370046]
Domain discrepancy leads to a significant decrease in the performance of general network models trained on the source domain data when applied to the target domain.
We introduce a straightforward approach to mitigate the domain discrepancy, which necessitates no additional parameter calculations and seamlessly integrates with self-training-based UDA methods.
Our proposed method attains a noteworthy UDA performance of 76.93 mIoU on the GTA->Cityscapes dataset, an improvement of +1.03 percentage points over the previous state-of-the-art result.
arXiv Detail & Related papers (2024-04-25T02:51:55Z) - Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution for identifying classes that are absent from the source domain during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z) - Dynamic Domain Discrepancy Adjustment for Active Multi-Domain Adaptation [3.367755441623275]
Multi-source unsupervised domain adaptation (MUDA) aims to transfer knowledge from related source domains to an unlabeled target domain.
We propose a novel approach called Dynamic Domain Discrepancy Adjustment for Active Multi-Domain Adaptation (D3AAMDA).
This mechanism controls the alignment level of features between each source domain and the target domain, effectively leveraging the local advantageous feature information within the source domains.
arXiv Detail & Related papers (2023-07-26T09:40:19Z) - Domain-Agnostic Prior for Transfer Semantic Segmentation [197.9378107222422]
Unsupervised domain adaptation (UDA) is an important topic in the computer vision community.
We present a mechanism that regularizes cross-domain representation learning with a domain-agnostic prior (DAP).
Our research reveals that UDA benefits much from better proxies, possibly from other data modalities.
arXiv Detail & Related papers (2022-04-06T09:13:25Z) - Decompose to Adapt: Cross-domain Object Detection via Feature
Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
By outperforming state-of-the-art methods on four benchmark UDA object detection tasks, our DDF method demonstrates its effectiveness and wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z) - Unsupervised Domain Expansion for Visual Categorization [12.427064803221729]
Unsupervised domain expansion (UDE) aims to adapt a deep model for the target domain with its unlabeled data, while maintaining the model's performance on the source domain.
We develop a knowledge distillation based learning mechanism (KDDE), enabling it to optimize a single objective wherein the source and target domains are equally treated.
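The idea of one distilled objective that weights both domains equally can be sketched with a generic temperature-softened knowledge-distillation loss. This is a standard formulation under assumed temperature scaling, not the paper's actual KDDE loss; `softmax`, `kd_loss`, and `ude_objective` are illustrative names:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-softened softmax over a list of logits.
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the teacher's softened distribution (target)
    # and the student's, scaled by T^2 as is customary in distillation.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q)) * T * T

def ude_objective(src_logits, tgt_logits, teacher_src, teacher_tgt, T=2.0):
    # Source and target batches contribute with equal weight (0.5 each)
    # to a single objective, mirroring the "equally treated" description.
    l_src = sum(kd_loss(s, t, T) for s, t in zip(src_logits, teacher_src)) / len(src_logits)
    l_tgt = sum(kd_loss(s, t, T) for s, t in zip(tgt_logits, teacher_tgt)) / len(tgt_logits)
    return 0.5 * (l_src + l_tgt)
```

As expected for a cross-entropy objective, the loss is smallest when the student's softened distribution matches the teacher's.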
arXiv Detail & Related papers (2021-04-01T03:27:35Z) - Re-energizing Domain Discriminator with Sample Relabeling for
Adversarial Domain Adaptation [88.86865069583149]
Unsupervised domain adaptation (UDA) methods exploit domain adversarial training to align the features to reduce domain gap.
In this work, we propose an efficient optimization strategy named Re-enforceable Adversarial Domain Adaptation (RADA).
RADA aims to re-energize the domain discriminator during the training by using dynamic domain labels.
arXiv Detail & Related papers (2021-03-22T08:32:55Z) - Effective Label Propagation for Discriminative Semi-Supervised Domain
Adaptation [76.41664929948607]
Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method to tackle this problem by using effective inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
arXiv Detail & Related papers (2020-12-04T14:28:19Z) - Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
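DCAN's central idea, exciting different convolutional channels depending on the domain, can be illustrated with a toy gating function. The per-domain weights below are made-up placeholders (in DCAN they are learned), and the function names are illustrative only:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def domain_conditioned_attention(channel_means, domain):
    # Hypothetical per-domain gating weights over three channels; in a real
    # network these would be learned parameters of an attention branch.
    weights = {"source": [1.2, 0.4, 0.9],
               "target": [0.3, 1.1, 0.8]}[domain]
    # Each channel response is gated by a sigmoid of its domain-specific
    # weighted activation, so different domains excite different channels.
    gates = [sigmoid(w * m) for w, m in zip(weights, channel_means)]
    return [g * m for g, m in zip(gates, channel_means)]
```

Feeding the same channel statistics through both branches yields different re-weightings, which is the domain-wise channel activation the summary describes.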
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.