Crucial Semantic Classifier-based Adversarial Learning for Unsupervised
Domain Adaptation
- URL: http://arxiv.org/abs/2302.01708v1
- Date: Fri, 3 Feb 2023 13:06:14 GMT
- Title: Crucial Semantic Classifier-based Adversarial Learning for Unsupervised
Domain Adaptation
- Authors: Yumin Zhang, Yajun Gao, Hongliu Li, Ating Yin, Duzhen Zhang, Xiuyi
Chen
- Abstract summary: Unsupervised Domain Adaptation (UDA) aims to explore the transferable features from a well-labeled source domain to a related unlabeled target domain.
We propose Crucial Semantic Classifier-based Adversarial Learning (CSCAL) to pay more attention to crucial semantic knowledge transfer.
CSCAL can be effortlessly merged into different UDA methods as a regularizer and markedly improve their performance.
- Score: 4.6899218408452885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation (UDA), which aims to explore the
transferable features from a well-labeled source domain to a related unlabeled
target domain, has seen wide progress. Nevertheless, existing
adversarial-based methods, one of the mainstream approaches, neglect to filter
out irrelevant semantic knowledge, hindering further improvement of adaptation
performance. Besides, they require an additional domain discriminator that
drives the feature extractor to generate domain-confused representations, and
this separate design may cause model collapse. To tackle the above issues, we
propose Crucial Semantic Classifier-based Adversarial Learning (CSCAL), which
pays more attention to transferring crucial semantic knowledge and leverages
the classifier to implicitly play the role of the domain discriminator without
extra network design. Specifically, for intra-class-wise alignment, a
Paired-Level Discrepancy (PLD) is designed to transfer crucial semantic
knowledge. Additionally, based on classifier predictions, a Nuclear Norm-based
Discrepancy (NND) is formed that considers inter-class-wise information and
improves adaptation performance. Moreover, CSCAL can be effortlessly merged
into different UDA methods as a regularizer and markedly improve their
performance.
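The abstract does not give NND's exact formulation, but the general idea of a nuclear-norm-based term over a batch of classifier predictions can be sketched as follows. This is a minimal illustration of the common batch nuclear-norm approach, not the authors' exact loss; the function name and sign convention are assumptions:

```python
import numpy as np

def nuclear_norm_discrepancy(probs: np.ndarray) -> float:
    """Illustrative sketch (assumed formulation, not the paper's exact loss).

    probs: (batch_size, num_classes) softmax outputs, each row summing to 1.
    The nuclear norm (sum of singular values) of the prediction matrix is
    large when predictions are both confident (near one-hot) and diverse
    across classes; returning its negation gives a quantity to minimize.
    """
    singular_values = np.linalg.svd(probs, compute_uv=False)
    return -float(singular_values.sum())
```

For intuition: a batch of confident, class-diverse predictions (e.g. the identity matrix) yields a lower (better) value than a batch of uniform, uncertain predictions over the same classes.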
Related papers
- Disentangling Masked Autoencoders for Unsupervised Domain Generalization [57.56744870106124]
Unsupervised domain generalization is fast gaining attention but is still far from well-studied.
Disentangled Masked Autoencoder (DisMAE) aims to discover the disentangled representations that faithfully reveal intrinsic features.
DisMAE co-trains the asymmetric dual-branch architecture with semantic and lightweight variation encoders.
arXiv Detail & Related papers (2024-07-10T11:11:36Z)
- Towards Source-free Domain Adaptive Semantic Segmentation via Importance-aware and Prototype-contrast Learning [26.544837987747766]
We propose an end-to-end source-free domain adaptation semantic segmentation method via Importance-Aware and Prototype-Contrast learning.
The proposed IAPC framework effectively extracts domain-invariant knowledge from the well-trained source model and learns domain-specific knowledge from the unlabeled target domain.
arXiv Detail & Related papers (2023-06-02T15:09:19Z)
- Unsupervised Domain Adaptation via Style-Aware Self-intermediate Domain [52.783709712318405]
Unsupervised domain adaptation (UDA) has attracted considerable attention, which transfers knowledge from a label-rich source domain to a related but unlabeled target domain.
We propose a novel style-aware feature fusion method (SAFF) to bridge the large domain gap and transfer knowledge while alleviating the loss of class-discriminative information.
arXiv Detail & Related papers (2022-09-05T10:06:03Z)
- Shuffle Augmentation of Features from Unlabeled Data for Unsupervised Domain Adaptation [21.497019000131917]
Unsupervised Domain Adaptation (UDA) is a branch of transfer learning where labels for target samples are unavailable.
In this paper, we propose Shuffle Augmentation of Features (SAF) as a novel UDA framework.
SAF learns from the target samples, adaptively distills class-aware target features, and implicitly guides the classifier to find comprehensive class borders.
arXiv Detail & Related papers (2022-01-28T07:11:05Z)
- Joint Distribution Alignment via Adversarial Learning for Domain Adaptive Object Detection [11.262560426527818]
Unsupervised domain adaptive object detection aims to adapt a well-trained detector from its original source domain with rich labeled data to a new target domain with unlabeled data.
Recently, mainstream approaches perform this task through adversarial learning, yet still suffer from two limitations.
We propose a joint adaptive detection framework (JADF) to address the above challenges.
arXiv Detail & Related papers (2021-09-19T00:27:08Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Re-energizing Domain Discriminator with Sample Relabeling for Adversarial Domain Adaptation [88.86865069583149]
Unsupervised domain adaptation (UDA) methods exploit domain adversarial training to align the features to reduce domain gap.
In this work, we propose an efficient optimization strategy named Re-enforceable Adversarial Domain Adaptation (RADA)
RADA aims to re-energize the domain discriminator during the training by using dynamic domain labels.
arXiv Detail & Related papers (2021-03-22T08:32:55Z)
- Interventional Domain Adaptation [81.0692660794765]
Domain adaptation (DA) aims to transfer discriminative features learned from source domain to target domain.
Standard domain-invariance learning suffers from spurious correlations and incorrectly transfers the source-specifics.
We create counterfactual features that distinguish the domain-specifics from domain-sharable part.
arXiv Detail & Related papers (2020-11-07T09:53:13Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z) - CSCL: Critical Semantic-Consistent Learning for Unsupervised Domain
Adaptation [42.226842513334184]
We develop a new Critical Semantic-Consistent Learning model, which mitigates the discrepancy of both domain-wise and category-wise distributions.
Specifically, a critical transfer based adversarial framework is designed to highlight transferable domain-wise knowledge while neglecting untransferable knowledge.
arXiv Detail & Related papers (2020-08-24T14:12:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.