CSCL: Critical Semantic-Consistent Learning for Unsupervised Domain
Adaptation
- URL: http://arxiv.org/abs/2008.10464v1
- Date: Mon, 24 Aug 2020 14:12:04 GMT
- Title: CSCL: Critical Semantic-Consistent Learning for Unsupervised Domain
Adaptation
- Authors: Jiahua Dong, Yang Cong, Gan Sun, Yuyang Liu, Xiaowei Xu
- Abstract summary: We develop a new Critical Semantic-Consistent Learning model, which mitigates the discrepancy of both domain-wise and category-wise distributions.
Specifically, a critical transfer based adversarial framework is designed to highlight transferable domain-wise knowledge while neglecting untransferable knowledge.
- Score: 42.226842513334184
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation, which avoids the costly annotation
process for unlabeled target data, has attracted considerable interest in semantic segmentation.
However, 1) existing methods neglect that not all semantic representations
across domains are transferable, which cripples domain-wise transfer with
untransferable knowledge; 2) they fail to narrow category-wise distribution
shift due to category-agnostic feature alignment. To address the above challenges,
we develop a new Critical Semantic-Consistent Learning (CSCL) model, which
mitigates the discrepancy of both domain-wise and category-wise distributions.
Specifically, a critical transfer based adversarial framework is designed to
highlight transferable domain-wise knowledge while neglecting untransferable
knowledge. A transferability-critic guides a transferability-quantizer to maximize
the positive transfer gain in a reinforcement learning manner, even when negative
transfer of untransferable knowledge occurs. Meanwhile, with the help of a
confidence-guided pseudo-label generator for target samples, a symmetric soft
divergence loss is presented to explore inter-class relationships and
facilitate category-wise distribution alignment. Experiments on several
datasets demonstrate the superiority of our model.
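The abstract's symmetric soft divergence loss is not specified in detail here; a common instantiation of such a loss is the symmetrized KL divergence between temperature-softened class distributions of the two domains. The sketch below is an illustrative assumption along those lines, not the paper's exact formulation (the temperature value and the batch-averaging scheme are hypothetical choices):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a soft class distribution; higher temperature -> softer."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-8):
    """KL divergence KL(p || q), averaged over the batch."""
    return np.mean(np.sum(p * np.log((p + eps) / (q + eps)), axis=-1))

def symmetric_soft_divergence(source_logits, target_logits, temperature=2.0):
    """Symmetrized KL between softened class distributions of the two domains.
    Soft (high-temperature) distributions preserve inter-class relationships,
    which is what a category-wise alignment loss wants to match."""
    p = softmax(source_logits, temperature)
    q = softmax(target_logits, temperature)
    return 0.5 * (kl(p, q) + kl(q, p))
```

Symmetrizing the divergence avoids privileging one domain's distribution as the fixed reference, and the soft (tempered) probabilities carry the inter-class similarity structure the abstract refers to.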
Related papers
- Crucial Semantic Classifier-based Adversarial Learning for Unsupervised
Domain Adaptation [4.6899218408452885]
Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a well-labeled source domain to a related unlabeled target domain.
We propose Crucial Semantic Classifier-based Adversarial Learning (CSCAL) to pay more attention to transferring crucial semantic knowledge.
CSCAL can be effortlessly merged into different UDA methods as a regularizer and dramatically promote their performance.
arXiv Detail & Related papers (2023-02-03T13:06:14Z) - Balancing Discriminability and Transferability for Source-Free Domain
Adaptation [55.143687986324935]
Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations.
The requirement of simultaneous access to labeled source and unlabeled target renders them unsuitable for the challenging source-free DA setting.
We derive novel insights to show that a mixup between original and corresponding translated generic samples enhances the discriminability-transferability trade-off.
arXiv Detail & Related papers (2022-06-16T09:06:22Z) - Learning Unbiased Transferability for Domain Adaptation by Uncertainty
Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance between the amount of annotated data in the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
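The summary above describes estimating per-sample transferability from a domain discriminator's uncertainty. One plausible reading (an assumption, not UTEP's published formulation) is to use the binary entropy of the discriminator's output as the uncertainty measure, since a maximally confused discriminator signals an already domain-invariant feature:

```python
import numpy as np

def discriminator_entropy(d_prob, eps=1e-8):
    """Binary entropy of the domain discriminator's output in [0, 1].
    High entropy means the discriminator cannot tell the sample's domain,
    i.e. the feature is already well aligned / highly transferable."""
    d = np.clip(np.asarray(d_prob, dtype=float), eps, 1.0 - eps)
    return -(d * np.log(d) + (1.0 - d) * np.log(1.0 - d))

def transferability_weights(d_prob):
    """Normalize per-sample entropies into weights summing to 1, so that
    a (hypothetical) weighted adversarial loss emphasizes reliable samples."""
    h = discriminator_entropy(d_prob)
    return h / h.sum()
```

The normalization scheme and the use of raw entropy (rather than, say, a variance-based uncertainty) are illustrative choices made for this sketch.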
arXiv Detail & Related papers (2022-06-02T21:58:54Z) - Contrastive Learning and Self-Training for Unsupervised Domain
Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
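Adapting category-wise centroids across domains, as this summary describes, can be sketched as computing per-class mean features in each domain and penalizing the distance between matching class centroids. This is a minimal illustrative version (the contrastive push-apart term and the temporal-ensemble pseudo-labels are omitted; all function names are hypothetical):

```python
import numpy as np

def class_centroids(features, labels, num_classes):
    """Mean feature vector per class; zero vector for classes absent
    from the batch (a simplification for this sketch)."""
    cents = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            cents[c] = features[mask].mean(axis=0)
    return cents

def centroid_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo, num_classes):
    """Mean squared distance between matching source/target class centroids.
    Target labels are pseudo-labels, since the target domain is unlabeled."""
    cs = class_centroids(src_feats, src_labels, num_classes)
    ct = class_centroids(tgt_feats, tgt_pseudo, num_classes)
    return np.mean(np.sum((cs - ct) ** 2, axis=1))
```

In a full contrastive formulation the loss would also push centroids of different classes apart; only the category-wise pull term is shown here.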
arXiv Detail & Related papers (2021-05-05T11:55:53Z) - Weakly-Supervised Cross-Domain Adaptation for Endoscopic Lesions
Segmentation [79.58311369297635]
We propose a new weakly-supervised lesions transfer framework, which can explore transferable domain-invariant knowledge across different datasets.
A Wasserstein quantified transferability framework is developed to highlight wide-range transferable contextual dependencies.
A novel self-supervised pseudo label generator is designed to equally provide confident pseudo pixel labels for both hard-to-transfer and easy-to-transfer target samples.
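Confidence-guided pseudo-label generation, mentioned both here and in the CSCL abstract above, is commonly implemented by keeping argmax predictions only where the maximum class probability clears a threshold. The sketch below uses a single fixed threshold for simplicity; the paper's generator aims to treat hard-to-transfer and easy-to-transfer samples equally, which in practice usually requires class-wise or adaptive thresholds rather than this fixed one:

```python
import numpy as np

IGNORE = 255  # conventional "ignore" label in semantic segmentation

def confident_pseudo_labels(probs, threshold=0.9):
    """Keep argmax predictions only where the max class probability exceeds
    the threshold; low-confidence pixels are marked IGNORE and excluded
    from the self-training loss. `probs` has shape (..., num_classes)."""
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    labels[conf < threshold] = IGNORE
    return labels
```

A fixed threshold biases pseudo-labels toward easy classes; equalizing coverage across classes is precisely the problem the quoted generator is designed to address.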
arXiv Detail & Related papers (2020-12-08T02:26:03Z) - Unsupervised Transfer Learning with Self-Supervised Remedy [60.315835711438936]
Generalising deep networks to novel domains without manual labels is a challenge for deep learning.
Pre-learned knowledge does not transfer well without making strong assumptions about the learned and the novel domains.
In this work, we aim to learn a discriminative latent space of the unlabelled target data in a novel domain by knowledge transfer from labelled related domains.
arXiv Detail & Related papers (2020-06-08T16:42:17Z) - Continuous Transfer Learning with Label-informed Distribution Alignment [42.34180707803632]
We study a novel continuous transfer learning setting with a time evolving target domain.
One major challenge associated with continuous transfer learning is the potential occurrence of negative transfer.
We propose a generic adversarial Variational Auto-encoder framework named TransLATE.
arXiv Detail & Related papers (2020-06-05T04:44:58Z) - Learning transferable and discriminative features for unsupervised
domain adaptation [6.37626180021317]
Unsupervised domain adaptation is able to overcome this challenge by transferring knowledge from a labeled source domain to an unlabeled target domain.
In this paper, a novel method called Learning TransFerable and Discriminative Features for unsupervised domain adaptation (TLearning) is proposed to optimize these two objectives simultaneously.
Comprehensive experiments are conducted on five real-world datasets and the results verify the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-03-26T03:15:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.