Deep Subdomain Adaptation Network for Image Classification
- URL: http://arxiv.org/abs/2106.09388v1
- Date: Thu, 17 Jun 2021 11:07:21 GMT
- Title: Deep Subdomain Adaptation Network for Image Classification
- Authors: Yongchun Zhu, Fuzhen Zhuang, Jindong Wang, Guolin Ke, Jingwu Chen,
Jiang Bian, Hui Xiong and Qing He
- Abstract summary: Deep Subdomain Adaptation Network (DSAN) learns a transfer network by aligning the relevant subdomain distributions of domain-specific layer activations.
DSAN is simple yet effective: it requires no adversarial training and converges quickly.
Experiments demonstrate remarkable results on both object recognition tasks and digit classification tasks.
- Score: 32.58984565281493
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For a target task where labeled data is unavailable, domain adaptation can
transfer a learner from a different source domain. Previous deep domain
adaptation methods mainly learn a global domain shift, i.e., they align the
global source and target distributions without considering the relationships
between subdomains of the same category across domains, which misses
fine-grained information and leads to unsatisfactory transfer performance.
Recently, increasing attention has been paid to subdomain adaptation, which
focuses on accurately aligning the distributions of the relevant subdomains.
However, most such approaches are adversarial methods that combine several
loss functions and converge slowly. Motivated by this, we present Deep
Subdomain Adaptation Network (DSAN), which learns a transfer network by
aligning the relevant subdomain distributions of domain-specific layer
activations across domains based on a local maximum mean discrepancy (LMMD).
DSAN is simple yet effective: it requires no adversarial training and
converges fast. Adaptation can be achieved easily with most feed-forward
network models by extending them with the LMMD loss, which can be trained
efficiently via back-propagation. Experiments demonstrate that DSAN achieves
remarkable results on both object recognition and digit classification tasks.
Our code will be available at:
https://github.com/easezyc/deep-transfer-learning
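The abstract describes LMMD as a class-weighted variant of maximum mean discrepancy: for each category, source samples are weighted by their hard labels and target samples by their soft predictions, and a per-class MMD is averaged over classes. The following is a minimal NumPy sketch of that idea; the Gaussian kernel, the weighting scheme, and all function names here are illustrative assumptions rather than the authors' implementation (see the linked repository for the official code).

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lmmd(Xs, ys, Xt, pt, num_classes, sigma=1.0):
    """Local MMD: class-weighted MMD averaged over classes.

    Xs, Xt : source/target features, shapes (ns, d) and (nt, d)
    ys     : source hard labels, shape (ns,)
    pt     : target soft predictions, shape (nt, num_classes)
    """
    Kss = gaussian_kernel(Xs, Xs, sigma)
    Ktt = gaussian_kernel(Xt, Xt, sigma)
    Kst = gaussian_kernel(Xs, Xt, sigma)
    loss = 0.0
    for c in range(num_classes):
        ws = (ys == c).astype(float)
        wt = pt[:, c].astype(float)
        if ws.sum() == 0 or wt.sum() == 0:
            continue  # class absent on one side: skip its term
        ws /= ws.sum()  # normalized source weights for class c
        wt /= wt.sum()  # normalized target weights for class c
        # Squared RKHS distance between the weighted class means.
        loss += ws @ Kss @ ws + wt @ Ktt @ wt - 2 * ws @ Kst @ wt
    return loss / num_classes
```

In training, this scalar would be added (with a trade-off weight) to the classification loss on the activations of a domain-specific layer, so gradients flow back through both networks via standard back-propagation.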
Related papers
- Multi-modal Instance Refinement for Cross-domain Action Recognition [25.734898762987083]
Unsupervised cross-domain action recognition aims at adapting the model trained on an existing labeled source domain to a new unlabeled target domain.
We propose a Multi-modal Instance Refinement (MMIR) method based on reinforcement learning to alleviate negative transfer.
Our method outperforms several state-of-the-art baselines for cross-domain action recognition on the EPIC-Kitchens benchmark.
arXiv Detail & Related papers (2023-11-24T05:06:28Z) - From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation aims to acquire and transfer knowledge from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that large-scale deep pre-trained models carry rich knowledge for tackling diverse small-scale downstream tasks.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical-class-space assumption, requiring only that the source class space subsume the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - Multilevel Knowledge Transfer for Cross-Domain Object Detection [26.105283273950942]
Domain shift is a well-known problem in which a model trained on a particular domain (source) does not perform well when exposed to samples from a different domain (target).
In this work, we address the domain shift problem for the object detection task.
Our approach relies on gradually removing the domain shift between the source and the target domains.
arXiv Detail & Related papers (2021-08-02T15:24:40Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - Contrastive Learning and Self-Training for Unsupervised Domain
Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z) - Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation
Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z) - Mutual Learning Network for Multi-Source Domain Adaptation [73.25974539191553]
We propose a novel multi-source domain adaptation method, Mutual Learning Network for Multiple Source Domain Adaptation (ML-MSDA).
Under the framework of mutual learning, the proposed method pairs the target domain with each single source domain to train a conditional adversarial domain adaptation network as a branch network.
The proposed method outperforms the comparison methods and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-03-29T04:31:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.