Deep Adversarial Transition Learning using Cross-Grafted Generative
Stacks
- URL: http://arxiv.org/abs/2009.12028v1
- Date: Fri, 25 Sep 2020 04:25:27 GMT
- Title: Deep Adversarial Transition Learning using Cross-Grafted Generative
Stacks
- Authors: Jinyong Hou, Xuejie Ding, Stephen Cranefield, Jeremiah D. Deng
- Abstract summary: We present a novel "deep adversarial transition learning" (DATL) framework that bridges the domain gap.
We construct variational auto-encoders (VAEs) for the two domains, and form bidirectional transitions by cross-grafting the VAEs' decoder stacks.
Generative adversarial networks (GANs) are employed for domain adaptation, mapping the target domain data to the known label space of the source domain.
- Score: 3.756448228784421
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current deep domain adaptation methods used in computer vision have mainly
focused on learning discriminative and domain-invariant features across
different domains. In this paper, we present a novel "deep adversarial
transition learning" (DATL) framework that bridges the domain gap by projecting
the source and target domains into intermediate, transitional spaces through
the employment of adjustable, cross-grafted generative network stacks and
effective adversarial learning between transitions. Specifically, we construct
variational auto-encoders (VAEs) for the two domains, and form bidirectional
transitions by cross-grafting the VAEs' decoder stacks. Furthermore, generative
adversarial networks (GANs) are employed for domain adaptation, mapping the
target domain data to the known label space of the source domain. The overall
adaptation process hence consists of three phases: feature representation
learning by VAEs, transition generation, and transition alignment by GANs.
Experimental results demonstrate that our method outperforms the
state-of-the-art on a number of unsupervised domain adaptation benchmarks.
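Since the cross-grafted stacks are the distinctive piece of the pipeline, here is a minimal PyTorch sketch of the idea (not the authors' implementation): each domain's VAE decoder is split into two stacks, and a transition is formed by composing one domain's upper decoder stack with the other domain's lower stack. Layer sizes, the stack boundary, the grafting order, and all names are illustrative assumptions; the GAN alignment phase is omitted.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """VAE encoder producing a Gaussian posterior over the latent code."""
    def __init__(self, in_dim=784, hid=256, z_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.mu = nn.Linear(hid, z_dim)
        self.logvar = nn.Linear(hid, z_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class DecoderStack(nn.Module):
    """One layer group of a VAE decoder; these stacks are the grafting unit."""
    def __init__(self, in_dim, out_dim, final=False):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, out_dim),
                                 nn.Sigmoid() if final else nn.ReLU())

    def forward(self, h):
        return self.net(h)

def reparameterize(mu, logvar):
    return mu + (0.5 * logvar).exp() * torch.randn_like(mu)

# One VAE per domain: source (s) and target (t), decoders split in two.
enc_s, enc_t = Encoder(), Encoder()
dec_s1, dec_s2 = DecoderStack(32, 256), DecoderStack(256, 784, final=True)
dec_t1, dec_t2 = DecoderStack(32, 256), DecoderStack(256, 784, final=True)

def transition_st(x_s):
    """Source-to-target transition: source upper stack grafted onto the
    target lower stack (one of the two cross-grafted compositions)."""
    z = reparameterize(*enc_s(x_s))
    return dec_t2(dec_s1(z))

def transition_ts(x_t):
    """Target-to-source transition: the mirror-image grafting."""
    z = reparameterize(*enc_t(x_t))
    return dec_s2(dec_t1(z))

x = torch.rand(8, 784)                      # e.g. flattened 28x28 digits
print(transition_st(x).shape, transition_ts(x).shape)
```

A GAN discriminator would then be trained across the two transition spaces so that target data can be mapped into the source label space.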
Related papers
- Meta-causal Learning for Single Domain Generalization [102.53303707563612]
Single domain generalization aims to learn a model from a single training domain (source domain) and apply it to multiple unseen test domains (target domains).
Existing methods focus on expanding the distribution of the training domain to cover the target domains, but without estimating the domain shift between the source and target domains.
We propose a new learning paradigm, namely simulate-analyze-reduce, which first simulates the domain shift by building an auxiliary domain as the target domain, then learns to analyze the causes of domain shift, and finally learns to reduce the domain shift for model adaptation.
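Read procedurally, the paradigm suggests a three-stage training step; the skeleton below (placeholder callables only) is an assumed structural sketch, not the paper's actual modules.

```python
def meta_causal_step(model, batch_src, simulate, analyze, reduce_shift):
    """One simulate-analyze-reduce step with placeholder stage callables."""
    aux = simulate(batch_src)            # simulate: synthesize an auxiliary
                                         # "target" domain from source data
    causes = analyze(batch_src, aux)     # analyze: infer the factors behind
                                         # the simulated domain shift
    return reduce_shift(model, batch_src, aux, causes)  # reduce: adapt model
```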
arXiv Detail & Related papers (2023-04-07T15:46:38Z)
- CDA: Contrastive-adversarial Domain Adaptation [11.354043674822451]
We propose a two-stage model for domain adaptation called Contrastive-adversarial Domain Adaptation (CDA).
While the adversarial component facilitates domain-level alignment, two-stage contrastive learning exploits class information to achieve higher intra-class compactness across domains.
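As a rough illustration of how the two ingredients might be combined, here is a hedged PyTorch sketch: a domain discriminator for domain-level alignment plus a supervised contrastive term for intra-class compactness. CDA's two-stage scheduling, pseudo-labeling, and loss weighting are assumed away.

```python
import torch
import torch.nn.functional as F
from torch import nn

def domain_adversarial_loss(disc, feat_s, feat_t):
    """Discriminator separates source (1) from target (0); in adversarial
    training the feature extractor is updated to fool it (e.g. via
    alternating updates or a gradient reversal layer)."""
    logits = torch.cat([disc(feat_s), disc(feat_t)]).squeeze(-1)
    labels = torch.cat([torch.ones(len(feat_s)), torch.zeros(len(feat_t))])
    return F.binary_cross_entropy_with_logits(logits, labels)

def supervised_contrastive_loss(feat, labels, tau=0.1):
    """Pulls same-class features together; in the cross-domain stage the
    batch would mix source labels with target pseudo-labels."""
    feat = F.normalize(feat, dim=1)
    sim = feat @ feat.t() / tau
    pos = (labels[:, None] == labels[None, :]).float().fill_diagonal_(0)
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()
    exp = logits.exp() * (1 - torch.eye(len(feat)))     # exclude self
    log_prob = logits - exp.sum(dim=1, keepdim=True).log()
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp(min=1)).mean()

disc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
feat_s, feat_t = torch.randn(16, 128), torch.randn(16, 128)
loss = domain_adversarial_loss(disc, feat_s, feat_t) \
     + supervised_contrastive_loss(feat_s, torch.randint(0, 10, (16,)))
```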
arXiv Detail & Related papers (2023-01-10T07:43:21Z)
- Joint Attention-Driven Domain Fusion and Noise-Tolerant Learning for Multi-Source Domain Adaptation [2.734665397040629]
Multi-source Unsupervised Domain Adaptation transfers knowledge from multiple source domains with labeled data to an unlabeled target domain.
The distribution discrepancy between different domains and the noisy pseudo-labels in the target domain both lead to performance bottlenecks.
We propose an approach that integrates Attention-driven Domain fusion and Noise-Tolerant learning (ADNT) to address the two issues mentioned above.
arXiv Detail & Related papers (2022-08-05T01:08:41Z)
- CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation [44.06904757181245]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a labeled source domain to a different unlabeled target domain.
A fundamental problem for category-level UDA is producing pseudo labels for samples in the target domain.
We design a two-way center-aware labeling algorithm to produce pseudo labels for target samples.
Along with the pseudo labels, a weight-sharing triple-branch transformer framework is proposed to apply self-attention and cross-attention for source/target feature learning and source-target domain alignment.
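A much-simplified sketch of the center-aware part: target samples take the label of their nearest source class center. The paper's two-way scheme and consistency filtering are reduced here to a single direction plus a confidence threshold; all names and the threshold are assumptions.

```python
import torch
import torch.nn.functional as F

def center_aware_pseudo_labels(feat_s, y_s, feat_t, num_classes, thresh=0.5):
    """Label each target feature by its nearest source class center
    (cosine similarity); low-confidence assignments are masked out."""
    feat_s, feat_t = F.normalize(feat_s, dim=1), F.normalize(feat_t, dim=1)
    centers = torch.stack([                 # assumes every class appears
        F.normalize(feat_s[y_s == c].mean(0), dim=0)   # in the source batch
        for c in range(num_classes)])       # (C, D) class centers
    sim = feat_t @ centers.t()              # (Nt, C) similarities
    conf, pseudo = sim.max(dim=1)
    return pseudo, conf > thresh            # stand-in for two-way filtering

feat_s, y_s = torch.randn(64, 128), torch.randint(0, 10, (64,))
pseudo, keep = center_aware_pseudo_labels(feat_s, y_s, torch.randn(32, 128), 10)
```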
arXiv Detail & Related papers (2021-09-13T17:59:07Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experimental results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
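AFAN's intermediate-domain image generation is hard to condense, but domain-adversarial training is commonly implemented with a gradient reversal layer (GRL). The sketch below shows that generic mechanism, not AFAN's actual detection architecture.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; backward multiplies gradients by -lambda."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class DomainClassifier(nn.Module):
    def __init__(self, dim=256, lam=1.0):
        super().__init__()
        self.lam = lam
        self.head = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, feat):
        # Reversed gradients push the feature extractor toward
        # domain-invariant features while the head learns to discriminate.
        return self.head(GradReverse.apply(feat, self.lam))

clf = DomainClassifier()
feat = torch.randn(4, 256, requires_grad=True)
clf(feat).sum().backward()        # gradients reaching `feat` are negated
```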
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes that of the target domain.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A^2KT) to align the relevant categories across the two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Unsupervised Cross-domain Image Classification by Distance Metric Guided Feature Alignment [11.74643883335152]
Unsupervised domain adaptation is a promising approach that transfers knowledge from a source domain to a target domain.
We propose distance metric guided feature alignment (MetFA) to extract discriminative as well as domain-invariant features on both source and target domains.
Our model integrates class distribution alignment to transfer semantic knowledge from a source domain to a target domain.
arXiv Detail & Related papers (2020-08-19T13:36:57Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
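To indicate what a domain conditioned channel attention mechanism can look like mechanically, here is a squeeze-and-excitation-style sketch with one excitation branch per domain, so source and target data can excite different convolutional channels. Sizes and the exact branching are assumptions, not DCAN's published design.

```python
import torch
from torch import nn

class DomainConditionedAttention(nn.Module):
    """SE-style channel gate with a separate excitation branch per domain."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global context
        self.excite = nn.ModuleDict({
            d: nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())
            for d in ("source", "target")})

    def forward(self, x, domain):
        b, c, _, _ = x.shape
        w = self.excite[domain](self.pool(x).view(b, c))
        return x * w.view(b, c, 1, 1)                # per-domain reweighting

attn = DomainConditionedAttention(64)
x = torch.randn(2, 64, 8, 8)
print(attn(x, "source").shape)                       # torch.Size([2, 64, 8, 8])
```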
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
- Unsupervised Domain Adaptation with Progressive Domain Augmentation [34.887690018011675]
We propose a novel unsupervised domain adaptation method based on progressive domain augmentation.
The proposed method generates virtual intermediate domains via domain interpolation, progressively augments the source domain, and bridges the source-target domain divergence.
We conduct experiments on multiple domain adaptation tasks and the results show the proposed method achieves state-of-the-art performance.
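A minimal sketch of generating virtual intermediate domains by interpolation under a progressive schedule; the pixel-level mixing operator and the linear schedule are assumed stand-ins for the paper's construction.

```python
import torch

def intermediate_domain(x_s, x_t, alpha):
    """Blend source samples toward randomly paired target samples;
    alpha = 0 stays at the source, alpha = 1 reaches the target pairing."""
    perm = torch.randperm(len(x_t))[:len(x_s)]
    return (1 - alpha) * x_s + alpha * x_t[perm]

x_s, x_t = torch.rand(8, 3, 32, 32), torch.rand(8, 3, 32, 32)
for epoch in range(10):
    alpha = epoch / 9                     # progressive: source -> target
    x_mix = intermediate_domain(x_s, x_t, alpha)
    # ... train the adaptation model on x_mix at this stage ...
```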
arXiv Detail & Related papers (2020-04-03T18:45:39Z)
- MADAN: Multi-source Adversarial Domain Aggregation Network for Domain Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.