Dual Mixup Regularized Learning for Adversarial Domain Adaptation
- URL: http://arxiv.org/abs/2007.03141v2
- Date: Thu, 16 Jul 2020 22:01:18 GMT
- Title: Dual Mixup Regularized Learning for Adversarial Domain Adaptation
- Authors: Yuan Wu, Diana Inkpen and Ahmed El-Roby
- Abstract summary: We propose a dual mixup regularized learning (DMRL) method for unsupervised domain adaptation.
DMRL guides the classifier toward consistent predictions on in-between samples, and enriches the intrinsic structures of the latent space.
A series of empirical studies on four domain adaptation benchmarks demonstrates that our approach achieves state-of-the-art performance.
- Score: 19.393393465837377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in unsupervised domain adaptation (UDA) rely on
adversarial learning to disentangle the explanatory and transferable features
for domain adaptation. However, existing methods have two issues. First, the
discriminability of the latent space cannot be fully guaranteed without
considering class-aware information in the target domain. Second, samples
from the source and target domains alone are not sufficient for
domain-invariant feature extraction in the latent space. To alleviate these
issues, we propose a dual mixup regularized learning (DMRL) method for UDA,
which not only guides the classifier toward consistent predictions on
in-between samples, but also enriches the intrinsic structures of the latent
space. DMRL jointly conducts category and domain mixup regularizations at the
pixel level to improve the effectiveness of models. A series of empirical
studies on four domain adaptation benchmarks demonstrates that our approach
achieves state-of-the-art performance.
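The dual mixup idea above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it only shows what pixel-level mixup looks like in the two roles the abstract names: category mixup (mixing labeled source images and their one-hot labels) and domain mixup (mixing a source and a target image together with their domain labels). All variable names and shapes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x_a, x_b, y_a, y_b, alpha=0.2):
    # Draw a mixing coefficient from Beta(alpha, alpha) and take a
    # pixel-level convex combination of the inputs and of their labels.
    lam = rng.beta(alpha, alpha)
    return lam * x_a + (1 - lam) * x_b, lam * y_a + (1 - lam) * y_b

# Category mixup: two source images with one-hot class labels (10 classes).
x1, x2 = rng.random((32, 32, 3)), rng.random((32, 32, 3))
y1, y2 = np.eye(10)[3], np.eye(10)[7]
x_mix, y_mix = mixup(x1, x2, y1, y2)

# Domain mixup: a source image mixed with a target image; the soft
# domain label (1 = source, 0 = target) would supervise a discriminator.
xt = rng.random((32, 32, 3))
d_src, d_tgt = np.array([1.0]), np.array([0.0])
x_dom, d_dom = mixup(x1, xt, d_src, d_tgt)
```

In training, `(x_mix, y_mix)` would feed the classifier's consistency loss and `(x_dom, d_dom)` the domain discriminator, encouraging linear behavior between samples and between domains.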
Related papers
- CDA: Contrastive-adversarial Domain Adaptation [11.354043674822451]
We propose a two-stage model for domain adaptation called Contrastive-adversarial Domain Adaptation (CDA).
While the adversarial component facilitates domain-level alignment, two-stage contrastive learning exploits class information to achieve higher intra-class compactness across domains.
arXiv Detail & Related papers (2023-01-10T07:43:21Z) - Making the Best of Both Worlds: A Domain-Oriented Transformer for Unsupervised Domain Adaptation [31.150256154504696]
Unsupervised Domain Adaptation (UDA) has propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains.
Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction.
We propose to simultaneously conduct feature alignment in two individual spaces focusing on different domains, and create for each space a domain-oriented classifier.
arXiv Detail & Related papers (2022-08-02T01:38:37Z) - Generative Domain Adaptation for Face Anti-Spoofing [38.12738183385737]
Face anti-spoofing approaches based on unsupervised domain adaptation (UDA) have drawn growing attention due to their promising performance in target scenarios.
Most existing UDA FAS methods typically fit the trained models to the target domain via aligning the distribution of semantic high-level features.
We propose a novel perspective of UDA FAS that directly fits the target data to the models, stylizes the target data to the source-domain style via image translation, and further feeds the stylized data into the well-trained source model for classification.
arXiv Detail & Related papers (2022-07-20T16:24:57Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z) - Joint Distribution Alignment via Adversarial Learning for Domain Adaptive Object Detection [11.262560426527818]
Unsupervised domain adaptive object detection aims to adapt a well-trained detector from its original source domain with rich labeled data to a new target domain with unlabeled data.
Recently, mainstream approaches perform this task through adversarial learning, yet still suffer from two limitations.
We propose a joint adaptive detection framework (JADF) to address the above challenges.
arXiv Detail & Related papers (2021-09-19T00:27:08Z) - Generalizable Representation Learning for Mixture Domain Face Anti-Spoofing [53.82826073959756]
Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention due to their robustness in unseen scenarios.
To overcome the limitations of existing methods, we propose domain dynamic adjustment meta-learning (D2AM), which does not use domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z) - On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation in a sense of practical deployment.
Only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
arXiv Detail & Related papers (2021-04-10T02:21:09Z) - Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes that of the target domain.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z) - Class-Incremental Domain Adaptation [56.72064953133832]
We introduce a practical Domain Adaptation (DA) paradigm called Class-Incremental Domain Adaptation (CIDA)
Existing DA methods tackle domain-shift but are unsuitable for learning novel target-domain classes.
Our approach yields superior performance as compared to both DA and CI methods in the CIDA paradigm.
arXiv Detail & Related papers (2020-08-04T07:55:03Z) - Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation transfers knowledge to an unlabeled target domain by relying on well-established source domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space can destroy the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.