Vicinal and categorical domain adaptation
- URL: http://arxiv.org/abs/2103.03460v1
- Date: Fri, 5 Mar 2021 03:47:24 GMT
- Title: Vicinal and categorical domain adaptation
- Authors: Hui Tang and Kui Jia
- Abstract summary: We propose novel losses of adversarial training at both domain and category levels.
We propose a concept of vicinal domains whose instances are produced by a convex combination of pairs of instances respectively from the two domains.
- Score: 43.707303372718336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation aims to learn a task classifier that performs
well on the unlabeled target domain, by utilizing the labeled source domain.
Encouraging results have been achieved by learning domain-invariant deep features
via domain-adversarial training. However, the parallel design of task and
domain classifiers limits the ability to achieve finer, category-level domain
alignment. To promote categorical domain adaptation (CatDA), based on a joint
category-domain classifier, we propose novel losses of adversarial training at
both domain and category levels. Since the joint classifier can be regarded as
a concatenation of individual task classifiers respectively for the two
domains, our design principle is to enforce consistency of category predictions
between the two task classifiers. Moreover, we propose a concept of vicinal
domains whose instances are produced by a convex combination of pairs of
instances respectively from the two domains. Intuitively, alignment of the
possibly infinite number of vicinal domains enhances that of the original domains.
We propose novel adversarial losses for vicinal domain adaptation (VicDA) based
on CatDA, leading to Vicinal and Categorical Domain Adaptation (ViCatDA). We
also propose Target Discriminative Structure Recovery (TDSR) to recover the
intrinsic target discrimination damaged by adversarial feature alignment. We
also analyze the principles underlying the ability of our key designs to align
the joint distributions. Extensive experiments on several benchmark datasets
demonstrate that we achieve the new state of the art.
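The two key designs in the abstract are concrete enough to sketch: vicinal instances are convex combinations of source/target pairs, and the joint category-domain classifier over K categories can be viewed as two concatenated K-way task classifiers whose category predictions are encouraged to agree. Below is a minimal NumPy sketch, not the authors' code; drawing the mixing coefficient from a Beta distribution is an assumption borrowed from mixup-style training, since the abstract only specifies a convex combination.

```python
import numpy as np


def make_vicinal_batch(xs, xt, alpha=1.0, rng=None):
    """Build a vicinal-domain batch by convexly combining source/target pairs.

    Each vicinal instance is lam * x_s + (1 - lam) * x_t. Sampling lam (here
    from Beta(alpha, alpha), an assumed choice) sweeps a continuum of vicinal
    domains between the two original domains.
    """
    rng = rng or np.random.default_rng(0)
    n = min(len(xs), len(xt))
    lam = rng.beta(alpha, alpha, size=(n, 1))
    x_vic = lam * xs[:n] + (1.0 - lam) * xt[:n]
    return x_vic, lam.squeeze(1)


def split_joint_predictions(logits_2k):
    """Split a joint category-domain classifier's 2K logits into two K-way
    task classifiers (source half, target half) and return their softmaxes.

    A consistency loss between the two returned distributions would implement
    the "agreement between the two task classifiers" principle.
    """
    k = logits_2k.shape[1] // 2

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    return softmax(logits_2k[:, :k]), softmax(logits_2k[:, k:])


# Toy demo: source features all ones, target features all zeros (batch 4, dim 3).
xs, xt = np.ones((4, 3)), np.zeros((4, 3))
x_vic, lam = make_vicinal_batch(xs, xt)
# With xs=1 and xt=0, each vicinal feature equals its mixing coefficient.
assert np.allclose(x_vic, lam[:, None])
```

A full method would additionally attach adversarial losses over these vicinal batches; the sketch only shows how the inputs and the classifier split could be formed.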
Related papers
- CDA: Contrastive-adversarial Domain Adaptation [11.354043674822451]
We propose a two-stage model for domain adaptation called Contrastive-adversarial Domain Adaptation (CDA).
While the adversarial component facilitates domain-level alignment, two-stage contrastive learning exploits class information to achieve higher intra-class compactness across domains.
arXiv Detail & Related papers (2023-01-10T07:43:21Z)
- Making the Best of Both Worlds: A Domain-Oriented Transformer for Unsupervised Domain Adaptation [31.150256154504696]
Unsupervised Domain Adaptation (UDA) has propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains.
Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction.
We propose to simultaneously conduct feature alignment in two individual spaces focusing on different domains, and create for each space a domain-oriented classifier.
arXiv Detail & Related papers (2022-08-02T01:38:37Z)
- Birds of A Feather Flock Together: Category-Divergence Guidance for Domain Adaptive Segmentation [35.63920597305474]
Unsupervised domain adaptation (UDA) aims to enhance the generalization capability of a model from a source domain to a target domain.
In this work, we propose an Inter-class Separation and Intra-class Aggregation (ISIA) mechanism.
By measuring the alignment complexity of each category, we design an Adaptive-weighted Instance Matching (AIM) strategy to further optimize instance-level adaptation.
arXiv Detail & Related papers (2022-04-05T11:17:19Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or within the estimated categories.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts adaptation performance in semantic segmentation, outperforming the state of the art on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- Adversarial Consistent Learning on Partial Domain Adaptation of PlantCLEF 2020 Challenge [26.016647703500883]
We develop adversarial consistent learning (ACL) in a unified deep architecture for partial domain adaptation.
It consists of source domain classification loss, adversarial learning loss, and feature consistency loss.
We identify the categories shared by the two domains by down-weighting the irrelevant categories in the source domain.
arXiv Detail & Related papers (2020-09-19T19:57:41Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with the realistic and challenging problem where the source domain label space subsumes that of the target domain.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A²KT) to align the relevant categories across the two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Universal Domain Adaptation through Self Supervision [75.04598763659969]
Unsupervised domain adaptation methods assume that all source categories are present in the target domain.
We propose Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE) to handle arbitrary category shift.
We show through extensive experiments that DANCE outperforms baselines across open-set, open-partial and partial domain adaptation settings.
arXiv Detail & Related papers (2020-02-19T01:26:11Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation adapts to the unlabeled target domain by relying on well-established source domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space can destroy the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.