Dynamic Instance Domain Adaptation
- URL: http://arxiv.org/abs/2203.05028v1
- Date: Wed, 9 Mar 2022 20:05:54 GMT
- Title: Dynamic Instance Domain Adaptation
- Authors: Zhongying Deng, Kaiyang Zhou, Da Li, Junjun He, Yi-Zhe Song, Tao Xiang
- Abstract summary: Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
- Score: 109.53575039217094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing studies on unsupervised domain adaptation (UDA) assume that
each domain's training samples come with domain labels (e.g., painting, photo).
Samples from each domain are assumed to follow the same distribution and the
domain labels are exploited to learn domain-invariant features via feature
alignment. However, such an assumption often does not hold true -- there often
exist numerous finer-grained domains (e.g., dozens of modern painting styles
have been developed, each differing dramatically from those of the classic
styles). Therefore, forcing feature distribution alignment across each
artificially-defined and coarse-grained domain can be ineffective. In this
paper, we address both single-source and multi-source UDA from a completely
different perspective, which is to view each instance as a fine domain. Feature
alignment across domains is thus redundant. Instead, we propose to perform
dynamic instance domain adaptation (DIDA). Concretely, a dynamic neural network
with adaptive convolutional kernels is developed to generate instance-adaptive
residuals to adapt domain-agnostic deep features to each individual instance.
This enables a shared classifier to be applied to both source and target domain
data without relying on any domain annotation. Further, instead of imposing
intricate feature alignment losses, we adopt a simple semi-supervised learning
paradigm using only a cross-entropy loss for both labeled source and pseudo
labeled target data. Our model, dubbed DIDA-Net, achieves state-of-the-art
performance on several commonly used single-source and multi-source UDA
datasets including Digits, Office-Home, DomainNet, Digit-Five, and PACS.
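The core idea in the abstract — a controller that looks at one instance and predicts a convolution kernel, whose output is added back as an instance-specific residual — can be sketched in plain NumPy. The controller MLP, its shapes, and the use of a 1x1 dynamic kernel here are illustrative assumptions, not the published DIDA-Net architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def instance_adaptive_residual(x, w1, w2):
    """Sketch of a dynamic instance-adaptation block (hypothetical shapes).

    x  : (C, H, W) domain-agnostic feature map for one instance
    w1 : (C, C)    hidden layer of a small controller MLP
    w2 : (C*C, C)  output layer that emits a 1x1 dynamic kernel

    The controller pools the instance's features, predicts a 1x1
    convolution kernel for that instance, and the kernel's output is
    added back to x as an instance-adaptive residual.
    """
    C, H, W = x.shape
    ctx = x.mean(axis=(1, 2))               # (C,) global average pool
    hidden = np.maximum(w1 @ ctx, 0.0)      # ReLU controller hidden layer
    kernel = (w2 @ hidden).reshape(C, C)    # instance-specific 1x1 kernel
    residual = np.einsum('oc,chw->ohw', kernel, x)
    return x + residual                     # adapted features, same shape

C, H, W = 8, 4, 4
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C, C)) * 0.1
w2 = rng.standard_normal((C * C, C)) * 0.1
y = instance_adaptive_residual(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

Because the kernel is a function of the instance's own pooled features, two different inputs are adapted by two different kernels, which is what lets a single shared classifier serve every "instance domain" without domain labels.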
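The training paradigm described above — plain cross-entropy over labeled source data plus pseudo-labeled target data, with no alignment losses — can be sketched as follows. The confidence threshold and the argmax pseudo-labeling rule are hypothetical details not specified in this abstract.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def pseudo_label_ce(source_logits, source_labels, target_logits, thresh=0.9):
    """Cross-entropy on labeled source plus confidently pseudo-labeled target.

    Target samples whose max softmax probability clears `thresh` receive
    their argmax class as a pseudo-label; the rest are ignored this step.
    """
    p_src = softmax(source_logits)
    losses = -np.log(p_src[np.arange(len(source_labels)), source_labels])

    p_tgt = softmax(target_logits)
    conf = p_tgt.max(axis=1)
    pseudo = p_tgt.argmax(axis=1)
    keep = conf >= thresh
    if keep.any():
        losses = np.concatenate([losses, -np.log(p_tgt[keep, pseudo[keep]])])
    return losses.mean()

# Toy batch: two labeled source samples, two unlabeled target samples.
s_logits = np.array([[5.0, 0.0], [0.0, 5.0]])
s_labels = np.array([0, 1])
t_logits = np.array([[10.0, 0.0], [0.2, 0.0]])  # only the first is confident
loss = pseudo_label_ce(s_logits, s_labels, t_logits)
print(loss)
```

The low-confidence target sample is simply dropped for that step, so no intricate alignment objective is needed: the same cross-entropy drives both the source and target sides.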
Related papers
- Making the Best of Both Worlds: A Domain-Oriented Transformer for Unsupervised Domain Adaptation [31.150256154504696]
Unsupervised Domain Adaptation (UDA) has propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains.
Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction.
We propose to simultaneously conduct feature alignment in two individual spaces focusing on different domains, and create for each space a domain-oriented classifier.
arXiv Detail & Related papers (2022-08-02T01:38:37Z)
- Discovering Domain Disentanglement for Generalized Multi-source Domain Adaptation [48.02978226737235]
A typical multi-source domain adaptation (MSDA) approach aims to transfer knowledge learned from a set of labeled source domains, to an unlabeled target domain.
We propose a variational domain disentanglement (VDD) framework, which decomposes the domain representations and semantic features for each instance by encouraging dimension-wise independence.
arXiv Detail & Related papers (2022-07-11T04:33:08Z)
- Aligning Domain-specific Distribution and Classifier for Cross-domain Classification from Multiple Sources [25.204055330850164]
We propose a new framework with two alignment stages for Unsupervised Domain Adaptation.
Our method can achieve remarkable results on popular benchmark datasets for image classification.
arXiv Detail & Related papers (2022-01-04T06:35:11Z)
- Domain Consistency Regularization for Unsupervised Multi-source Domain Adaptive Classification [57.92800886719651]
Deep learning-based multi-source unsupervised domain adaptation (MUDA) has been actively studied in recent years.
Domain shift in MUDA exists not only between the source and target domains but also among the multiple source domains.
We propose an end-to-end trainable network that exploits domain Consistency Regularization for unsupervised Multi-source domain Adaptive classification.
arXiv Detail & Related papers (2021-06-16T07:29:27Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Dynamic Transfer for Multi-Source Domain Adaptation [82.54405157719641]
We present dynamic transfer to address domain conflicts, where the model parameters are adapted to samples.
It breaks down source domain barriers and turns multi-source domains into a single-source domain.
Experimental results show that, without using domain labels, our dynamic transfer outperforms the state-of-the-art method by more than 3%.
arXiv Detail & Related papers (2021-03-19T01:22:12Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts performance of target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.