From Big to Small: Adaptive Learning to Partial-Set Domains
- URL: http://arxiv.org/abs/2203.07375v1
- Date: Mon, 14 Mar 2022 07:02:45 GMT
- Title: From Big to Small: Adaptive Learning to Partial-Set Domains
- Authors: Zhangjie Cao, Kaichao You, Ziyang Zhang, Jianmin Wang, Mingsheng Long
- Abstract summary: Domain adaptation aims at knowledge acquisition and dissemination from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that deep pre-trained models of large scale endow rich knowledge to tackle diverse downstream tasks of small scale.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical class space assumption so that the source class space need only subsume the target class space.
- Score: 94.92635970450578
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain adaptation aims at knowledge acquisition and dissemination from a
labeled source domain to an unlabeled target domain under distribution shift.
Still, the common requirement of identical class space shared across domains
hinders applications of domain adaptation to partial-set domains. Recent
advances show that deep pre-trained models of large scale endow rich knowledge
to tackle diverse downstream tasks of small scale. Thus, there is a strong
incentive to adapt models from large-scale domains to small-scale domains. This
paper introduces Partial Domain Adaptation (PDA), a learning paradigm that
relaxes the identical class space assumption so that the source class space need
only subsume the target class space. First, we present a theoretical analysis of
partial domain adaptation, which uncovers the importance of estimating the
transferable probability of each class and each instance across domains. Then,
we propose Selective Adversarial Network (SAN and SAN++) with a bi-level
selection strategy and an adversarial adaptation mechanism. The bi-level
selection strategy up-weights each class and each instance simultaneously for
source supervised training, target self-training, and source-target adversarial
adaptation through the transferable probability estimated alternately by the
model. Experiments on standard partial-set datasets and more challenging tasks
with superclasses show that SAN++ outperforms several domain adaptation
methods.
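The class-level weighting described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the averaging rule, normalization, and array shapes are assumptions. The idea is to estimate a per-class transferable probability by averaging the classifier's softmax predictions over unlabeled target data, then use it to up-weight source classes that appear in the target domain.

```python
import numpy as np

def class_transferability(target_probs):
    """Estimate per-class transferable probability.

    target_probs: (n_target, n_source_classes) array of softmax outputs
    on unlabeled target samples. Source-only classes receive little
    probability mass on target data, so their weight stays low.
    """
    w = target_probs.mean(axis=0)
    return w / w.max()  # normalize so the largest weight is 1

def weighted_source_loss(ce_losses, labels, class_weights):
    """Reweight per-sample cross-entropy by the label's class weight."""
    return float(np.mean(class_weights[labels] * ce_losses))

# Toy example: 3 source classes, but class 2 is absent from the target
# domain, so the classifier rarely predicts it on target samples.
rng = np.random.default_rng(0)
target_probs = rng.dirichlet([5.0, 5.0, 0.2], size=100)
w = class_transferability(target_probs)
assert w[2] < w[0] and w[2] < w[1]  # the outlier class is down-weighted
```

SAN++ additionally maintains instance-level weights and alternates the estimation with adversarial adaptation; this sketch covers only the class-level half of the bi-level strategy.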
Related papers
- Contrastive Adversarial Training for Unsupervised Domain Adaptation [2.432037584128226]
Domain adversarial training has been successfully adopted for various domain adaptation tasks.
Large models make adversarial training easily biased toward the source domain and hard to adapt to the target domain.
We propose a contrastive adversarial training (CAT) approach that leverages labeled source domain samples to reinforce and regulate feature generation for the target domain.
arXiv Detail & Related papers (2024-07-17T17:59:21Z)
- Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show AdaODM stably improves the generalization capacity on unseen domains.
arXiv Detail & Related papers (2022-08-03T11:51:11Z)
- MemSAC: Memory Augmented Sample Consistency for Large Scale Unsupervised Domain Adaptation [71.4942277262067]
We propose MemSAC, which exploits sample level similarity across source and target domains to achieve discriminative transfer.
We provide in-depth analysis and insights into the effectiveness of MemSAC.
arXiv Detail & Related papers (2022-07-25T17:55:28Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem where the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Adversarial Network with Multiple Classifiers for Open Set Domain Adaptation [9.251407403582501]
This paper focuses on the open set domain adaptation setting where the target domain has both a private ('unknown classes') label space and a shared ('known classes') label space.
Prevalent distribution-matching domain adaptation methods are inadequate in such a setting.
We propose a novel adversarial domain adaptation model with multiple auxiliary classifiers.
arXiv Detail & Related papers (2020-07-01T11:23:07Z)
- Deep Residual Correction Network for Partial Domain Adaptation [79.27753273651747]
Deep domain adaptation methods have achieved appealing performance by learning transferable representations from a well-labeled source domain to a different but related unlabeled target domain.
This paper proposes an efficiently implemented Deep Residual Correction Network (DRCN).
Comprehensive experiments on partial, traditional and fine-grained cross-domain visual recognition demonstrate that DRCN is superior to the competitive deep domain adaptation approaches.
arXiv Detail & Related papers (2020-04-10T06:07:16Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims at adapting a model trained on a well-labeled source domain to an unlabeled target domain drawn from a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.