Class Conditional Alignment for Partial Domain Adaptation
- URL: http://arxiv.org/abs/2003.06722v1
- Date: Sat, 14 Mar 2020 23:51:57 GMT
- Title: Class Conditional Alignment for Partial Domain Adaptation
- Authors: Mohsen Kheirandishfard, Fariba Zohrizadeh, Farhad Kamangar
- Abstract summary: Adversarial adaptation models have demonstrated significant progress towards transferring knowledge from a labeled source dataset to an unlabeled target dataset.
PDA investigates the scenarios in which the source domain is large and diverse, and the target label space is a subset of the source label space.
We propose a multi-class adversarial architecture for PDA.
- Score: 10.506584969668792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial adaptation models have demonstrated significant progress towards
transferring knowledge from a labeled source dataset to an unlabeled target
dataset. Partial domain adaptation (PDA) investigates the scenarios in which
the source domain is large and diverse, and the target label space is a subset
of the source label space. The main purpose of PDA is to identify the shared
classes between the domains and promote learning transferable knowledge from
these classes. In this paper, we propose a multi-class adversarial architecture
for PDA. The proposed approach jointly aligns the marginal and
class-conditional distributions in the shared label space by minimaxing a novel
multi-class adversarial loss function. Furthermore, we incorporate effective
regularization terms to encourage selecting the most relevant subset of source
domain classes. In the absence of target labels, the proposed approach is able
to effectively learn domain-invariant feature representations, which in turn
can enhance the classification performance in the target domain. Comprehensive
experiments on three benchmark datasets Office-31, Office-Home, and
Caltech-Office corroborate the effectiveness of the proposed approach in
addressing different partial transfer learning tasks.
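The class-conditional alignment described above can be sketched as a weighted multi-class adversarial loss. The following is a hypothetical NumPy illustration, not the authors' implementation: the discriminator is assumed to emit one domain logit per class, and each per-class binary cross-entropy term is weighted by the classifier's (pseudo-)class probabilities, so alignment concentrates on classes the target data is likely to contain. Function names and the exact weighting scheme are assumptions for illustration.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multiclass_adversarial_loss(disc_logits_src, disc_logits_tgt,
                                cls_probs_src, cls_probs_tgt):
    """Hypothetical class-conditional adversarial loss (sketch).

    disc_logits_*: (N, K) per-class domain logits from a discriminator.
    cls_probs_*:   (N, K) class probabilities from the label classifier
                   (pseudo-labels on the target side).

    Each class has its own binary domain term; weighting by class
    probabilities downweights source-only classes, approximating the
    class-conditional alignment in the shared label space.
    """
    p_src = 1.0 / (1.0 + np.exp(-disc_logits_src))  # P(domain = source)
    p_tgt = 1.0 / (1.0 + np.exp(-disc_logits_tgt))
    eps = 1e-8
    # Source samples carry domain label 1, target samples label 0.
    loss_src = -(cls_probs_src * np.log(p_src + eps)).sum(axis=1).mean()
    loss_tgt = -(cls_probs_tgt * np.log(1.0 - p_tgt + eps)).sum(axis=1).mean()
    return loss_src + loss_tgt
```

In a minimax setup, the discriminator would minimize this loss while the feature extractor maximizes it (e.g. via a gradient-reversal layer), driving the per-class feature distributions of the two domains together.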
Related papers
- Reducing Source-Private Bias in Extreme Universal Domain Adaptation [11.875619863954238]
Universal Domain Adaptation (UniDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We show that state-of-the-art methods struggle when the source domain has significantly more non-overlapping classes than overlapping ones.
We propose using self-supervised learning to preserve the structure of the target data.
(arXiv 2024-10-15)
- From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation targets at knowledge acquisition and dissemination from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that deep pre-trained models of large scale endow rich knowledge to tackle diverse downstream tasks of small scale.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical class space assumption to that the source class space subsumes the target class space.
(arXiv 2022-03-14)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
(arXiv 2021-04-03)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$2$KT) to align the relevant categories across two domains.
(arXiv 2020-08-27)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
(arXiv 2020-08-26)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
(arXiv 2020-08-25)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) targets at adapting a model trained over the well-labeled source domain to the unlabeled target domain lying in different distributions.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
(arXiv 2020-03-04)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.