Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation
- URL: http://arxiv.org/abs/2008.11873v1
- Date: Thu, 27 Aug 2020 00:53:43 GMT
- Title: Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation
- Authors: Taotao Jing, Haifeng Xia, Zhengming Ding
- Abstract summary: Partial domain adaptation (PDA) deals with the realistic and challenging setting in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$^2$KT) to align the relevant categories across the two domains.
- Score: 66.74638960925854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial domain adaptation (PDA) has attracted increasing attention as it
deals with the realistic and challenging setting in which the source domain label
space subsumes the target domain label space. Most conventional domain adaptation (DA) efforts
concentrate on learning domain-invariant features to mitigate the distribution
disparity across domains. However, for PDA it is crucial to explicitly alleviate
the negative influence caused by the irrelevant source domain categories.
In this work, we propose an Adaptively-Accumulated Knowledge Transfer framework
(A$^2$KT) to align the relevant categories across two domains for effective
domain adaptation. Specifically, an adaptively-accumulated mechanism is
explored to gradually filter out the most confident target samples and their
corresponding source categories, promoting positive transfer with more
knowledge across two domains. Moreover, a dual distinct classifier architecture
consisting of a prototype classifier and a multilayer perceptron classifier is
built to capture intrinsic data distribution knowledge across domains from
various perspectives. By maximizing the inter-class center-wise discrepancy and
minimizing the intra-class sample-wise compactness, the proposed model is able
to obtain more domain-invariant and task-specific discriminative
representations of the shared categories. Comprehensive experiments on
several partial domain adaptation benchmarks demonstrate the effectiveness of
our proposed model, compared with the state-of-the-art PDA methods.
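
The abstract describes the method only at a high level. As a rough, minimal sketch of the two ideas it names — accumulating only the most confident target samples (and the source classes they vote for), and pairing a prototype classifier with an MLP classifier under an inter-class center-wise discrepancy / intra-class compactness objective — consider the PyTorch-style code below. All names, the fixed confidence threshold, and the loss combination are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): adaptively-accumulated filtering of
# confident target samples plus a prototype/MLP dual-classifier objective.
# Names and the threshold value are illustrative assumptions.
import torch
import torch.nn.functional as F


def accumulate_confident_targets(probs, threshold):
    """Keep target samples whose max class probability exceeds `threshold`.

    Returns the selected indices, their pseudo-labels, and the set of source
    classes they vote for (classes outside this set are treated as irrelevant).
    In the full method the selection criterion would be adapted over training.
    """
    conf, pseudo = probs.max(dim=1)
    keep = conf >= threshold
    shared_classes = pseudo[keep].unique()
    return keep.nonzero(as_tuple=True)[0], pseudo[keep], shared_classes


def prototype_logits(features, prototypes, temperature=0.05):
    """Cosine-similarity classifier against per-class prototypes (centers)."""
    f = F.normalize(features, dim=1)
    p = F.normalize(prototypes, dim=1)
    return f @ p.t() / temperature


def center_discrepancy_and_compactness(features, labels, prototypes):
    """Inter-class center-wise discrepancy (to maximize) and
    intra-class sample-wise compactness (to minimize)."""
    # Pairwise distances between distinct class centers.
    d = torch.cdist(prototypes, prototypes)
    inter = d[~torch.eye(len(prototypes), dtype=torch.bool)].mean()
    # Distance of each selected sample to its own class center.
    intra = (features - prototypes[labels]).pow(2).sum(dim=1).mean()
    return inter, intra


if __name__ == "__main__":
    torch.manual_seed(0)
    num_classes, feat_dim = 10, 64
    prototypes = torch.randn(num_classes, feat_dim)    # source class centers
    mlp_head = torch.nn.Linear(feat_dim, num_classes)  # second classifier
    target_feats = torch.randn(32, feat_dim)

    # Average the two classifiers' predictions before confidence filtering.
    probs = 0.5 * (F.softmax(prototype_logits(target_feats, prototypes), dim=1)
                   + F.softmax(mlp_head(target_feats), dim=1))
    idx, pseudo, shared = accumulate_confident_targets(probs, threshold=0.2)

    inter, intra = center_discrepancy_and_compactness(
        target_feats[idx], pseudo, prototypes)
    loss = intra - inter  # minimize compactness, maximize center discrepancy
    print(f"kept {len(idx)} target samples, "
          f"shared classes {shared.tolist()}, loss {loss.item():.3f}")
```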
Related papers
- CDA: Contrastive-adversarial Domain Adaptation [11.354043674822451]
We propose a two-stage model for domain adaptation called Contrastive-adversarial Domain Adaptation (CDA).
While the adversarial component facilitates domain-level alignment, two-stage contrastive learning exploits class information to achieve higher intra-class compactness across domains.
arXiv Detail & Related papers (2023-01-10T07:43:21Z)
- Joint Attention-Driven Domain Fusion and Noise-Tolerant Learning for Multi-Source Domain Adaptation [2.734665397040629]
Multi-source Unsupervised Domain Adaptation transfers knowledge from multiple source domains with labeled data to an unlabeled target domain.
The distribution discrepancy between different domains and the noisy pseudo-labels in the target domain both lead to performance bottlenecks.
We propose an approach that integrates Attention-driven Domain fusion and Noise-Tolerant learning (ADNT) to address the two issues mentioned above.
arXiv Detail & Related papers (2022-08-05T01:08:41Z)
- From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation aims at knowledge acquisition and dissemination from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that deep pre-trained models of large scale endow rich knowledge to tackle diverse downstream tasks of small scale.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical class space assumption to one in which the source class space subsumes the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
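
The summary above names an affinity-based pairing step followed by a multi-sample contrastive loss. As a hedged illustration only, the sketch below shows what a cross-domain contrastive loss of this general form could look like; the pairing rule here uses shared labels and pseudo-labels rather than ILA-DA's actual affinity criterion, and all names are assumptions.

```python
# Illustrative sketch (assumed details, not the ILA-DA implementation):
# a multi-sample contrastive loss that pulls together source/target features
# with matching (pseudo-)labels and pushes apart the rest.
import torch
import torch.nn.functional as F


def cross_domain_contrastive_loss(src_feat, src_labels, tgt_feat, tgt_pseudo,
                                  temperature=0.1):
    """InfoNCE-style loss over all source-target pairs.

    For each target sample, source samples with the same (pseudo-)label are
    positives; every other source sample is a negative.
    """
    src = F.normalize(src_feat, dim=1)
    tgt = F.normalize(tgt_feat, dim=1)
    sim = tgt @ src.t() / temperature                 # (N_t, N_s) similarities
    pos_mask = tgt_pseudo.unsqueeze(1).eq(src_labels.unsqueeze(0)).float()

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average log-probability of the positives for each target sample that has
    # at least one positive source sample.
    has_pos = pos_mask.sum(dim=1) > 0
    mean_pos = (log_prob * pos_mask).sum(dim=1)[has_pos] / pos_mask.sum(dim=1)[has_pos]
    return -mean_pos.mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    src_feat, tgt_feat = torch.randn(16, 32), torch.randn(12, 32)
    src_labels = torch.randint(0, 4, (16,))
    tgt_pseudo = torch.randint(0, 4, (12,))   # e.g. pseudo-labels from a classifier
    print(cross_domain_contrastive_loss(src_feat, src_labels, tgt_feat, tgt_pseudo))
```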
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
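
As a rough illustration of what a domain conditioned channel attention mechanism can look like in general (a squeeze-and-excitation style gate whose excitation branch also sees a domain indicator), here is a hedged sketch; the actual DCAN design may differ, and all module and parameter names are assumptions.

```python
# Illustrative sketch of a domain-conditioned channel attention gate
# (squeeze-and-excitation style); the actual DCAN design may differ.
import torch
import torch.nn as nn


class DomainConditionedChannelAttention(nn.Module):
    """Re-weights convolutional channels using a gate conditioned on the domain."""

    def __init__(self, channels, num_domains=2, reduction=16):
        super().__init__()
        self.domain_embed = nn.Embedding(num_domains, channels)
        self.excite = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x, domain_idx):
        # Squeeze: global average pooling over spatial dimensions.
        squeezed = x.mean(dim=(2, 3))            # (B, C)
        dom = self.domain_embed(domain_idx)      # (B, C)
        # Excite: channel gate conditioned on both features and domain.
        gate = self.excite(torch.cat([squeezed, dom], dim=1))
        return x * gate.unsqueeze(-1).unsqueeze(-1)


if __name__ == "__main__":
    attn = DomainConditionedChannelAttention(channels=64)
    feats = torch.randn(4, 64, 7, 7)
    domain_idx = torch.tensor([0, 0, 1, 1])   # 0 = source, 1 = target
    print(attn(feats, domain_idx).shape)      # torch.Size([4, 64, 7, 7])
```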
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
- Class Conditional Alignment for Partial Domain Adaptation [10.506584969668792]
Adversarial adaptation models have demonstrated significant progress towards transferring knowledge from a labeled source dataset to an unlabeled target dataset.
PDA investigates the scenarios in which the source domain is large and diverse, and the target label space is a subset of the source label space.
We propose a multi-class adversarial architecture for PDA.
arXiv Detail & Related papers (2020-03-14T23:51:57Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain that follows a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)