Discriminative Cross-Domain Feature Learning for Partial Domain
Adaptation
- URL: http://arxiv.org/abs/2008.11360v1
- Date: Wed, 26 Aug 2020 03:18:53 GMT
- Title: Discriminative Cross-Domain Feature Learning for Partial Domain
Adaptation
- Authors: Taotao Jing, Ming Shao, Zhengming Ding
- Abstract summary: Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
- Score: 70.45936509510528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial domain adaptation aims to adapt knowledge from a larger
and more diverse source domain to a smaller target domain with fewer classes,
a setting that has attracted increasing attention. Recent work on domain
adaptation extracts effective features by incorporating pseudo labels for the
target domain to better mitigate cross-domain distribution divergence.
However, in the partial setting, target data must be aligned with only a small
set of source data. In this paper, we develop a novel Discriminative
Cross-Domain Feature Learning (DCDF) framework to iteratively optimize target
labels with a cross-domain graph in a weighted scheme. Specifically, a weighted
cross-domain center loss and weighted cross-domain graph propagation are
proposed to couple unlabeled target data to related source samples for
discriminative cross-domain feature learning, where irrelevant source centers
will be ignored, to alleviate the marginal and conditional disparities
simultaneously. Experimental evaluations on several popular benchmarks
demonstrate the effectiveness of the proposed approach in facilitating
recognition on the unlabeled target domain, compared with state-of-the-art
partial domain adaptation approaches.
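To make the idea concrete, here is a minimal NumPy sketch of a weighted cross-domain center loss in the spirit described above. It is an illustrative reconstruction, not the authors' exact formulation: the function name, the use of mean target soft predictions as class relevance weights, and the normalization are all assumptions.

```python
import numpy as np

def weighted_center_loss(src_feats, src_labels, tgt_feats, tgt_probs):
    """Illustrative weighted cross-domain center loss (hypothetical sketch).

    src_feats:  (n_s, d) source features
    src_labels: (n_s,)   source class labels in [0, C)
    tgt_feats:  (n_t, d) target features
    tgt_probs:  (n_t, C) soft pseudo-label probabilities for target samples
    """
    n_classes = tgt_probs.shape[1]
    # Per-class source centers.
    centers = np.stack([src_feats[src_labels == c].mean(axis=0)
                        for c in range(n_classes)])
    # Class relevance from mean target predictions: outlier source classes
    # that the target never predicts get near-zero weight, so their centers
    # are effectively ignored (the "partial" part of partial DA).
    class_w = tgt_probs.mean(axis=0)
    class_w = class_w / class_w.sum()
    # Pull each target sample toward source centers, weighted by its soft
    # pseudo labels and the class relevance weights.
    loss = 0.0
    for c in range(n_classes):
        d2 = ((tgt_feats - centers[c]) ** 2).sum(axis=1)  # squared distances
        loss += class_w[c] * (tgt_probs[:, c] * d2).mean()
    return loss
```

In practice such a term would be minimized jointly with a source classification loss, and the pseudo labels in `tgt_probs` would be re-estimated each iteration, matching the iterative optimization described in the abstract.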
Related papers
- Reducing Source-Private Bias in Extreme Universal Domain Adaptation [11.875619863954238]
Universal Domain Adaptation (UniDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We show that state-of-the-art methods struggle when the source domain has significantly more non-overlapping classes than overlapping ones.
We propose using self-supervised learning to preserve the structure of the target data.
arXiv Detail & Related papers (2024-10-15T04:51:37Z)
- Adaptive Betweenness Clustering for Semi-Supervised Domain Adaptation [108.40945109477886]
We propose a novel SSDA approach named Graph-based Adaptive Betweenness Clustering (G-ABC) for achieving categorical domain alignment.
Our method outperforms previous state-of-the-art SSDA approaches, demonstrating the superiority of the proposed G-ABC algorithm.
arXiv Detail & Related papers (2024-01-21T09:57:56Z)
- Self-training through Classifier Disagreement for Cross-Domain Opinion Target Extraction [62.41511766918932]
Opinion target extraction (OTE) or aspect extraction (AE) is a fundamental task in opinion mining.
Recent work focuses on cross-domain OTE, which is typically encountered in real-world scenarios.
We propose a new SSL approach that selects unlabelled target samples on which the outputs of a domain-specific teacher network and a student network disagree.
arXiv Detail & Related papers (2023-02-28T16:31:17Z)
- Domain Adaptation for Sentiment Analysis Using Increased Intraclass Separation [31.410122245232373]
Cross-domain sentiment analysis methods have received significant attention.
We introduce a new domain adaptation method which induces large margins between different classes in an embedding space.
This embedding space is trained to be domain-agnostic by matching the data distributions across the domains.
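As a toy illustration of the margin idea, the following sketch pulls a feature toward its own class center while pushing it away from the others. The hinge form, the use of class centers, and the default margin are assumptions, not the paper's formulation.

```python
import numpy as np

def large_margin_loss(feat, label, centers, margin=1.0):
    """Hypothetical sketch of increased intraclass separation: attract a
    feature to its own class center while requiring every other center to
    stay at least `margin` farther away (hinge penalty)."""
    d = np.linalg.norm(centers - feat, axis=1)  # distance to each center
    pull = d[label]                             # attract to own class center
    others = np.delete(d, label)
    # Penalize any other center closer than pull + margin.
    push = np.maximum(0.0, pull + margin - others).sum()
    return float(pull + push)
```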
arXiv Detail & Related papers (2021-07-04T11:39:12Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
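A multi-sample contrastive loss over cross-domain pairs can be sketched as follows. This is a hypothetical InfoNCE-style illustration under assumed inputs, not the exact ILA-DA formulation; the similar/dissimilar sets are taken as given.

```python
import numpy as np

def multi_sample_contrastive(anchor, positives, negatives, tau=0.1):
    """Illustrative multi-sample contrastive loss across domains
    (a hypothetical sketch, not the exact ILA-DA objective).

    anchor:    (d,)    a target feature
    positives: (p, d)  source samples judged similar to the anchor
    negatives: (n, d)  source samples judged dissimilar
    """
    def cos_sim(a, b):
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return b @ a
    pos = np.exp(cos_sim(anchor, positives) / tau)
    neg = np.exp(cos_sim(anchor, negatives) / tau)
    # Each positive is contrasted against the pool of all negatives.
    return float(np.mean(-np.log(pos / (pos + neg.sum()))))
```

Minimizing such a loss drives the anchor's feature toward its similar source samples and away from dissimilar ones, which is the alignment effect the summary describes.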
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Deep Residual Correction Network for Partial Domain Adaptation [79.27753273651747]
Deep domain adaptation methods have achieved appealing performance by learning transferable representations from a well-labeled source domain to a different but related unlabeled target domain.
This paper proposes an efficiently implemented Deep Residual Correction Network (DRCN).
Comprehensive experiments on partial, traditional and fine-grained cross-domain visual recognition demonstrate that DRCN is superior to the competitive deep domain adaptation approaches.
arXiv Detail & Related papers (2020-04-10T06:07:16Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain drawn from a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.