A Label Proportions Estimation Technique for Adversarial Domain
Adaptation in Text Classification
- URL: http://arxiv.org/abs/2003.07444v3
- Date: Thu, 26 Mar 2020 08:13:52 GMT
- Title: A Label Proportions Estimation Technique for Adversarial Domain
Adaptation in Text Classification
- Authors: Zhuohao Chen, Karan Singla, David C. Atkins, Zac E. Imel, Shrikanth Narayanan
- Abstract summary: We introduce a domain adversarial network with label proportions estimation (DAN-LPE) framework.
The DAN-LPE simultaneously trains a domain adversarial net and estimates the target label proportions from the confusion matrix computed on the source domain and the model's predictions on the target domain.
Experiments show the DAN-LPE achieves a good estimate of the target label distributions and reduces the label shift to improve the classification performance.
- Score: 31.788796579355274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many text classification tasks are domain-dependent, and various domain
adaptation approaches have been proposed to predict unlabeled data in a new
domain. Domain-adversarial neural networks (DANN) and their variants have been
used widely recently and have achieved promising results for this problem.
However, most of these approaches assume that the label proportions of the
source and target domains are similar, which rarely holds in real-world
scenarios. When the label shift is large, DANN fails to learn
domain-invariant features. In this study, we focus on unsupervised domain
adaptation of text classification with label shift and introduce a domain
adversarial network with label proportions estimation (DAN-LPE) framework. The
DAN-LPE simultaneously trains a domain adversarial net and estimates the
target label proportions from the confusion matrix computed on the source
domain and the model's predictions on the target domain. Experiments show that
DAN-LPE achieves a good estimate of the target label distribution and reduces
the label shift, improving the classification performance.
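The confusion-based estimation idea described in the abstract can be sketched as follows. This is a minimal illustration of generic confusion-matrix label-shift estimation, not the paper's exact DAN-LPE update rule; the function name and the toy numbers are assumptions for illustration only.

```python
import numpy as np

def estimate_label_proportions(source_confusion, target_pred_dist):
    """Estimate target label proportions under label shift.

    source_confusion[i, j] = P(predict class j | true class i), measured
    on labeled source data. target_pred_dist[j] = fraction of unlabeled
    target samples the classifier assigns to class j. Under label shift,
    target_pred_dist ~= source_confusion.T @ p_target, so p_target is
    recovered by solving that linear system.
    """
    p, *_ = np.linalg.lstsq(source_confusion.T, target_pred_dist, rcond=None)
    p = np.clip(p, 0.0, None)   # proportions cannot be negative
    return p / p.sum()          # renormalize to a valid distribution

# Toy two-class example: per-class accuracies of 0.9 and 0.8.
C = np.array([[0.9, 0.1],
              [0.2, 0.8]])
true_target = np.array([0.3, 0.7])   # hypothetical target proportions
observed = C.T @ true_target         # prediction distribution we would see
est = estimate_label_proportions(C, observed)
print(est)                           # recovers approximately [0.3, 0.7]
```

In DAN-LPE this kind of estimate is refined jointly with adversarial training, so the confusion matrix and target predictions are updated as the shared features improve.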
Related papers
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
arXiv Detail & Related papers (2024-01-21T10:20:46Z)
- Discovering Domain Disentanglement for Generalized Multi-source Domain Adaptation [48.02978226737235]
A typical multi-source domain adaptation (MSDA) approach aims to transfer knowledge learned from a set of labeled source domains, to an unlabeled target domain.
We propose a variational domain disentanglement (VDD) framework, which decomposes the domain representations and semantic features for each instance by encouraging dimension-wise independence.
arXiv Detail & Related papers (2022-07-11T04:33:08Z)
- Preserving Semantic Consistency in Unsupervised Domain Adaptation Using Generative Adversarial Networks [33.84004077585957]
We propose a novel end-to-end semantically consistent generative adversarial network (SCGAN).
This network achieves source-to-target domain matching by capturing semantic information at the feature level.
We demonstrate the robustness of our proposed method, which exceeds state-of-the-art performance in unsupervised domain adaptation settings.
arXiv Detail & Related papers (2021-04-28T12:23:30Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
- Adversarial Network with Multiple Classifiers for Open Set Domain Adaptation [9.251407403582501]
This paper focuses on the open set domain adaptation setting where the target domain has both a private ('unknown classes') label space and a shared ('known classes') label space.
Prevalent distribution-matching domain adaptation methods are inadequate in such a setting.
We propose a novel adversarial domain adaptation model with multiple auxiliary classifiers.
arXiv Detail & Related papers (2020-07-01T11:23:07Z)
- Domain Adaptation and Image Classification via Deep Conditional Adaptation Network [26.09932710494144]
Unsupervised domain adaptation aims to generalize the supervised model trained on a source domain to an unlabeled target domain.
Marginal distribution alignment of feature spaces is widely used to reduce the domain discrepancy between the source and target domains.
We propose a novel unsupervised domain adaptation method, Deep Conditional Adaptation Network (DCAN), based on conditional distribution alignment of feature spaces.
arXiv Detail & Related papers (2020-06-14T02:56:01Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts performance of target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.