TACIT: A Target-Agnostic Feature Disentanglement Framework for
Cross-Domain Text Classification
- URL: http://arxiv.org/abs/2312.17263v1
- Date: Mon, 25 Dec 2023 02:52:36 GMT
- Title: TACIT: A Target-Agnostic Feature Disentanglement Framework for
Cross-Domain Text Classification
- Authors: Rui Song, Fausto Giunchiglia, Yingji Li, Mingjie Tian, Hao Xu
- Abstract summary: Cross-domain text classification aims to transfer models from label-rich source domains to label-poor target domains.
This paper proposes TACIT, a target-agnostic feature disentanglement framework that adaptively decouples robust and unrobust features.
Our framework achieves comparable results to state-of-the-art baselines while utilizing only source domain data.
- Score: 17.19214732926589
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cross-domain text classification aims to transfer models from label-rich
source domains to label-poor target domains, giving it a wide range of
practical applications. Many approaches promote cross-domain generalization by
capturing domain-invariant features. However, these methods rely on unlabeled
samples provided by the target domains, which renders the model ineffective
when the target domain is unknown in advance. Furthermore, the models are
easily disturbed by shortcut learning in the source domain, which also hinders
domain generalization. To address these issues, this paper proposes TACIT, a
target-agnostic feature disentanglement framework that adaptively decouples
robust and unrobust features using Variational Auto-Encoders. Additionally, to
encourage the separation of unrobust features from robust features, we design
a feature distillation task that compels the unrobust features to approximate
the output of a teacher model. The teacher is trained on a few easy samples,
which are likely to carry potential unknown shortcuts. Experimental results
verify that our framework achieves results comparable to state-of-the-art
baselines while using only source domain data.
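As a rough illustration of the method described above, here is a minimal PyTorch sketch of VAE-based robust/unrobust feature disentanglement with a feature distillation term. All module names, dimensions, and loss weights are illustrative assumptions, not the authors' actual implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DisentangledVAE(nn.Module):
        """Encodes input features into a robust and an unrobust latent part."""
        def __init__(self, feat_dim=768, latent_dim=128, num_classes=2):
            super().__init__()
            # Each head predicts a mean and a log-variance for its latent part.
            self.robust_head = nn.Linear(feat_dim, 2 * latent_dim)
            self.unrobust_head = nn.Linear(feat_dim, 2 * latent_dim)
            self.decoder = nn.Linear(2 * latent_dim, feat_dim)    # reconstructs input features
            self.classifier = nn.Linear(latent_dim, num_classes)  # sees robust features only

        @staticmethod
        def reparameterize(mu, logvar):
            return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

        def forward(self, feats):
            mu_r, logvar_r = self.robust_head(feats).chunk(2, dim=-1)
            mu_u, logvar_u = self.unrobust_head(feats).chunk(2, dim=-1)
            z_r = self.reparameterize(mu_r, logvar_r)
            z_u = self.reparameterize(mu_u, logvar_u)
            recon = self.decoder(torch.cat([z_r, z_u], dim=-1))
            return recon, z_r, z_u, (mu_r, logvar_r), (mu_u, logvar_u)

    def kl_term(mu, logvar):
        return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

    def training_loss(model, teacher, feats, labels):
        recon, z_r, z_u, stats_r, stats_u = model(feats)
        loss_recon = F.mse_loss(recon, feats)              # VAE reconstruction term
        loss_kl = kl_term(*stats_r) + kl_term(*stats_u)    # VAE prior terms
        loss_cls = F.cross_entropy(model.classifier(z_r), labels)
        # Feature distillation: push the unrobust latent toward the output of a
        # frozen teacher trained on easy, shortcut-prone samples. The teacher is
        # assumed to map input features to the same latent dimension.
        with torch.no_grad():
            t_out = teacher(feats)
        loss_distill = F.mse_loss(z_u, t_out)
        return loss_recon + loss_kl + loss_cls + loss_distill

In this sketch the classifier only ever sees the robust latent part, so the distillation term can siphon shortcut signal into the unrobust branch without contaminating the task head.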
Related papers
- Self-training through Classifier Disagreement for Cross-Domain Opinion
Target Extraction [62.41511766918932]
Opinion target extraction (OTE) or aspect extraction (AE) is a fundamental task in opinion mining.
Recent work focuses on cross-domain OTE, which is typically encountered in real-world scenarios.
We propose a new SSL approach that selects unlabelled target samples on which the outputs of a domain-specific teacher network and a student network disagree.
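A minimal sketch of this disagreement-based selection step, assuming generic teacher/student classifiers and an unlabelled target loader; all names here are illustrative.

    import torch

    def select_by_disagreement(teacher, student, target_loader, device="cpu"):
        """Collect unlabelled target samples on which teacher and student disagree."""
        teacher.eval()
        student.eval()
        selected = []
        with torch.no_grad():
            for (x,) in target_loader:        # loader assumed to yield unlabelled batches
                x = x.to(device)
                t_pred = teacher(x).argmax(dim=-1)
                s_pred = student(x).argmax(dim=-1)
                mask = t_pred != s_pred       # disagreement marks informative samples
                selected.extend(x[mask].cpu())
        return selected                       # candidates for the next self-training round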
arXiv Detail & Related papers (2023-02-28T16:31:17Z)
- Decompose to Adapt: Cross-domain Object Detection via Feature Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate source-specific information from the features used for the detection task.
Our DDF method facilitates feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
DDF outperforms state-of-the-art methods on four benchmark UDA object detection tasks, demonstrating its effectiveness and wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z)
- Cross-domain error minimization for unsupervised domain adaptation [2.9766397696234996]
Unsupervised domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
Previous methods focus on learning domain-invariant features to decrease the discrepancy between the feature distributions and minimize the source error.
We propose a curriculum-learning-based strategy to select target samples with more accurate pseudo-labels during training.
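A minimal sketch of such a confidence-based curriculum for pseudo-labelled target samples; the linear schedule and the confidence criterion are assumptions for illustration.

    import torch

    def curriculum_select(model, target_feats, round_idx, num_rounds):
        """Keep a growing fraction of the most confident pseudo-labelled samples."""
        with torch.no_grad():
            probs = torch.softmax(model(target_feats), dim=-1)
        conf, pseudo_labels = probs.max(dim=-1)
        keep = int(len(conf) * (round_idx + 1) / num_rounds)  # easy-to-hard schedule
        idx = conf.argsort(descending=True)[:keep]            # most confident first
        return target_feats[idx], pseudo_labels[idx]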
arXiv Detail & Related papers (2021-06-29T02:00:29Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
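As a rough illustration of the domain-adversarial ingredient (the intermediate-domain image generation step is omitted), here is a minimal DANN-style gradient-reversal sketch; module names are assumptions, not AFAN's actual code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass, flips gradients in the backward pass."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None  # reversed gradient confuses the domain head

    def domain_adversarial_loss(features, domain_labels, domain_head, lam=1.0):
        """Binary source-vs-target loss on gradient-reversed features."""
        logits = domain_head(GradReverse.apply(features, lam))
        return F.binary_cross_entropy_with_logits(logits.squeeze(-1),
                                                  domain_labels.float())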
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining and Consistency [93.89773386634717]
Visual domain adaptation involves learning to classify images from a target visual domain using labels available in a different source domain.
We show that, in the presence of a few target labels, simple techniques like self-supervision (via rotation prediction) and consistency regularization can learn a good target classifier without any adversarial alignment.
Our Pretraining and Consistency (PAC) approach can achieve state-of-the-art accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets.
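A minimal sketch of the two ingredients described above, rotation-prediction self-supervision and consistency regularization; the augmentation functions and prediction heads are assumed placeholders, not PAC's exact recipe.

    import torch
    import torch.nn.functional as F

    def rotation_task(images):
        """Rotate each (square) image by a random multiple of 90 degrees."""
        ks = torch.randint(0, 4, (images.size(0),))
        rotated = torch.stack([torch.rot90(img, int(k), dims=(-2, -1))
                               for img, k in zip(images, ks)])
        return rotated, ks  # rotation index serves as the self-supervised label

    def pac_style_losses(backbone, rot_head, cls_head, images, weak_aug, strong_aug):
        rotated, rot_labels = rotation_task(images)
        loss_rot = F.cross_entropy(rot_head(backbone(rotated)), rot_labels)
        # Consistency: predictions on a strongly augmented view should match the
        # (detached) predictions on a weakly augmented view of the same image.
        p_weak = F.softmax(cls_head(backbone(weak_aug(images))), dim=-1).detach()
        log_p_strong = F.log_softmax(cls_head(backbone(strong_aug(images))), dim=-1)
        loss_cons = F.kl_div(log_p_strong, p_weak, reduction="batchmean")
        return loss_rot, loss_cons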
arXiv Detail & Related papers (2021-01-29T18:40:17Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent domain adaptation practice extracts effective features by incorporating pseudo-labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts target accuracy in the new target domain with only a few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)