FOAL: Fine-grained Contrastive Learning for Cross-domain Aspect
Sentiment Triplet Extraction
- URL: http://arxiv.org/abs/2311.10373v1
- Date: Fri, 17 Nov 2023 07:56:01 GMT
- Authors: Ting Xu, Zhen Wu, Huiyun Yang, Xinyu Dai
- Abstract summary: Aspect Sentiment Triplet Extraction (ASTE) has achieved promising results while relying on sufficient annotation data in a specific domain.
We propose to explore ASTE in the cross-domain setting, which transfers knowledge from a resource-rich source domain to a resource-poor target domain.
To effectively transfer the knowledge across domains and extract the sentiment triplets accurately, we propose a method named Fine-grained cOntrAstive Learning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Aspect Sentiment Triplet Extraction (ASTE) has achieved promising results
while relying on sufficient annotation data in a specific domain. However, it
is infeasible to annotate data for each individual domain. We propose to
explore ASTE in the cross-domain setting, which transfers knowledge from a
resource-rich source domain to a resource-poor target domain, thereby
alleviating the reliance on labeled data in the target domain. To effectively
transfer the knowledge across domains and extract the sentiment triplets
accurately, we propose a method named Fine-grained cOntrAstive Learning (FOAL)
to reduce the domain discrepancy and preserve the discriminability of each
category. Experiments on six transfer pairs show that FOAL achieves 6%
performance gains and reduces the domain discrepancy significantly compared
with strong baselines. Our code will be publicly available once accepted.
Related papers
- Reducing Source-Private Bias in Extreme Universal Domain Adaptation
Universal Domain Adaptation (UniDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We show that state-of-the-art methods struggle when the source domain has significantly more non-overlapping classes than overlapping ones.
We propose using self-supervised learning to preserve the structure of the target data.
arXiv Detail & Related papers (2024-10-15T04:51:37Z)
- Prior Omission of Dissimilar Source Domain(s) for Cost-Effective Few-Shot Learning
Few-shot slot tagging is an emerging research topic in the field of Natural Language Understanding (NLU).
With sufficient annotated data from source domains, the key challenge is how to train and adapt the model to another target domain which has only a few labels.
arXiv Detail & Related papers (2021-09-11T09:30:59Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Disentanglement-based Cross-Domain Feature Augmentation for Effective Unsupervised Domain Adaptive Person Re-identification
Unsupervised domain adaptive (UDA) person re-identification (ReID) aims to transfer the knowledge from the labeled source domain to the unlabeled target domain for person matching.
One challenge is how to generate target-domain samples with reliable labels for training.
We propose a Disentanglement-based Cross-Domain Feature Augmentation strategy.
arXiv Detail & Related papers (2021-03-25T15:28:41Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Sparsely-Labeled Source Assisted Domain Adaptation
This paper proposes a novel Sparsely-Labeled Source Assisted Domain Adaptation (SLSA-DA) algorithm.
Due to the label-scarcity problem, projected clustering is conducted on both the source and target domains.
arXiv Detail & Related papers (2020-05-08T15:37:35Z)
- Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
- Multi-source Domain Adaptation for Visual Sentiment Classification
We propose a novel multi-source domain adaptation (MDA) method, termed Multi-source Sentiment Generative Adversarial Network (MSGAN).
To handle data from multiple source domains, MSGAN learns to find a unified sentiment latent space where data from both the source and target domains share a similar distribution.
Extensive experiments conducted on four benchmark datasets demonstrate that MSGAN significantly outperforms the state-of-the-art MDA approaches for visual sentiment classification.
arXiv Detail & Related papers (2020-01-12T08:37:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.