FaiMA: Feature-aware In-context Learning for Multi-domain Aspect-based
Sentiment Analysis
- URL: http://arxiv.org/abs/2403.01063v1
- Date: Sat, 2 Mar 2024 02:00:51 GMT
- Authors: Songhua Yang, Xinke Jiang, Hanjie Zhao, Wenxuan Zeng, Hongde Liu,
Yuxiang Jia
- Abstract summary: Multi-domain aspect-based sentiment analysis (ABSA) seeks to capture fine-grained sentiment across diverse domains.
We propose a novel framework, Feature-aware In-context Learning for Multi-domain ABSA (FaiMA).
At its core, FaiMA uses in-context learning (ICL) as a feature-aware mechanism that enables adaptive learning across multi-domain ABSA tasks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-domain aspect-based sentiment analysis (ABSA) seeks to capture
fine-grained sentiment across diverse domains. While existing research narrowly
focuses on single-domain applications constrained by methodological limitations
and data scarcity, the reality is that sentiment naturally traverses multiple
domains. Although large language models (LLMs) offer a promising solution for
ABSA, they are difficult to integrate effectively with established techniques,
including graph-based models and linguistics, because modifying their internal
architecture is not easy. To alleviate this problem, we propose a novel
framework, Feature-aware In-context Learning for Multi-domain ABSA (FaiMA). The
core insight of FaiMA is to utilize in-context learning (ICL) as a
feature-aware mechanism that facilitates adaptive learning in multi-domain ABSA
tasks. Specifically, we employ a multi-head graph attention network as a text
encoder optimized by heuristic rules for linguistic, domain, and sentiment
features. Through contrastive learning, we optimize sentence representations by
focusing on these diverse features. Additionally, we construct an efficient
indexing mechanism, allowing FaiMA to stably retrieve highly relevant examples
across multiple dimensions for any given input. To evaluate the efficacy of
FaiMA, we build the first multi-domain ABSA benchmark dataset. Extensive
experimental results demonstrate that FaiMA achieves significant performance
improvements in multiple domains compared to baselines, increasing F1 by 2.07%
on average. Source code and datasets are anonymously available at
https://github.com/SupritYoung/FaiMA.
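The abstract describes indexing contrastively learned sentence representations so that, for a new input, highly relevant in-context examples can be retrieved along several feature dimensions. The following is a minimal illustrative sketch of that general retrieval pattern (normalized embeddings plus cosine-similarity nearest-neighbor lookup), not FaiMA's actual implementation; the toy three-dimensional "feature" embeddings and function names are hypothetical.

```python
import numpy as np

def build_index(example_embeddings):
    # Normalize rows so cosine similarity reduces to a dot product.
    norms = np.linalg.norm(example_embeddings, axis=1, keepdims=True)
    return example_embeddings / norms

def retrieve_examples(index, query_embedding, k=2):
    # Return indices of the k stored examples most similar to the query.
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = index @ q
    return np.argsort(-scores)[:k]

# Toy embeddings: imagine each row encodes an example sentence along
# (hypothetical) linguistic, domain, and sentiment feature dimensions.
examples = np.array([
    [0.9, 0.1, 0.8],   # restaurant review, positive
    [0.2, 0.9, 0.1],   # laptop review, negative
    [0.8, 0.2, 0.7],   # restaurant review, positive
])
index = build_index(examples)

query = np.array([0.85, 0.15, 0.75])  # new restaurant-domain input
top = retrieve_examples(index, query, k=2)
print(top.tolist())  # -> [0, 2]: the two restaurant-domain examples
```

The retrieved examples would then be inserted into the LLM prompt as in-context demonstrations; in FaiMA the embeddings come from a GNN-based encoder trained with contrastive learning rather than hand-set feature values.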
Related papers
- Boosting Large Language Models with Continual Learning for Aspect-based Sentiment Analysis [33.86086075084374]
Aspect-based sentiment analysis (ABSA) is an important subtask of sentiment analysis.
We propose a Large Language Model-based Continual Learning (LLM-CL) model for ABSA.
arXiv Detail & Related papers (2024-05-09T02:00:07Z)
- Cross-domain Multi-modal Few-shot Object Detection via Rich Text [21.36633828492347]
Cross-modal feature extraction and integration have led to steady performance improvements in few-shot learning tasks.
We study the Cross-Domain few-shot generalization of MM-OD (CDMM-FSOD) and propose a meta-learning based multi-modal few-shot object detection method.
arXiv Detail & Related papers (2024-03-24T15:10:22Z)
- Domain-Expanded ASTE: Rethinking Generalization in Aspect Sentiment Triplet Extraction [67.54420015049732]
Aspect Sentiment Triplet Extraction (ASTE) is a challenging task in sentiment analysis, aiming to provide fine-grained insights into human sentiments.
Existing benchmarks are limited to two domains and do not evaluate model performance on unseen domains.
We introduce a domain-expanded benchmark by annotating samples from diverse domains, enabling evaluation of models in both in-domain and out-of-domain settings.
arXiv Detail & Related papers (2023-05-23T18:01:49Z)
- Bidirectional Generative Framework for Cross-domain Aspect-based Sentiment Analysis [68.742820522137]
Cross-domain aspect-based sentiment analysis (ABSA) aims to perform various fine-grained sentiment analysis tasks on a target domain by transferring knowledge from a source domain.
We propose a unified bidirectional generative framework to tackle various cross-domain ABSA tasks.
Our framework trains a generative model in both text-to-label and label-to-text directions.
arXiv Detail & Related papers (2023-05-16T15:02:23Z)
- Exploiting Graph Structured Cross-Domain Representation for Multi-Domain Recommendation [71.45854187886088]
Multi-domain recommender systems benefit from cross-domain representation learning and positive knowledge transfer.
We use temporal intra- and inter-domain interactions as contextual information for our method called MAGRec.
We perform experiments on publicly available datasets in different scenarios where MAGRec consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-12T19:51:32Z)
- Syntax-Guided Domain Adaptation for Aspect-based Sentiment Analysis [23.883810236153757]
Domain adaptation is a popular solution to alleviate the data deficiency issue in new domains by transferring common knowledge across domains.
We propose a novel Syntax-guided Domain Adaptation Model, named SDAM, for more effective cross-domain ABSA.
Our model consistently outperforms the state-of-the-art baselines with respect to Micro-F1 metric for the cross-domain End2End ABSA task.
arXiv Detail & Related papers (2022-11-10T10:09:33Z)
- Rethinking Few-Shot Object Detection on a Multi-Domain Benchmark [28.818423712485504]
Multi-dOmain Few-Shot Object Detection (MoFSOD) benchmark consists of 10 datasets from a wide range of domains.
We analyze the impacts of freezing layers, different architectures, and different pre-training datasets on FSOD performance.
arXiv Detail & Related papers (2022-07-22T16:13:22Z)
- A Simple Information-Based Approach to Unsupervised Domain-Adaptive Aspect-Based Sentiment Analysis [58.124424775536326]
We propose a simple but effective technique based on mutual information to extract aspect terms.
Experiment results show that our proposed method outperforms the state-of-the-art methods for cross-domain ABSA by 4.32% Micro-F1.
arXiv Detail & Related papers (2022-01-29T10:18:07Z)
- Transformer-based Multi-Aspect Modeling for Multi-Aspect Multi-Sentiment Analysis [56.893393134328996]
We propose a novel Transformer-based Multi-aspect Modeling scheme (TMM), which can capture potential relations between multiple aspects and simultaneously detect the sentiment of all aspects in a sentence.
Our method achieves noticeable improvements compared with strong baselines such as BERT and RoBERTa.
arXiv Detail & Related papers (2020-11-01T11:06:31Z)
- Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation [56.694330303488435]
We propose a Learning to Combine for Multi-Source Domain Adaptation (LtC-MSDA) framework.
In a nutshell, a knowledge graph is constructed over the prototypes of various domains to enable information propagation among semantically adjacent representations.
Our approach outperforms existing methods by a remarkable margin.
arXiv Detail & Related papers (2020-07-17T07:52:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.