InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis
- URL: http://arxiv.org/abs/2302.08624v6
- Date: Mon, 13 Nov 2023 17:56:19 GMT
- Title: InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis
- Authors: Kevin Scaria and Himanshu Gupta and Siddharth Goyal and Saurabh Arjun Sawant and Swaroop Mishra and Chitta Baral
- Abstract summary: InstructABSA is an instruction learning paradigm for Aspect-Based Sentiment Analysis (ABSA) subtasks.
Our method introduces positive, negative, and neutral examples to each training sample and instruction-tunes the model (Tk-Instruct) for ABSA subtasks, yielding significant performance improvements.
- Score: 58.188050006989144
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce InstructABSA, an instruction learning paradigm for Aspect-Based
Sentiment Analysis (ABSA) subtasks. Our method introduces positive, negative,
and neutral examples to each training sample and instruction-tunes the model
(Tk-Instruct) for ABSA subtasks, yielding significant performance improvements.
Experimental results on the SemEval 2014, 2015, and 2016 datasets demonstrate that
InstructABSA outperforms the previous state-of-the-art (SOTA) approaches on the
Aspect Term Extraction (ATE), Aspect Term Sentiment Classification (ATSC), and
Aspect Sentiment Pair Extraction (ASPE) subtasks. In particular, InstructABSA surpasses the
previous SOTA on the Rest14 ATE subtask by 5.69 percentage points, the
Rest15 ATSC subtask by 9.59 percentage points, and the Lapt14 ASPE subtask by 3.37
percentage points, outperforming models 7x its size. We also obtain competitive results on the AOOE,
AOPE, and AOSTE subtasks, indicating strong generalization to all
subtasks. Exploring sample efficiency reveals that just 50% of the training data is
required to reach results competitive with other instruction tuning approaches.
Lastly, we assess the quality of instructions and observe that InstructABSA's
performance declines by ~10% when misleading examples are added.
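The core idea described above (prepending a task instruction together with positive, negative, and neutral in-context examples to each training sample) can be illustrated as a prompt-construction routine. The following is a minimal sketch in Python; the instruction wording, example sentences, and the `build_ate_prompt` helper are hypothetical illustrations, not the paper's exact prompts or code.

```python
# Illustrative sketch of an instruction-style prompt for the Aspect Term
# Extraction (ATE) subtask, with one positive, one negative, and one
# neutral in-context example per training sample.

def build_ate_prompt(definition, examples, test_sentence):
    """Concatenate a task definition, labeled examples, and the test input."""
    parts = [f"Definition: {definition}"]
    for ex in examples:
        parts.append(f"Example ({ex['kind']}):")
        parts.append(f"  input: {ex['input']}")
        parts.append(f"  output: {ex['output']}")
    parts.append("Now complete the following:")
    parts.append(f"  input: {test_sentence}")
    parts.append("  output:")
    return "\n".join(parts)

definition = ("Extract the aspect terms from the restaurant review and "
              "list them separated by commas.")
examples = [
    {"kind": "positive", "input": "The pasta was delicious.",
     "output": "pasta"},
    {"kind": "negative", "input": "Service was painfully slow.",
     "output": "Service"},
    {"kind": "neutral", "input": "We ordered the daily special.",
     "output": "daily special"},
]

prompt = build_ate_prompt(definition, examples,
                          "Great wine list but noisy room.")
print(prompt)
```

In the paper's setup, prompts of this shape would be used to instruction-tune a Tk-Instruct model; the same template idea extends to the ATSC and ASPE subtasks by changing the definition and the expected outputs.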
Related papers
- It is Simple Sometimes: A Study On Improving Aspect-Based Sentiment Analysis Performance [3.951769809066429]
We propose PFInstruct, an extension to an instruction learning paradigm by appending an NLP-related task prefix to the task description.
This simple approach leads to improved performance across all tested SemEval subtasks, surpassing previous state-of-the-art (SOTA) on the ATE subtask (Rest14) by +3.28 F1-score, and on the AOOE subtask by an average of +5.43 F1-score.
arXiv Detail & Related papers (2024-05-31T08:57:09Z) - Instruction Tuning with Retrieval-based Examples Ranking for Aspect-based Sentiment Analysis [7.458853474864602]
Aspect-based sentiment analysis (ABSA) identifies sentiment information related to specific aspects and provides deeper market insights to businesses and organizations.
Recent studies have proposed using fixed examples for instruction tuning to reformulate ABSA as a generation task.
This study proposes an instruction learning method with retrieval-based example ranking for ABSA tasks.
arXiv Detail & Related papers (2024-05-28T10:39:10Z) - Augmenting Unsupervised Reinforcement Learning with Self-Reference [63.68018737038331]
Humans possess the ability to draw on past experiences explicitly when learning new tasks.
We propose the Self-Reference (SR) approach, an add-on module explicitly designed to leverage historical information.
Our approach achieves state-of-the-art results in terms of Interquartile Mean (IQM) performance and Optimality Gap reduction on the Unsupervised Reinforcement Learning Benchmark.
arXiv Detail & Related papers (2023-11-16T09:07:34Z) - Hierarchical Decomposition of Prompt-Based Continual Learning:
Rethinking Obscured Sub-optimality [55.88910947643436]
Self-supervised pre-training is essential for handling vast quantities of unlabeled data in practice.
HiDe-Prompt is an innovative approach that explicitly optimizes the hierarchical components with an ensemble of task-specific prompts and statistics.
Our experiments demonstrate the superior performance of HiDe-Prompt and its robustness to pre-training paradigms in continual learning.
arXiv Detail & Related papers (2023-10-11T06:51:46Z) - A Weak Supervision Approach for Few-Shot Aspect Based Sentiment [39.33888584498155]
Weak supervision on abundant unlabeled data can be leveraged to improve few-shot performance in sentiment analysis tasks.
We propose a pipeline approach to construct a noisy ABSA dataset, and we use it to adapt a pre-trained sequence-to-sequence model to the ABSA tasks.
Our proposed method preserves the full fine-tuning performance while showing significant improvements (15.84% absolute F1) in the few-shot learning scenario.
arXiv Detail & Related papers (2023-05-19T19:53:54Z) - Instruction Tuned Models are Quick Learners [20.771930945083994]
In this work, we demonstrate the sample efficiency of instruction tuned models over various tasks.
In the STL setting, instruction tuned models equipped with 25% of the downstream training data surpass the SOTA performance on the downstream tasks.
In the MTL setting, an instruction tuned model trained on only 6% of downstream training data achieves SOTA, while using 100% of the training data yields a further 3.69-point improvement.
arXiv Detail & Related papers (2023-05-17T22:30:01Z) - Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis [72.9124467710526]
Generative approaches have been proposed to extract all four elements as (one or more) quadruplets from text as a single task.
We propose a unified framework for solving ABSA, and the associated sub-tasks to improve the performance in few-shot scenarios.
arXiv Detail & Related papers (2022-10-12T23:38:57Z) - Understanding Pre-trained BERT for Aspect-based Sentiment Analysis [71.40586258509394]
This paper analyzes the pre-trained hidden representations that BERT learns from reviews for tasks in aspect-based sentiment analysis (ABSA).
It is not clear how the general proxy task of (masked) language modeling, trained on an unlabeled corpus without annotations of aspects or opinions, can provide important features for downstream ABSA tasks.
arXiv Detail & Related papers (2020-10-31T02:21:43Z) - Tasty Burgers, Soggy Fries: Probing Aspect Robustness in Aspect-Based
Sentiment Analysis [71.40390724765903]
Aspect-based sentiment analysis (ABSA) aims to predict the sentiment towards a specific aspect in the text.
Existing ABSA test sets cannot be used to probe whether a model can distinguish the sentiment of the target aspect from the non-target aspects.
We generate new examples to disentangle the confounding sentiments of the non-target aspects from the target aspect's sentiment.
arXiv Detail & Related papers (2020-09-16T22:38:18Z) - Adversarial Training for Aspect-Based Sentiment Analysis with BERT [3.5493798890908104]
We propose a novel architecture called BERT Adversarial Training (BAT) to utilize adversarial training in sentiment analysis.
The proposed model outperforms post-trained BERT in both tasks.
To the best of our knowledge, this is the first study on the application of adversarial training in ABSA.
arXiv Detail & Related papers (2020-01-30T13:53:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.