UnifiedABSA: A Unified ABSA Framework Based on Multi-task Instruction Tuning
- URL: http://arxiv.org/abs/2211.10986v1
- Date: Sun, 20 Nov 2022 14:21:09 GMT
- Title: UnifiedABSA: A Unified ABSA Framework Based on Multi-task Instruction Tuning
- Authors: Zengzhi Wang, Rui Xia, Jianfei Yu
- Abstract summary: Aspect-Based Sentiment Analysis (ABSA) aims to provide fine-grained aspect-level sentiment information.
There are many ABSA tasks, and the current dominant paradigm is to train task-specific models for each task.
We present UnifiedABSA, a general-purpose ABSA framework based on multi-task instruction tuning.
- Score: 25.482853330324748
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aspect-Based Sentiment Analysis (ABSA) aims to provide fine-grained
aspect-level sentiment information. There are many ABSA tasks, and the current
dominant paradigm is to train task-specific models for each task. However, the
application scenarios of ABSA tasks are often diverse, and this solution usually
requires a large amount of labeled data for each task to perform well. These
dedicated models are trained and make predictions separately, ignoring the
relationships between tasks. To tackle these issues, we present UnifiedABSA, a
general-purpose ABSA framework based on multi-task instruction tuning, which can
uniformly model various tasks and capture inter-task dependencies with
multi-task learning. Extensive experiments on two benchmark datasets show that
UnifiedABSA significantly outperforms dedicated models on 11 ABSA tasks and
demonstrates its superiority in terms of data efficiency.
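The core idea of casting many ABSA tasks into one multi-task instruction-tuned model can be sketched as follows. This is an illustrative sketch only: the task names, prompt templates, and example targets below are assumptions for exposition, not UnifiedABSA's actual instructions or data.

```python
# Sketch: several ABSA subtasks rendered into one shared text-to-text
# instruction format, so a single seq2seq model can be trained on all of
# them jointly. Templates are hypothetical, not the paper's prompts.

# Hypothetical instruction templates, one per ABSA subtask.
TEMPLATES = {
    "ATE":  "Extract the aspect terms: {text}",
    "ASC":  "What is the sentiment toward '{aspect}'? {text}",
    "ASTE": "Extract (aspect, opinion, sentiment) triplets: {text}",
}

def make_example(task, text, target, **slots):
    """Render one (input, output) training pair for a seq2seq model."""
    prompt = TEMPLATES[task].format(text=text, **slots)
    return {"input": prompt, "output": target}

# A mixed-task training batch: every task shares the same I/O interface.
batch = [
    make_example("ATE", "The pasta was great but service was slow.",
                 "pasta; service"),
    make_example("ASC", "The pasta was great but service was slow.",
                 "positive", aspect="pasta"),
]
for ex in batch:
    print(ex["input"], "->", ex["output"])
```

Because every task is reduced to the same text-in/text-out interface, a single model can be trained on the union of all task datasets, which is what lets multi-task learning share information across tasks.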
Related papers
- BoRA: Bayesian Hierarchical Low-Rank Adaption for Multi-task Large Language Models [0.0]
This paper introduces Bayesian Hierarchical Low-Rank Adaption (BoRA), a novel method for finetuning multi-task Large Language Models (LLMs).
BoRA addresses trade-offs by leveraging a Bayesian hierarchical model that allows tasks to share information through global hierarchical priors.
Our experimental results show that BoRA outperforms both individual and unified model approaches, achieving lower perplexity and better generalization across tasks.
arXiv Detail & Related papers (2024-07-08T06:38:50Z)
- It is Simple Sometimes: A Study On Improving Aspect-Based Sentiment Analysis Performance [3.951769809066429]
We propose PFInstruct, an extension to an instruction learning paradigm by appending an NLP-related task prefix to the task description.
This simple approach leads to improved performance across all tested SemEval subtasks, surpassing previous state-of-the-art (SOTA) on the ATE subtask (Rest14) by +3.28 F1-score, and on the AOOE subtask by an average of +5.43 F1-score.
arXiv Detail & Related papers (2024-05-31T08:57:09Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks that have little, or even non-overlapping, annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Musketeer: Joint Training for Multi-task Vision Language Model with Task Explanation Prompts [75.75548749888029]
We present a vision-language model whose parameters are jointly trained on all tasks and fully shared among multiple heterogeneous tasks.
With a single model, Musketeer achieves results comparable to or better than strong baselines trained on single tasks, almost uniformly across multiple tasks.
arXiv Detail & Related papers (2023-05-11T17:57:49Z)
- OFASys: A Multi-Modal Multi-Task Learning System for Building Generalist Models [72.8156832931841]
Generalist models are capable of performing diverse multi-modal tasks in a task-agnostic way within a single model.
We release a generalist model learning system, OFASys, built on top of a declarative task interface named multi-modal instruction.
arXiv Detail & Related papers (2022-12-08T17:07:09Z)
- Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z)
- A Unified Generative Framework for Aspect-Based Sentiment Analysis [33.911655982545206]
Aspect-based Sentiment Analysis (ABSA) aims to identify the aspect terms, their corresponding sentiment polarities, and the opinion terms.
There exist seven subtasks in ABSA.
In this paper, we redefine every subtask target as a sequence composed of pointer indexes and sentiment class indexes.
We exploit the pre-training sequence-to-sequence model BART to solve all ABSA subtasks in an end-to-end framework.
arXiv Detail & Related papers (2021-06-08T12:55:22Z)
- Exploring Relational Context for Multi-Task Dense Prediction [76.86090370115]
We consider a multi-task environment for dense prediction tasks, represented by a common backbone and independent task-specific heads.
We explore various attention-based contexts, such as global and local, in the multi-task setting.
We propose an Adaptive Task-Relational Context module, which samples the pool of all available contexts for each task pair.
arXiv Detail & Related papers (2021-04-28T16:45:56Z)
- Low Resource Multi-Task Sequence Tagging -- Revisiting Dynamic Conditional Random Fields [67.51177964010967]
We compare different models for low resource multi-task sequence tagging that leverage dependencies between label sequences for different tasks.
We find that explicit modeling of inter-dependencies between task predictions outperforms single-task as well as standard multi-task models.
arXiv Detail & Related papers (2020-05-01T07:11:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.