UniSA: Unified Generative Framework for Sentiment Analysis
- URL: http://arxiv.org/abs/2309.01339v1
- Date: Mon, 4 Sep 2023 03:49:30 GMT
- Title: UniSA: Unified Generative Framework for Sentiment Analysis
- Authors: Zaijing Li, Ting-En Lin, Yuchuan Wu, Meng Liu, Fengxiao Tang, Ming
Zhao, Yongbin Li
- Abstract summary: Sentiment analysis aims to understand people's emotional states and predict emotional categories based on multimodal information.
It consists of several subtasks, such as emotion recognition in conversation (ERC), aspect-based sentiment analysis (ABSA), and multimodal sentiment analysis (MSA).
- Score: 48.78262926516856
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sentiment analysis is a crucial task that aims to understand people's
emotional states and predict emotional categories based on multimodal
information. It consists of several subtasks, such as emotion recognition in
conversation (ERC), aspect-based sentiment analysis (ABSA), and multimodal
sentiment analysis (MSA). However, unifying all subtasks in sentiment analysis
presents numerous challenges, including modality alignment, unified
input/output forms, and dataset bias. To address these challenges, we propose a
Task-Specific Prompt method to jointly model subtasks and introduce a
multimodal generative framework called UniSA. Additionally, we organize the
benchmark datasets of main subtasks into a new Sentiment Analysis Evaluation
benchmark, SAEval. We design novel pre-training tasks and training methods to
enable the model to learn generic sentiment knowledge among subtasks to improve
the model's multimodal sentiment perception ability. Our experimental results
show that UniSA performs comparably to the state-of-the-art on all subtasks and
generalizes well to various subtasks in sentiment analysis.
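To make the task-specific prompt idea concrete, here is a minimal, text-only sketch: each subtask's input is wrapped in a task prompt so one generative model serves ERC, ABSA, and MSA through a single text-to-text interface. The prompt wording, field names, and the t5-small backbone are illustrative assumptions, not UniSA's actual multimodal implementation.

```python
# Minimal text-only sketch of task-specific prompting for a unified
# generative sentiment model. Prompt wording and the t5-small backbone
# are assumptions for illustration, not UniSA's actual setup.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def build_input(task: str, text: str, context: str = "") -> str:
    # A task-specific prefix tells the shared model which subtask to
    # perform, reducing ERC, ABSA, and MSA to one input/output form.
    prompts = {
        "erc":  f"[ERC] context: {context} utterance: {text} emotion:",
        "absa": f"[ABSA] sentence: {text} aspect sentiment:",
        "msa":  f"[MSA] utterance: {text} sentiment:",
    }
    return prompts[task]

def predict(task: str, text: str, context: str = "") -> str:
    inputs = tokenizer(build_input(task, text, context), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=8)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(predict("erc", "I can't believe we won!", context="The match just ended."))
```

Generating labels as text also means a new subtask can be added by defining a new prompt rather than a new classification head.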
Related papers
- PanoSent: A Panoptic Sextuple Extraction Benchmark for Multimodal Conversational Aspect-based Sentiment Analysis [74.41260927676747]
This paper bridges the gaps by introducing multimodal conversational Aspect-Based Sentiment Analysis (ABSA).
To benchmark the tasks, we construct PanoSent, a dataset annotated both manually and automatically, featuring high quality, large scale, multimodality, multilingualism, multiple scenarios, and coverage of both implicit and explicit sentiment elements.
To effectively address the tasks, we devise a novel Chain-of-Sentiment reasoning framework, together with a novel multimodal large language model (namely Sentica) and a paraphrase-based verification mechanism.
(arXiv 2024-08-18T13:51:01Z)
- Evaluation of data inconsistency for multi-modal sentiment analysis [20.332527596452625]
Emotion semantic inconsistency is a ubiquitous challenge in multi-modal sentiment analysis.
Our research presents a new challenge and offers valuable insights for the future development of sentiment analysis systems.
(arXiv 2024-06-05T07:11:56Z)
- Syntax-Informed Interactive Model for Comprehensive Aspect-Based Sentiment Analysis [0.0]
We introduce an innovative model: Syntactic Dependency Enhanced Multi-Task Interaction Architecture (SDEMTIA) for comprehensive ABSA.
Our approach exploits syntactic knowledge (dependency relations and types) through a specialized Syntactic Dependency Embedded Interactive Network (SDEIN).
We also incorporate a novel and efficient message-passing mechanism within a multi-task learning framework to bolster learning efficacy.
(arXiv 2023-11-28T16:03:22Z)
- UnifiedABSA: A Unified ABSA Framework Based on Multi-task Instruction Tuning [25.482853330324748]
Aspect-Based Sentiment Analysis (ABSA) aims to provide fine-grained aspect-level sentiment information.
There are many ABSA tasks, and the current dominant paradigm is to train task-specific models for each task.
We present UnifiedABSA, a general-purpose ABSA framework based on multi-task instruction tuning.
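As a rough illustration of multi-task instruction tuning, the sketch below casts several ABSA tasks into one shared (instruction + input -> output) text format so a single model can be tuned on all of them jointly; the instruction wording and task names are assumptions for illustration, not UnifiedABSA's actual templates.

```python
# Hypothetical instruction templates for multi-task ABSA instruction
# tuning; wording and task mix are illustrative assumptions.
INSTRUCTIONS = {
    "ate":  "Extract all aspect terms from the sentence.",
    "asc":  "Classify the sentiment polarity of the given aspect.",
    "aste": "Extract all (aspect, opinion, sentiment) triplets.",
}

def to_example(task: str, sentence: str, target: str,
               aspect: str | None = None) -> dict:
    # Every task becomes the same instruction-plus-input text pair,
    # so one model covers all ABSA tasks without per-task heads.
    prompt = INSTRUCTIONS[task]
    if aspect is not None:
        prompt += f" Aspect: {aspect}."
    return {"input": f"{prompt}\nSentence: {sentence}", "output": target}

print(to_example("asc", "The pasta was great but service was slow.",
                 "positive", aspect="pasta"))
```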
(arXiv 2022-11-20T14:21:09Z)
- Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis [72.9124467710526]
Generative approaches have been proposed to extract all four sentiment elements as (one or more) quadruplets from text in a single task.
We propose a unified framework for solving ABSA and its associated sub-tasks to improve performance in few-shot scenarios.
(arXiv 2022-10-12T23:38:57Z)
- Seeking Subjectivity in Visual Emotion Distribution Learning [93.96205258496697]
Visual Emotion Analysis (VEA) aims to predict people's emotions towards different visual stimuli.
Existing methods often predict visual emotion distribution in a unified network, neglecting the inherent subjectivity in its crowd voting process.
We propose a novel Subjectivity Appraise-and-Match Network (SAMNet) to investigate the subjectivity in visual emotion distribution.
(arXiv 2022-07-25T02:20:03Z)
- A Survey on Aspect-Based Sentiment Analysis: Tasks, Methods, and Challenges [58.97831696674075]
ABSA aims to analyze and understand people's opinions at the aspect level.
We provide a new taxonomy for ABSA that organizes existing studies along the axes of the sentiment elements concerned.
We summarize how pre-trained language models have been utilized for ABSA, which has raised its performance to a new level.
(arXiv 2022-03-02T12:01:46Z)
- A Unified Generative Framework for Aspect-Based Sentiment Analysis [33.911655982545206]
Aspect-based Sentiment Analysis (ABSA) aims to identify the aspect terms, their corresponding sentiment polarities, and the opinion terms.
There exist seven subtasks in ABSA.
In this paper, we redefine every subtask target as a sequence mixing pointer indexes and sentiment class indexes.
We exploit the pre-trained sequence-to-sequence model BART to solve all ABSA subtasks in an end-to-end framework.
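A minimal sketch of this pointer-index target encoding follows, under the simplifying assumption that class indexes are offset by the sentence length (the paper's exact index scheme may differ): each triplet is serialized as span-boundary token positions plus a sentiment class index.

```python
# Sketch of a pointer-index target sequence for generative ABSA.
# The offset convention below is a simplifying assumption, not
# necessarily the paper's exact scheme.
SENT_CLASSES = ["POS", "NEG", "NEU"]

def encode_target(tokens, triplets):
    """triplets: list of ((a_start, a_end), (o_start, o_end), polarity)."""
    target = []
    for (a_start, a_end), (o_start, o_end), polarity in triplets:
        # Pointer indexes are token positions in the source sentence;
        # class indexes are offset by len(tokens) to avoid clashes.
        target += [a_start, a_end, o_start, o_end,
                   len(tokens) + SENT_CLASSES.index(polarity)]
    return target

tokens = ["The", "battery", "life", "is", "great"]
# Aspect "battery life" (tokens 1-2), opinion "great" (token 4), positive.
print(encode_target(tokens, [((1, 2), (4, 4), "POS")]))  # -> [1, 2, 4, 4, 5]
```

A decoder (BART in the paper) then generates this index sequence, and the predicted spans are read back from the pointer positions.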
(arXiv 2021-06-08T12:55:22Z)
- Targeted aspect based multimodal sentiment analysis: an attention capsule extraction and multi-head fusion network [0.0]
We propose targeted aspect-based multimodal sentiment analysis (TABMSA) for the first time.
We devise an attention capsule extraction and multi-head fusion network (EF-Net) for the TABMSA task.
We evaluate the proposed model on two manually annotated datasets.
(arXiv 2021-03-13T09:11:24Z)
- Transformer-based Multi-Aspect Modeling for Multi-Aspect Multi-Sentiment Analysis [56.893393134328996]
We propose a novel Transformer-based Multi-aspect Modeling scheme (TMM), which can capture potential relations between multiple aspects and simultaneously detect the sentiment of all aspects in a sentence.
Our method achieves noticeable improvements compared with strong baselines such as BERT and RoBERTa.
(arXiv 2020-11-01T11:06:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.