DiaASQ: A Benchmark of Conversational Aspect-based Sentiment Quadruple Analysis
- URL: http://arxiv.org/abs/2211.05705v4
- Date: Mon, 22 May 2023 10:49:20 GMT
- Title: DiaASQ: A Benchmark of Conversational Aspect-based Sentiment Quadruple Analysis
- Authors: Bobo Li, Hao Fei, Fei Li, Yuhan Wu, Jinsong Zhang, Shengqiong Wu,
Jingye Li, Yijiang Liu, Lizi Liao, Tat-Seng Chua and Donghong Ji
- Abstract summary: We introduce DiaASQ, aiming to detect the quadruple of target-aspect-opinion-sentiment in a dialogue.
We manually construct a large-scale high-quality DiaASQ dataset in both Chinese and English languages.
We develop a neural model to benchmark the task, which advances in effectively performing end-to-end quadruple prediction.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid development of aspect-based sentiment analysis (ABSA) over
recent decades shows great potential for real-world applications. Current ABSA works,
however, are mostly limited to the scenario of a single text piece, leaving the
study in dialogue contexts unexplored. To bridge the gap between fine-grained
sentiment analysis and conversational opinion mining, in this work, we
introduce a novel task of conversational aspect-based sentiment quadruple
analysis, namely DiaASQ, aiming to detect the quadruple of
target-aspect-opinion-sentiment in a dialogue. We manually construct a
large-scale high-quality DiaASQ dataset in both Chinese and English languages.
We develop a neural model to benchmark the task, which effectively performs
end-to-end quadruple prediction and incorporates rich dialogue-specific and
discourse feature representations for better cross-utterance quadruple
extraction. We hope the new benchmark will
spur more advancements in the sentiment analysis community.
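The target-aspect-opinion-sentiment quadruple described above can be pictured as a small data structure. The sketch below is illustrative only: the field names and example values are assumptions based on the task definition in the abstract, not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class SentimentQuad:
    """One sentiment quadruple as defined by the DiaASQ task."""
    target: str     # the entity under discussion, e.g. a phone model
    aspect: str     # the attribute of the target, e.g. "battery"
    opinion: str    # the opinion expression, e.g. "drains really fast"
    sentiment: str  # polarity label, e.g. "pos", "neg", or "neu"

def extract_quads(dialogue: list[str]) -> list[SentimentQuad]:
    """Placeholder for an end-to-end extractor over a multi-turn dialogue.

    A real model would encode the utterances jointly, so a target mentioned
    in one turn can be linked to an opinion expressed in a later turn.
    """
    raise NotImplementedError

# Expected output format for a hypothetical two-turn dialogue, where the
# target appears in one utterance and the opinion in another:
dialogue = ["How is the battery on the X20?", "It drains really fast."]
expected = [SentimentQuad("X20", "battery", "drains really fast", "neg")]
```

The cross-utterance case in the example is exactly what makes the dialogue setting harder than single-sentence ABSA: no single utterance contains the whole quadruple.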
Related papers
- PanoSent: A Panoptic Sextuple Extraction Benchmark for Multimodal Conversational Aspect-based Sentiment Analysis [74.41260927676747]
This paper bridges the gap by introducing a multimodal conversational Aspect-Based Sentiment Analysis (ABSA) task.
To benchmark the tasks, we construct PanoSent, a dataset annotated both manually and automatically, featuring high quality, large scale, multimodality, multilingualism, multi-scenarios, and covering both implicit and explicit sentiment elements.
To effectively address the tasks, we devise a novel Chain-of-Sentiment reasoning framework, together with a novel multimodal large language model (namely Sentica) and a paraphrase-based verification mechanism.
arXiv Detail & Related papers (2024-08-18T13:51:01Z) - Dynamic Multi-Scale Context Aggregation for Conversational Aspect-Based Sentiment Quadruple Analysis [4.768182075837568]
DiaASQ aims to extract the quadruple of target-aspect-opinion-sentiment within a dialogue.
Existing work independently encodes each utterance, thereby struggling to capture long-range conversational context.
We propose a novel Dynamic Multi-scale Context Aggregation network (DMCA) to address the challenges.
arXiv Detail & Related papers (2023-09-27T08:17:28Z) - Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis [72.9124467710526]
Generative approaches have been proposed to extract all four elements as (one or more) quadruplets from text as a single task.
We propose a unified framework for solving ABSA, and the associated sub-tasks to improve the performance in few-shot scenarios.
arXiv Detail & Related papers (2022-10-12T23:38:57Z) - Aspect Sentiment Quad Prediction as Paraphrase Generation [53.33072918744124]
We introduce the Aspect Sentiment Quad Prediction (ASQP) task, aiming to jointly detect all sentiment elements in quads for a given opinionated sentence.
We propose a novel Paraphrase modeling paradigm to cast the ASQP task into a paraphrase generation process.
In this way, the semantics of the sentiment elements can be fully exploited by learning to generate them in natural language form.
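The paraphrase formulation can be sketched with a small template function. This is a sketch of the general idea only; the template wording and label names below are assumptions, not the paper's exact templates.

```python
# Cast a sentiment quad (aspect category, aspect term, opinion term,
# polarity) into a natural-language "paraphrase" target sequence.
# A seq2seq model is trained to generate this sentence from the input
# review; the quad is then recovered by parsing the generated sentence.
def quad_to_paraphrase(category: str, aspect: str,
                       opinion: str, sentiment: str) -> str:
    polarity_word = {"POS": "great", "NEG": "bad", "NEU": "ok"}[sentiment]
    return f"{category} is {polarity_word} because {aspect} is {opinion}"

print(quad_to_paraphrase("food quality", "pizza", "delicious", "POS"))
# → food quality is great because pizza is delicious
```

Generating the elements as a fluent sentence, rather than as bare labels, is what lets the model exploit the label semantics mentioned above.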
arXiv Detail & Related papers (2021-10-02T12:57:27Z) - BiERU: Bidirectional Emotional Recurrent Unit for Conversational Sentiment Analysis [18.1320976106637]
The main difference between conversational sentiment analysis and single sentence sentiment analysis is the existence of context information.
Existing approaches employ complicated deep learning structures to distinguish different parties in a conversation and then model the context information.
We propose a fast, compact and parameter-efficient party-ignorant framework named bidirectional emotional recurrent unit for conversational sentiment analysis.
arXiv Detail & Related papers (2020-05-31T11:13:13Z) - Modeling Long Context for Task-Oriented Dialogue State Generation [51.044300192906995]
We propose a multi-task learning model with a simple yet effective utterance tagging technique and a bidirectional language model.
Our approaches attempt to solve the problem that the performance of the baseline significantly drops when the input dialogue context sequence is long.
In our experiments, our proposed model achieves a 7.03% relative improvement over the baseline, establishing a new state-of-the-art joint goal accuracy of 52.04% on the MultiWOZ 2.0 dataset.
arXiv Detail & Related papers (2020-04-29T11:02:25Z) - A Deep Neural Framework for Contextual Affect Detection [51.378225388679425]
A short, simple text that carries no emotion on its own can convey strong emotion when read together with its context.
We propose a Contextual Affect Detection framework which learns the inter-dependence of words in a sentence.
arXiv Detail & Related papers (2020-01-28T05:03:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.