A Multi-turn Machine Reading Comprehension Framework with Rethink
Mechanism for Emotion-Cause Pair Extraction
- URL: http://arxiv.org/abs/2209.07972v1
- Date: Fri, 16 Sep 2022 14:38:58 GMT
- Title: A Multi-turn Machine Reading Comprehension Framework with Rethink
Mechanism for Emotion-Cause Pair Extraction
- Authors: Changzhi Zhou, Dandan Song, Jing Xu, Zhijing Wu
- Abstract summary: Emotion-cause pair extraction (ECPE) is an emerging task in emotion cause analysis.
We propose a Multi-turn MRC framework with Rethink mechanism (MM-R) to tackle the ECPE task.
Our framework can model complicated relations between emotions and causes while avoiding generating the pairing matrix.
- Score: 6.6564045064972825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emotion-cause pair extraction (ECPE) is an emerging task in emotion cause
analysis, which extracts potential emotion-cause pairs from an emotional
document. Most recent studies use end-to-end methods to tackle the ECPE task.
However, these methods either suffer from a label sparsity problem or fail to
model complicated relations between emotions and causes. Furthermore, none of
them considers the explicit semantic information of clauses. To this end, we
transform the ECPE task into a document-level machine reading comprehension
(MRC) task and propose a Multi-turn MRC framework with Rethink mechanism
(MM-R). Our framework can model complicated relations between emotions and
causes while avoiding generating the pairing matrix (the leading cause of the
label sparsity problem). In addition, the multi-turn structure can fuse the explicit
semantic information flow between emotions and causes. Extensive experiments on
the benchmark emotion cause corpus demonstrate the effectiveness of our
proposed framework, which outperforms existing state-of-the-art methods.
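Neither the abstract nor the summary pins the turn structure down concretely, so the following is a minimal sketch of how ECPE could look when cast as multi-turn MRC with a rethink pass, written to match the description above. The `mrc_answer` callable and the query wordings are hypothetical stand-ins, not the authors' model or query templates.

```python
# Minimal sketch (not the authors' code) of casting ECPE as multi-turn MRC
# with a rethink pass. `mrc_answer` is a hypothetical callable standing in
# for a trained clause-level MRC model: given a natural-language query and
# the document's clauses, it returns the indices of the answer clauses.
from typing import Callable, List, Set, Tuple

MRCModel = Callable[[str, List[str]], List[int]]

def extract_emotion_cause_pairs(
    clauses: List[str], mrc_answer: MRCModel
) -> Set[Tuple[int, int]]:
    pairs: Set[Tuple[int, int]] = set()

    # Turn 1: a static query finds emotion clauses, so no clause-by-clause
    # pairing matrix (the source of label sparsity) is ever scored.
    emotions = mrc_answer("Which clauses express an emotion?", clauses)

    for e in emotions:
        # Turn 2: a dynamic query, conditioned on the emotion clause,
        # retrieves its candidate cause clauses.
        causes = mrc_answer(
            f'Which clauses cause the emotion in "{clauses[e]}"?', clauses
        )
        for c in causes:
            # Rethink: query in the reverse direction (cause -> emotion)
            # and keep the pair only if the original emotion is recovered.
            confirmed = mrc_answer(
                f'Which clauses express an emotion caused by "{clauses[c]}"?',
                clauses,
            )
            if e in confirmed:
                pairs.add((e, c))
    return pairs
```

With a dummy `mrc_answer` (e.g., simple keyword matching) the function can be exercised end to end; the point is only the control flow: one static emotion query, one cause query per detected emotion, and a reverse query that rethinks each candidate pair before accepting it.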
Related papers
- Generative Emotion Cause Explanation in Multimodal Conversations [23.39751445330256]
We propose a new task, Multimodal Conversation Emotion Cause Explanation (MCECE).
It aims to generate a detailed explanation of the emotional cause to the target utterance within a multimodal conversation scenario.
A novel approach, FAME-Net, is proposed, which harnesses the power of Large Language Models (LLMs) to analyze visual data and accurately interpret the emotions conveyed through facial expressions in videos.
arXiv Detail & Related papers (2024-11-01T09:16:30Z)
- PanoSent: A Panoptic Sextuple Extraction Benchmark for Multimodal Conversational Aspect-based Sentiment Analysis [74.41260927676747]
This paper bridges the gaps by introducing multimodal conversational Aspect-based Sentiment Analysis (ABSA).
To benchmark the tasks, we construct PanoSent, a dataset annotated both manually and automatically, featuring high quality, large scale, multimodality, multilingualism, multi-scenarios, and covering both implicit and explicit sentiment elements.
To effectively address the tasks, we devise a novel Chain-of-Sentiment reasoning framework, together with a novel multimodal large language model (namely Sentica) and a paraphrase-based verification mechanism.
arXiv Detail & Related papers (2024-08-18T13:51:01Z)
- Think out Loud: Emotion Deducing Explanation in Dialogues [57.90554323226896]
We propose a new task "Emotion Deducing Explanation in Dialogues" (EDEN)
EDEN recognizes emotion and causes in an explicitly thinking way.
It can help Large Language Models (LLMs) achieve better recognition of emotions and causes.
arXiv Detail & Related papers (2024-06-07T08:58:29Z)
- ECR-Chain: Advancing Generative Language Models to Better Emotion-Cause Reasoners through Reasoning Chains [61.50113532215864]
Causal Emotion Entailment (CEE) aims to identify the causal utterances in a conversation that stimulate the emotions expressed in a target utterance.
Current works in CEE mainly focus on modeling semantic and emotional interactions in conversations.
We introduce a step-by-step reasoning method, Emotion-Cause Reasoning Chain (ECR-Chain), to infer the stimulus from the target emotional expressions in conversations.
arXiv Detail & Related papers (2024-05-17T15:45:08Z)
- UniMEEC: Towards Unified Multimodal Emotion Recognition and Emotion Cause [18.99103120856208]
We propose a Unified Multimodal Emotion recognition and Emotion-Cause analysis framework (UniMEEC) to explore the causality between emotion and emotion cause.
UniMEEC reformulates the MERC and MECPE tasks as mask prediction problems and unifies them with a causal prompt template.
Experiment results on four public benchmark datasets verify the model performance on MERC and MECPE tasks.
arXiv Detail & Related papers (2024-03-30T15:59:17Z)
- ECQED: Emotion-Cause Quadruple Extraction in Dialogs [37.66816413841564]
We present Emotion-Cause Quadruple Extraction in Dialogs (ECQED), which requires detecting emotion-cause utterance pairs and emotion and cause types.
We show that introducing the fine-grained emotion and cause features clearly improves dialog generation.
arXiv Detail & Related papers (2023-06-06T19:04:30Z)
- Unsupervised Extractive Summarization of Emotion Triggers [56.50078267340738]
We develop new unsupervised learning models that can jointly detect emotions and summarize their triggers.
Our best approach, entitled Emotion-Aware Pagerank, incorporates emotion information from external sources combined with a language understanding module.
arXiv Detail & Related papers (2023-06-02T11:07:13Z)
- Emotion Recognition from Multiple Modalities: Fundamentals and Methodologies [106.62835060095532]
We discuss several key aspects of multi-modal emotion recognition (MER)
We begin with a brief introduction on widely used emotion representation models and affective modalities.
We then summarize existing emotion annotation strategies and corresponding computational tasks.
Finally, we outline several real-world applications and discuss some future directions.
arXiv Detail & Related papers (2021-08-18T21:55:20Z)
- Emotion-aware Chat Machine: Automatic Emotional Response Generation for Human-like Emotional Interaction [55.47134146639492]
This article proposes a unified end-to-end neural architecture, which is capable of simultaneously encoding the semantics and the emotions in a post.
Experiments on real-world data demonstrate that the proposed method outperforms the state-of-the-art methods in terms of both content coherence and emotion appropriateness.
arXiv Detail & Related papers (2021-06-06T06:26:15Z)
- ECSP: A New Task for Emotion-Cause Span-Pair Extraction and Classification [0.9137554315375922]
We propose a new task: Emotion-Cause Span-Pair extraction and classification (ECSP)
ECSP aims to extract the potential span-pair of emotion and corresponding causes in a document, and make emotion classification for each pair.
We propose a span-based extract-then-classify (ETC) model, where emotion and cause are directly extracted and paired from the document.
arXiv Detail & Related papers (2020-03-07T03:36:47Z)
- End-to-end Emotion-Cause Pair Extraction via Learning to Link [18.741585103275334]
Emotion-cause pair extraction (ECPE) aims at jointly investigating emotions and their underlying causes in documents.
Existing approaches to ECPE generally adopt a two-stage method, i.e., (1) emotion and cause detection, and then (2) pairing the detected emotions and causes (this pipeline is sketched below).
We propose a multi-task learning model that can extract emotions, causes and emotion-cause pairs simultaneously in an end-to-end manner.
arXiv Detail & Related papers (2020-02-25T07:49:12Z)
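For contrast with the multi-turn MRC sketch above, the two-stage pipeline mentioned in the last entry can be outlined as follows: detect emotion and cause clauses separately, then classify every detected combination. The Cartesian pairing step is what the main abstract calls the pairing matrix and identifies as the source of label sparsity. All three classifiers here are hypothetical placeholders, not components of any listed paper.

```python
# Rough sketch (assumed, not taken from any of the listed papers) of the
# classic two-stage ECPE pipeline: detect clauses first, then pair them.
# The three classifier callables are hypothetical stand-ins for trained models.
from itertools import product
from typing import Callable, List, Tuple

ClauseClassifier = Callable[[str], bool]
PairClassifier = Callable[[str, str], bool]

def two_stage_ecpe(
    clauses: List[str],
    is_emotion: ClauseClassifier,
    is_cause: ClauseClassifier,
    is_pair: PairClassifier,
) -> List[Tuple[int, int]]:
    # Stage 1: detect emotion clauses and cause clauses independently.
    emotion_idx = [i for i, c in enumerate(clauses) if is_emotion(c)]
    cause_idx = [i for i, c in enumerate(clauses) if is_cause(c)]

    # Stage 2: score every (emotion, cause) combination -- effectively a
    # pairing matrix in which true pairs are rare, hence label sparsity.
    return [
        (e, c)
        for e, c in product(emotion_idx, cause_idx)
        if is_pair(clauses[e], clauses[c])
    ]
```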