Samsung Research China-Beijing at SemEval-2024 Task 3: A multi-stage framework for Emotion-Cause Pair Extraction in Conversations
- URL: http://arxiv.org/abs/2404.16905v1
- Date: Thu, 25 Apr 2024 11:52:21 GMT
- Title: Samsung Research China-Beijing at SemEval-2024 Task 3: A multi-stage framework for Emotion-Cause Pair Extraction in Conversations
- Authors: Shen Zhang, Haojie Zhang, Jing Zhang, Xudong Zhang, Yimeng Zhuang, Jinting Wu,
- Abstract summary: In human-computer interaction, it is crucial for agents to respond to humans by understanding their emotions.
A new task named Multimodal Emotion-Cause Pair Extraction in Conversations addresses recognizing emotions and identifying the expressions that cause them.
We propose a multi-stage framework that first predicts the emotion of each utterance and then extracts the emotion-cause pairs given the target emotion.
- Score: 12.095837596104552
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In human-computer interaction, it is crucial for agents to respond to humans by understanding their emotions. Unraveling the causes of those emotions is even more challenging. A new task named Multimodal Emotion-Cause Pair Extraction in Conversations addresses both recognizing emotions and identifying their causal expressions. In this study, we propose a multi-stage framework that generates the emotion of each utterance and then extracts the emotion-cause pairs given the target emotion. In the first stage, a Llama-2-based InstructERC model is utilized to extract the emotion category of each utterance in a conversation. After emotion recognition, a two-stream attention model is employed to extract the emotion-cause pairs given the target emotion for Subtask 2, while MuTEC is employed to extract the causal spans for Subtask 1. Our approach achieved first place in both subtasks of the competition.
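As a rough illustration of that control flow, the sketch below chains a per-utterance emotion recognition stage into a pair extraction stage. It is a minimal, hypothetical Python mock-up: the `recognize_emotion` and `extract_cause_pairs` stubs merely stand in for the Llama-2-based InstructERC, two-stream attention, and MuTEC components described in the abstract and are not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Utterance:
    index: int
    speaker: str
    text: str


# Stage 1 placeholder: the paper uses a Llama-2-based InstructERC model here;
# this trivial keyword lookup only stands in to show where that call would go.
def recognize_emotion(utterance: Utterance) -> str:
    lowered = utterance.text.lower()
    if any(word in lowered for word in ("great", "glad", "love")):
        return "joy"
    if any(word in lowered for word in ("lost", "sorry", "miss")):
        return "sadness"
    return "neutral"


# Stage 2 placeholder: the paper uses a two-stream attention model (Subtask 2)
# or MuTEC for causal spans (Subtask 1); here every utterance seen so far is
# naively proposed as a candidate cause of a non-neutral target emotion.
def extract_cause_pairs(target: Utterance, emotion: str,
                        history: List[Utterance]) -> List[Tuple[int, int, str]]:
    if emotion == "neutral":
        return []
    return [(target.index, cause.index, emotion) for cause in history + [target]]


def run_pipeline(conversation: List[Utterance]) -> List[Tuple[int, int, str]]:
    pairs: List[Tuple[int, int, str]] = []
    for i, utterance in enumerate(conversation):
        emotion = recognize_emotion(utterance)            # stage 1
        pairs += extract_cause_pairs(utterance, emotion,  # stage 2
                                     conversation[:i])
    return pairs


if __name__ == "__main__":
    convo = [
        Utterance(0, "A", "I lost my dog yesterday."),
        Utterance(1, "B", "I'm so sorry to hear that."),
    ]
    print(run_pipeline(convo))
```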
Related papers
- Think out Loud: Emotion Deducing Explanation in Dialogues [57.90554323226896]
We propose a new task "Emotion Deducing Explanation in Dialogues" (EDEN)
EDEN recognizes emotions and their causes through an explicit, step-by-step thinking process.
It can help Large Language Models (LLMs) achieve better recognition of emotions and causes.
arXiv Detail & Related papers (2024-06-07T08:58:29Z)
- SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations [53.60993109543582]
SemEval-2024 Task 3, named Multimodal Emotion Cause Analysis in Conversations, aims at extracting all pairs of emotions and their corresponding causes from conversations.
Under different modality settings, it consists of two subtasks: Textual Emotion-Cause Pair Extraction in Conversations (TECPE) and Multimodal Emotion-Cause Pair Extraction in Conversations (MECPE).
In this paper, we introduce the task, dataset and evaluation settings, summarize the systems of the top teams, and discuss the findings of the participants. A minimal, hypothetical sketch of the pair format shared by both subtasks follows this entry.
arXiv Detail & Related papers (2024-05-19T09:59:00Z)
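To make that pair format concrete, here is a small, hypothetical Python data structure for one extracted emotion-cause pair. The field names and the example values are illustrative assumptions, not the official dataset schema.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class EmotionCausePair:
    emotion_utt_id: int               # index of the utterance expressing the emotion
    emotion: str                      # e.g. "sadness"
    cause_utt_id: int                 # index of the utterance containing the cause
    cause_span: Optional[str] = None  # textual span (TECPE); None for purely
                                      # utterance-level, multimodal causes (MECPE)


# Hypothetical example: utterance 3 expresses sadness caused by utterance 1.
pair = EmotionCausePair(
    emotion_utt_id=3,
    emotion="sadness",
    cause_utt_id=1,
    cause_span="my dog passed away last week",
)
print(pair)
```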
- ECR-Chain: Advancing Generative Language Models to Better Emotion-Cause Reasoners through Reasoning Chains [61.50113532215864]
Causal Emotion Entailment (CEE) aims to identify the causal utterances in a conversation that stimulate the emotions expressed in a target utterance.
Current work on CEE mainly focuses on modeling semantic and emotional interactions in conversations.
We introduce a step-by-step reasoning method, Emotion-Cause Reasoning Chain (ECR-Chain), to infer the stimulus from the target emotional expressions in conversations. A toy prompt-construction sketch in this spirit follows this entry.
arXiv Detail & Related papers (2024-05-17T15:45:08Z)
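In the spirit of such step-by-step reasoning, the toy sketch below assembles a staged prompt for a generative model. The specific reasoning steps and wording are illustrative assumptions, not the exact chain defined by ECR-Chain.

```python
# Toy builder of a step-by-step emotion-cause reasoning prompt; the staged
# questions are illustrative and not the exact chain defined by ECR-Chain.
def build_reasoning_prompt(conversation, target_index, emotion):
    history = "\n".join(f"[{i}] {utt}" for i, utt in enumerate(conversation))
    return (
        f"Conversation:\n{history}\n\n"
        f"Utterance [{target_index}] expresses {emotion}.\n"
        "Reason step by step:\n"
        "1. Summarize what the target speaker is reacting to.\n"
        "2. Describe how the speaker appraises that situation.\n"
        "3. List the utterance indices that stimulate the emotion.\n"
        "Finish with the cause utterance indices on the last line."
    )


print(build_reasoning_prompt(
    ["My dog passed away last week.", "Oh no, I'm so sorry."], 1, "sadness"))
```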
- CauESC: A Causal Aware Model for Emotional Support Conversation [79.4451588204647]
Existing approaches ignore the emotion causes of the distress.
They focus on the seeker's own mental state rather than the emotional dynamics during interaction between speakers.
We propose a novel framework, CauESC, which first recognizes the emotion causes of the distress as well as the emotion effects triggered by those causes.
arXiv Detail & Related papers (2024-01-31T11:30:24Z)
- Where are We in Event-centric Emotion Analysis? Bridging Emotion Role Labeling and Appraisal-based Approaches [10.736626320566707]
The term emotion analysis in text subsumes various natural language processing tasks.
We argue that emotions and events are related in two ways.
We discuss how to incorporate psychological appraisal theories in NLP models to interpret events.
arXiv Detail & Related papers (2023-09-05T09:56:29Z)
- Speech Synthesis with Mixed Emotions [77.05097999561298]
We propose a novel formulation that measures the relative difference between the speech samples of different emotions.
We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework.
At run-time, we control the model to produce the desired emotion mixture by manually defining an emotion attribute vector. A toy illustration of such a mixing vector follows this entry.
arXiv Detail & Related papers (2022-08-11T15:45:58Z)
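To illustrate what controlling a mixture with an attribute vector could look like, the toy sketch below mixes per-emotion embeddings with a manually chosen weight vector. The random embeddings and the simple weighted sum are assumptions for illustration, not the formulation proposed in the paper.

```python
import numpy as np

# Per-emotion style embeddings; random here purely for illustration.
emotions = ["neutral", "happy", "sad", "angry"]
embeddings = {name: np.random.default_rng(seed).normal(size=8)
              for seed, name in enumerate(emotions)}

# Manually defined emotion attribute vector: mostly sad, a little happy.
attribute_vector = np.array([0.0, 0.3, 0.7, 0.0])
attribute_vector = attribute_vector / attribute_vector.sum()

# Weighted mixture used as the conditioning vector for a hypothetical TTS decoder.
mixed = sum(weight * embeddings[name]
            for weight, name in zip(attribute_vector, emotions))
print(mixed.round(3))
```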
- Multimodal Emotion-Cause Pair Extraction in Conversations [23.95461291718006]
We introduce a new task named Multimodal Emotion-Cause Pair Extraction in Conversations.
We aim to jointly extract emotions and their associated causes from conversations reflected in texts, audio and video.
Preliminary experimental results demonstrate the potential of multimodal information fusion for discovering both emotions and causes in conversations.
arXiv Detail & Related papers (2021-10-15T11:30:24Z)
- Emotion Recognition from Multiple Modalities: Fundamentals and Methodologies [106.62835060095532]
We discuss several key aspects of multi-modal emotion recognition (MER).
We begin with a brief introduction on widely used emotion representation models and affective modalities.
We then summarize existing emotion annotation strategies and corresponding computational tasks.
Finally, we outline several real-world applications and discuss some future directions.
arXiv Detail & Related papers (2021-08-18T21:55:20Z)
- Multi-Task Learning and Adapted Knowledge Models for Emotion-Cause Extraction [18.68808042388714]
We present solutions that tackle both emotion recognition and emotion cause detection in a joint fashion.
Considering that common-sense knowledge plays an important role in understanding implicitly expressed emotions, we propose novel methods that incorporate it.
We show performance improvement on both tasks when including common-sense reasoning and a multitask framework.
arXiv Detail & Related papers (2021-06-17T20:11:04Z)