Multimodal Emotion-Cause Pair Extraction in Conversations
- URL: http://arxiv.org/abs/2110.08020v1
- Date: Fri, 15 Oct 2021 11:30:24 GMT
- Title: Multimodal Emotion-Cause Pair Extraction in Conversations
- Authors: Fanfan Wang, Zixiang Ding, Rui Xia, Zhaoyu Li and Jianfei Yu
- Abstract summary: We introduce a new task named Multimodal Emotion-Cause Pair Extraction in Conversations.
We aim to jointly extract emotions and their associated causes from conversations reflected in text, audio and video.
Preliminary experimental results demonstrate the potential of multimodal information fusion for discovering both emotions and causes in conversations.
- Score: 23.95461291718006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emotion cause analysis has received considerable attention in recent years.
Previous studies primarily focused on emotion cause extraction from texts in
news articles or microblogs. It is also interesting to discover emotions and
their causes in conversations. As conversation in its natural form is
multimodal, a large number of studies have been carried out on multimodal
emotion recognition in conversations, but there is still a lack of work on
multimodal emotion cause analysis. In this work, we introduce a new task named
Multimodal Emotion-Cause Pair Extraction in Conversations, aiming to jointly
extract emotions and their associated causes from conversations reflected in
multiple modalities (text, audio and video). We accordingly construct a
multimodal conversational emotion cause dataset, Emotion-Cause-in-Friends,
which contains 9,272 multimodal emotion-cause pairs annotated on 13,509
utterances in the sitcom Friends. We finally benchmark the task by establishing
a baseline system that incorporates multimodal features for emotion-cause pair
extraction. Preliminary experimental results demonstrate the potential of
multimodal information fusion for discovering both emotions and causes in
conversations.
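To make the task output concrete, here is a minimal Python sketch of emotion-cause pairs represented as utterance-index tuples, together with a pair-level F1 metric. The field names, the example dialogue and the exact matching criterion are illustrative assumptions, not the paper's actual data format or official evaluation protocol.

```python
from dataclasses import dataclass

# Hypothetical record for one annotated utterance; the real
# Emotion-Cause-in-Friends dataset may use different fields and formats.
@dataclass
class Utterance:
    index: int                # position within the conversation
    speaker: str
    text: str
    emotion: str = "neutral"  # e.g. "joy", "anger", "surprise"

# An emotion-cause pair links the index of an utterance expressing an
# emotion to the index of the utterance containing its cause.
Pair = tuple[int, int]  # (emotion_utterance_index, cause_utterance_index)

def pair_f1(predicted: set[Pair], gold: set[Pair]) -> float:
    """Pair-level F1: a prediction counts only if both members match a gold
    pair exactly; a stricter variant would also require the emotion label."""
    correct = len(predicted & gold)
    if correct == 0:
        return 0.0
    precision = correct / len(predicted)
    recall = correct / len(gold)
    return 2 * precision * recall / (precision + recall)

# A two-utterance toy conversation (invented lines in the style of the sitcom):
conversation = [
    Utterance(0, "Chandler", "I got the job!", emotion="joy"),
    Utterance(1, "Joey", "That's great news!", emotion="joy"),
]
gold = {(0, 0), (1, 0)}     # a cause may live in the emotion utterance itself
pred = {(1, 0), (1, 1)}
print(pair_f1(pred, gold))  # 0.5
```

Representing a pair purely by utterance indices keeps the evaluation modality-agnostic: the same matching applies whether the cause evidence came from text, audio or video features.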
Related papers
- Think out Loud: Emotion Deducing Explanation in Dialogues [57.90554323226896]
We propose a new task, "Emotion Deducing Explanation in Dialogues" (EDEN).
EDEN recognizes emotions and their causes through explicit reasoning.
It can help Large Language Models (LLMs) achieve better recognition of emotions and causes.
arXiv Detail & Related papers (2024-06-07T08:58:29Z)
- SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations [53.60993109543582]
SemEval-2024 Task 3, named Multimodal Emotion Cause Analysis in Conversations, aims at extracting all pairs of emotions and their corresponding causes from conversations.
Under different modality settings, it consists of two subtasks: Textual Emotion-Cause Pair Extraction in Conversations (TECPE) and Multimodal Emotion-Cause Pair Extraction in Conversations (MECPE).
In this paper, we introduce the task, dataset and evaluation settings, summarize the systems of the top teams, and discuss the findings of the participants.
arXiv Detail & Related papers (2024-05-19T09:59:00Z)
- Samsung Research China-Beijing at SemEval-2024 Task 3: A multi-stage framework for Emotion-Cause Pair Extraction in Conversations [12.095837596104552]
In human-computer interaction, it is crucial for agents to respond to humans by understanding their emotions.
The new task, Multimodal Emotion-Cause Pair Extraction in Conversations, requires recognizing emotions and identifying their causal expressions.
We propose a multi-stage framework to generate emotions and extract emotion-cause pairs given the target emotion.
arXiv Detail & Related papers (2024-04-25T11:52:21Z)
- LastResort at SemEval-2024 Task 3: Exploring Multimodal Emotion Cause Pair Extraction as Sequence Labelling Task [3.489826905722736]
SemEval 2024 introduces the task of Multimodal Emotion Cause Analysis in Conversations.
This paper proposes models that tackle the task as an utterance labelling and a sequence labelling problem; a minimal sketch of this framing appears after the related-papers list below.
Our architecture ranked 8th on the official leaderboard for the task, achieving an F1-score of 0.1759.
arXiv Detail & Related papers (2024-04-02T16:32:49Z)
- Dynamic Causal Disentanglement Model for Dialogue Emotion Detection [77.96255121683011]
We propose a Dynamic Causal Disentanglement Model based on hidden variable separation.
This model effectively decomposes the content of dialogues and investigates the temporal accumulation of emotions.
Specifically, we propose a dynamic temporal disentanglement model to infer the propagation of utterances and hidden variables.
arXiv Detail & Related papers (2023-09-13T12:58:09Z)
- Multi-Task Learning Framework for Extracting Emotion Cause Span and Entailment in Conversations [3.2260643152341095]
We propose neural models to extract emotion cause spans and entailment in conversations.
MuTEC is an end-to-end multi-task learning framework for extracting emotions, emotion causes, and entailment in conversations.
arXiv Detail & Related papers (2022-11-07T18:14:45Z)
- Why Do You Feel This Way? Summarizing Triggers of Emotions in Social Media Posts [61.723046082145416]
We introduce CovidET (Emotions and their Triggers during Covid-19), a dataset of 1,900 English Reddit posts related to COVID-19.
We develop strong baselines to jointly detect emotions and summarize emotion triggers.
Our analyses show that CovidET presents new challenges in emotion-specific summarization, as well as multi-emotion detection in long social media posts.
arXiv Detail & Related papers (2022-10-22T19:10:26Z)
- M3ED: Multi-modal Multi-scene Multi-label Emotional Dialogue Database [139.08528216461502]
We propose a Multi-modal Multi-scene Multi-label Emotional Dialogue dataset, M3ED.
M3ED contains 990 dyadic emotional dialogues from 56 different TV series, a total of 9,082 turns and 24,449 utterances.
To the best of our knowledge, M3ED is the first multimodal emotional dialogue dataset in Chinese.
arXiv Detail & Related papers (2022-05-09T06:52:51Z)
- Emotion Recognition from Multiple Modalities: Fundamentals and Methodologies [106.62835060095532]
We discuss several key aspects of multi-modal emotion recognition (MER).
We begin with a brief introduction on widely used emotion representation models and affective modalities.
We then summarize existing emotion annotation strategies and corresponding computational tasks.
Finally, we outline several real-world applications and discuss some future directions.
arXiv Detail & Related papers (2021-08-18T21:55:20Z)
- Multi-Task Learning and Adapted Knowledge Models for Emotion-Cause Extraction [18.68808042388714]
We present solutions that tackle both emotion recognition and emotion cause detection in a joint fashion.
Considering that common-sense knowledge plays an important role in understanding implicitly expressed emotions, we propose methods built on adapted knowledge models.
We show performance improvement on both tasks when including common-sense reasoning and a multitask framework.
arXiv Detail & Related papers (2021-06-17T20:11:04Z)
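As a companion to the LastResort entry above, the following is a minimal sketch of how extracting the causes of a given emotion utterance can be cast as per-utterance sequence labelling. The tag set and the decoding rule are assumptions for illustration, not the authors' actual scheme.

```python
# Encode the gold causes of one emotion utterance as a per-utterance tag
# sequence, so that a standard sequence labeller can be trained on it.
def causes_to_tags(num_utterances: int, cause_indices: set[int]) -> list[str]:
    return ["CAUSE" if i in cause_indices else "O" for i in range(num_utterances)]

# Decode predicted tags back into (emotion, cause) utterance-index pairs.
def tags_to_pairs(emotion_index: int, tags: list[str]) -> list[tuple[int, int]]:
    return [(emotion_index, i) for i, tag in enumerate(tags) if tag == "CAUSE"]

# A 5-utterance conversation where utterance 3 expresses an emotion
# caused by utterances 1 and 3:
tags = causes_to_tags(5, {1, 3})
print(tags)                    # ['O', 'CAUSE', 'O', 'CAUSE', 'O']
print(tags_to_pairs(3, tags))  # [(3, 1), (3, 3)]
```

One labelling pass per emotion utterance yields the full pair set for a conversation, which is what makes the sequence-labelling framing a natural fit for the pair-extraction task.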
This list is automatically generated from the titles and abstracts of the papers on this site.