Multi-Task Learning Framework for Extracting Emotion Cause Span and
Entailment in Conversations
- URL: http://arxiv.org/abs/2211.03742v1
- Date: Mon, 7 Nov 2022 18:14:45 GMT
- Title: Multi-Task Learning Framework for Extracting Emotion Cause Span and
Entailment in Conversations
- Authors: Ashwani Bhat and Ashutosh Modi
- Abstract summary: We propose neural models to extract emotion cause span and entailment in conversations.
MuTEC is an end-to-end Multi-Task learning framework for extracting emotions, emotion cause, and entailment in conversations.
- Score: 3.2260643152341095
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Predicting emotions expressed in text is a well-studied problem in the NLP
community. Recently there has been active research in extracting the cause of
an emotion expressed in text. Most previous work has performed causal emotion
entailment in documents. In this work, we propose neural models to extract
emotion cause spans and entailment in conversations. For learning such models,
we use the RECCON dataset, which is annotated with cause spans at the utterance
level. In particular, we propose MuTEC, an end-to-end Multi-Task learning
framework for extracting emotions, emotion cause, and entailment in
conversations. This is in contrast to existing baseline models that use ground
truth emotions to extract the cause. MuTEC performs better than the baselines
for most of the data folds provided in the dataset.
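Cause-span extraction of this kind is commonly framed as QA-style span prediction: a model scores each token of a candidate utterance as a possible span start or end, and decoding picks the best valid pair. A minimal sketch of that decoding step (the logits below are illustrative values, not outputs of MuTEC):

```python
def best_span(start_logits, end_logits, max_len=10):
    """Pick the (start, end) pair maximizing start_logits[s] + end_logits[e],
    subject to s <= e < s + max_len (standard QA-style span decoding)."""
    best, best_score = (0, 0), float("-inf")
    n = len(start_logits)
    for s in range(n):
        for e in range(s, min(n, s + max_len)):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

# Illustrative logits for a 6-token utterance.
start = [0.1, 2.0, 0.3, 0.0, -1.0, 0.2]
end = [0.0, 0.1, 1.5, 2.2, 0.3, -0.5]
print(best_span(start, end))  # (1, 3): tokens 1..3 form the predicted cause span
```

In an end-to-end setup like the one the abstract describes, the emotion classifier and this span extractor would share an encoder and be trained jointly rather than feeding ground-truth emotions into a separate cause model.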
Related papers
- Generative Emotion Cause Explanation in Multimodal Conversations [23.39751445330256]
We propose a new task, Multimodal Conversation Emotion Cause Explanation (MCECE).
It aims to generate a detailed explanation of the emotional cause of the target utterance within a multimodal conversation scenario.
A novel approach, FAME-Net, is proposed, that harnesses the power of Large Language Models (LLMs) to analyze visual data and accurately interpret the emotions conveyed through facial expressions in videos.
arXiv Detail & Related papers (2024-11-01T09:16:30Z) - LastResort at SemEval-2024 Task 3: Exploring Multimodal Emotion Cause Pair Extraction as Sequence Labelling Task [3.489826905722736]
SemEval 2024 introduces the task of Multimodal Emotion Cause Analysis in Conversations.
This paper proposes models that tackle this task as an utterance labeling and a sequence labeling problem.
On the official leaderboard for the task, our architecture ranked 8th with an F1-score of 0.1759.
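Casting cause extraction as sequence labelling typically means tagging each token with B/I/O labels and then decoding contiguous spans. A generic BIO decoder (illustrative; the paper's exact tagging scheme may differ):

```python
def bio_to_spans(tags):
    """Decode a BIO tag sequence into (start, end) token spans, end inclusive."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":                          # a new span begins
            if start is not None:
                spans.append((start, i - 1))
            start = i
        elif tag == "O" and start is not None:  # current span ends
            spans.append((start, i - 1))
            start = None
    if start is not None:                       # final span runs to the end
        spans.append((start, len(tags) - 1))
    return spans

print(bio_to_spans(["O", "B", "I", "O", "B", "I", "I"]))  # [(1, 2), (4, 6)]
```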
arXiv Detail & Related papers (2024-04-02T16:32:49Z) - Emotion Rendering for Conversational Speech Synthesis with Heterogeneous
Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z) - ECQED: Emotion-Cause Quadruple Extraction in Dialogs [37.66816413841564]
We present Emotion-Cause Quadruple Extraction in Dialogs (ECQED), which requires detecting emotion-cause utterance pairs and emotion and cause types.
We show that introducing fine-grained emotion and cause features clearly improves dialog generation.
arXiv Detail & Related papers (2023-06-06T19:04:30Z) - Unsupervised Extractive Summarization of Emotion Triggers [56.50078267340738]
We develop new unsupervised learning models that can jointly detect emotions and summarize their triggers.
Our best approach, called Emotion-Aware Pagerank, incorporates emotion information from external sources combined with a language understanding module.
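One way to combine emotion information with a graph-based summarizer is personalized PageRank, where per-sentence emotion scores bias the teleport distribution. The sketch below is a generic illustration of that idea, not the paper's exact formulation (the adjacency matrix and emotion scores are made up):

```python
def emotion_pagerank(adj, emotion, d=0.85, iters=50):
    """Personalized PageRank: adj[i][j] is the edge weight from sentence i to
    sentence j; teleport probability is proportional to each emotion score."""
    n = len(adj)
    tele = [e / sum(emotion) for e in emotion]
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for j in range(n):
            flow = sum(
                rank[i] * adj[i][j] / sum(adj[i])
                for i in range(n) if sum(adj[i]) > 0
            )
            new.append((1 - d) * tele[j] + d * flow)
        rank = new
    return rank

# Three sentences in a fully connected similarity graph;
# sentence 2 carries the strongest emotion signal.
adj = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
emotion = [0.1, 0.2, 0.7]
ranks = emotion_pagerank(adj, emotion)
print(max(range(3), key=lambda i: ranks[i]))  # 2
```

With a symmetric graph, the ranking is driven entirely by the emotion-weighted teleport vector, so the most emotional sentence surfaces as the top trigger candidate.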
arXiv Detail & Related papers (2023-06-02T11:07:13Z) - Chat-Capsule: A Hierarchical Capsule for Dialog-level Emotion Analysis [70.98130990040228]
We propose a Context-based Hierarchical Attention Capsule (Chat-Capsule) model, which models both utterance-level and dialog-level emotions and their interrelations.
On a dialog dataset collected from customer support of an e-commerce platform, our model is also able to predict user satisfaction and emotion curve category.
arXiv Detail & Related papers (2022-03-23T08:04:30Z) - Multimodal Emotion-Cause Pair Extraction in Conversations [23.95461291718006]
We introduce a new task named Multimodal Emotion-Cause Pair Extraction in Conversations.
We aim to jointly extract emotions and their associated causes from conversations reflected in texts, audio and video.
Preliminary experimental results demonstrate the potential of multimodal information fusion for discovering both emotions and causes in conversations.
arXiv Detail & Related papers (2021-10-15T11:30:24Z) - EMOVIE: A Mandarin Emotion Speech Dataset with a Simple Emotional
Text-to-Speech Model [56.75775793011719]
We introduce and publicly release a Mandarin emotion speech dataset comprising 9,724 samples with audio files and human-labeled emotion annotations.
Unlike models that require additional reference audio as input, our model can predict emotion labels from the input text alone and generate more expressive speech conditioned on the emotion embedding.
In the experiment phase, we first validate the effectiveness of our dataset by an emotion classification task. Then we train our model on the proposed dataset and conduct a series of subjective evaluations.
arXiv Detail & Related papers (2021-06-17T08:34:21Z) - Recognizing Emotion Cause in Conversations [82.88647116730691]
Recognizing the cause behind emotions in text is a fundamental yet under-explored area of research in NLP.
We introduce the task of recognizing emotion cause in conversations with an accompanying dataset named RECCON.
arXiv Detail & Related papers (2020-12-22T03:51:35Z) - Modality-Transferable Emotion Embeddings for Low-Resource Multimodal
Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z) - End-to-end Emotion-Cause Pair Extraction via Learning to Link [18.741585103275334]
Emotion-cause pair extraction (ECPE) aims at jointly investigating emotions and their underlying causes in documents.
Existing approaches to ECPE generally adopt a two-stage method, i.e., (1) emotion and cause detection, and then (2) pairing the detected emotions and causes.
We propose a multi-task learning model that can extract emotions, causes and emotion-cause pairs simultaneously in an end-to-end manner.
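The two-stage pipeline this end-to-end model contrasts with has a simple second stage: enumerate every candidate (emotion clause, cause clause) pair and keep those a pair scorer rates above a threshold. A sketch of that pairing stage, with an illustrative distance-based scorer standing in for a learned one:

```python
def extract_pairs(emotion_idx, cause_idx, score, threshold=0.5):
    """Stage 2 of the classic two-stage ECPE pipeline: score all candidate
    (emotion clause, cause clause) pairs and keep the confident ones."""
    return [
        (e, c)
        for e in emotion_idx
        for c in cause_idx
        if score(e, c) >= threshold
    ]

# Illustrative scorer: cause clauses tend to sit near the emotion clause.
score = lambda e, c: 1.0 / (1 + abs(e - c))

print(extract_pairs([2, 5], [1, 4], score))  # [(2, 1), (5, 4)]
```

An end-to-end multi-task model instead shares an encoder across emotion detection, cause detection, and pair linking, so pairing errors are not locked in by mistakes made in an earlier, separately trained stage.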
arXiv Detail & Related papers (2020-02-25T07:49:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.