COSMIC: COmmonSense knowledge for eMotion Identification in
Conversations
- URL: http://arxiv.org/abs/2010.02795v1
- Date: Tue, 6 Oct 2020 15:09:38 GMT
- Title: COSMIC: COmmonSense knowledge for eMotion Identification in
Conversations
- Authors: Deepanway Ghosal, Navonil Majumder, Alexander Gelbukh, Rada Mihalcea,
Soujanya Poria
- Abstract summary: We propose COSMIC, a new framework that incorporates different elements of commonsense such as mental states, events, and causal relations.
We show that COSMIC achieves new state-of-the-art results for emotion recognition on four different benchmark conversational datasets.
- Score: 95.71018134363976
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this paper, we address the task of utterance level emotion recognition in
conversations using commonsense knowledge. We propose COSMIC, a new framework
that incorporates different elements of commonsense such as mental states,
events, and causal relations, and builds upon them to learn interactions between
interlocutors participating in a conversation. Current state-of-the-art methods
often encounter difficulties in context propagation, emotion shift detection,
and differentiating between related emotion classes. By learning distinct
commonsense representations, COSMIC addresses these challenges and achieves new
state-of-the-art results for emotion recognition on four different benchmark
conversational datasets. Our code is available at
https://github.com/declare-lab/conv-emotion.
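As a rough illustration of the framework described in the abstract, below is a minimal sketch of commonsense-conditioned speaker-state tracking, assuming COMET-style commonsense vectors are precomputed per utterance; the dimensions, fusion scheme, and state layout are illustrative assumptions, not the paper's exact architecture:
```python
import torch
import torch.nn as nn

class CommonsenseERC(nn.Module):
    """Minimal COSMIC-style sketch: GRU cells track the conversation context
    and per-speaker internal/external states, with precomputed commonsense
    vectors (e.g., from COMET) fused into every state update. Dimensions,
    fusion, and state layout are illustrative assumptions."""

    def __init__(self, d_utt=100, d_cs=768, d_state=150, n_emotions=7):
        super().__init__()
        self.context_cell = nn.GRUCell(d_utt, d_state)
        self.internal_cell = nn.GRUCell(d_utt + d_cs + d_state, d_state)
        self.external_cell = nn.GRUCell(d_utt + d_cs + d_state, d_state)
        self.classifier = nn.Linear(2 * d_state, n_emotions)

    def forward(self, utt_feats, cs_feats, speakers):
        """utt_feats: (T, d_utt), cs_feats: (T, d_cs) for one conversation;
        speakers: length-T list of speaker ids."""
        d = self.context_cell.hidden_size
        context = torch.zeros(1, d)
        internal = {s: torch.zeros(1, d) for s in set(speakers)}
        external = {s: torch.zeros(1, d) for s in set(speakers)}
        logits = []
        for t in range(len(speakers)):
            s = speakers[t]
            u, c = utt_feats[t:t + 1], cs_feats[t:t + 1]
            context = self.context_cell(u, context)
            x = torch.cat([u, c, context], dim=-1)
            # the current speaker updates their internal state; everyone
            # else updates their external (listener) state from the same input
            internal[s] = self.internal_cell(x, internal[s])
            for other in external:
                if other != s:
                    external[other] = self.external_cell(x, external[other])
            logits.append(self.classifier(torch.cat([internal[s], external[s]], dim=-1)))
        return torch.cat(logits)  # (T, n_emotions)

model = CommonsenseERC()
out = model(torch.randn(4, 100), torch.randn(4, 768), ["A", "B", "A", "B"])
```
The paper's model distinguishes several commonsense relation types (such as a speaker's intent, effect, and reaction); the single fused vector above is a simplification.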
Related papers
- Emotion and Intent Joint Understanding in Multimodal Conversation: A Benchmarking Dataset [74.74686464187474]
Emotion and Intent Joint Understanding in Multimodal Conversation (MC-EIU) aims to decode the semantic information manifested in a multimodal conversational history.
MC-EIU is an enabling technology for many human-computer interfaces.
We propose an MC-EIU dataset, which features 7 emotion categories, 9 intent categories, 3 modalities (textual, acoustic, and visual), and two languages (English and Mandarin).
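As a concrete picture of how one example in such a dataset might be laid out, here is a hedged sketch; the field names are hypothetical, and only the category counts, the three modalities, and the two languages come from the summary above:
```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class MCEIUExample:
    """Hypothetical record layout for an MC-EIU-style example. Field names
    are illustrative; only the 7/9 category counts, the three modalities,
    and the two languages are taken from the paper summary."""
    dialogue_id: str
    turn: int
    speaker: str
    text: str                      # textual modality
    audio_path: str                # acoustic modality (e.g., a .wav clip)
    video_path: str                # visual modality (e.g., a face-cropped clip)
    language: Literal["en", "zh"]  # English or Mandarin
    emotion: str                   # one of the 7 emotion categories
    intent: str                    # one of the 9 intent categories
```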
arXiv Detail & Related papers (2024-07-03T01:56:00Z)
- Two in One Go: Single-stage Emotion Recognition with Decoupled Subject-context Transformer [78.35816158511523]
We present a single-stage emotion recognition approach, employing a Decoupled Subject-Context Transformer (DSCT) for simultaneous subject localization and emotion classification.
We evaluate our single-stage framework on two widely used context-aware emotion recognition datasets, CAER-S and EMOTIC.
arXiv Detail & Related papers (2024-04-26T07:30:32Z)
- Dynamic Causal Disentanglement Model for Dialogue Emotion Detection [77.96255121683011]
We propose a Dynamic Causal Disentanglement Model based on hidden variable separation.
This model effectively decomposes the content of dialogues and investigates the temporal accumulation of emotions.
Specifically, we propose a dynamic temporal disentanglement model to infer the propagation of utterances and hidden variables.
arXiv Detail & Related papers (2023-09-13T12:58:09Z)
- Mimicking the Thinking Process for Emotion Recognition in Conversation with Prompts and Paraphrasing [26.043447749659478]
We propose a novel framework which mimics the thinking process when modeling complex factors.
We first comprehend the conversational context with a history-oriented prompt to selectively gather information from predecessors of the target utterance.
We then model the speaker's background with an experience-oriented prompt to retrieve similar utterances from all conversations.
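A minimal sketch of how such a history-oriented prompt might be assembled is shown below; the template wording and function signature are assumptions for illustration, not the paper's exact prompt:
```python
def history_oriented_prompt(utterances, speakers, target_idx, max_turns=5):
    """Build a prompt that asks for the target utterance's emotion in light
    of its predecessors. Wording is illustrative, not the paper's prompt."""
    start = max(0, target_idx - max_turns)
    history = "\n".join(
        f"{speakers[i]}: {utterances[i]}" for i in range(start, target_idx)
    )
    target = f"{speakers[target_idx]}: {utterances[target_idx]}"
    return (
        f"Conversation so far:\n{history}\n"
        f"Given this context, what emotion does \"{target}\" express?"
    )

utts = ["Hi, how was the interview?", "They rejected me.", "Oh no, I'm so sorry."]
print(history_oriented_prompt(utts, ["A", "B", "A"], target_idx=2))
```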
arXiv Detail & Related papers (2023-06-11T06:36:19Z)
- Deep learning of segment-level feature representation for speech emotion recognition in conversations [9.432208348863336]
We propose a conversational speech emotion recognition method that captures attentive contextual dependencies and speaker-sensitive interactions.
First, we use a pretrained VGGish model to extract segment-level audio representations from individual utterances.
Second, an attentive bi-directional gated recurrent unit (GRU) network models context-sensitive information and jointly explores intra- and inter-speaker dependencies.
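A minimal sketch of this pipeline follows, assuming 128-dimensional VGGish segment embeddings are precomputed (random tensors stand in for them here): additive attention pools the segments of each utterance, then a BiGRU models context across utterances. The speaker-dependency modeling is omitted and all dimensions are assumptions:
```python
import torch
import torch.nn as nn

class AttentiveBiGRU(nn.Module):
    """Sketch of the summary's pipeline: per-utterance segment features
    (assumed to be precomputed 128-d VGGish embeddings) are pooled with
    additive attention, then a BiGRU models context across utterances.
    Speaker-dependency modeling is omitted; all dimensions are assumptions."""

    def __init__(self, d_seg=128, d_hidden=128, n_emotions=4):
        super().__init__()
        self.attn = nn.Linear(d_seg, 1)  # per-segment saliency score
        self.bigru = nn.GRU(d_seg, d_hidden, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * d_hidden, n_emotions)

    def forward(self, segs):
        """segs: (n_utts, n_segments, d_seg) segment features for one dialogue."""
        weights = torch.softmax(self.attn(segs), dim=1)  # attention over segments
        utt_vecs = (weights * segs).sum(dim=1)           # (n_utts, d_seg)
        ctx, _ = self.bigru(utt_vecs.unsqueeze(0))       # context across utterances
        return self.classifier(ctx.squeeze(0))           # (n_utts, n_emotions)

model = AttentiveBiGRU()
logits = model(torch.randn(6, 10, 128))  # 6 utterances, 10 segments each
```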
arXiv Detail & Related papers (2023-02-05T16:15:46Z)
- Emotion Recognition from Multiple Modalities: Fundamentals and Methodologies [106.62835060095532]
We discuss several key aspects of multi-modal emotion recognition (MER).
We begin with a brief introduction on widely used emotion representation models and affective modalities.
We then summarize existing emotion annotation strategies and corresponding computational tasks.
Finally, we outline several real-world applications and discuss some future directions.
arXiv Detail & Related papers (2021-08-18T21:55:20Z)
- Topic-Driven and Knowledge-Aware Transformer for Dialogue Emotion Detection [24.67719513300731]
We propose a Topic-Driven Knowledge-Aware Transformer to handle the challenges above.
We first design a topic-augmented language model (LM) with an additional layer specialized for topic detection.
A transformer-based encoder-decoder architecture then fuses the topical and commonsense information and predicts the emotion label sequence.
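As a rough sketch of the fusion step, the following assumes per-utterance topic and commonsense vectors are already available, and simplifies the encoder-decoder to an encoder with per-utterance classification; the layout and all dimensions are assumptions, not the paper's design:
```python
import torch
import torch.nn as nn

class TopicCommonsenseFusion(nn.Module):
    """Sketch: concatenate utterance, topic, and commonsense vectors, encode
    the sequence with a transformer, and classify each utterance's emotion.
    A simplification of the paper's encoder-decoder; dims are assumptions."""

    def __init__(self, d_utt=768, d_topic=100, d_cs=768, d_model=256, n_emotions=7):
        super().__init__()
        self.proj = nn.Linear(d_utt + d_topic + d_cs, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(d_model, n_emotions)

    def forward(self, utt, topic, cs):
        """All inputs: (batch, n_utts, d_*), aligned per utterance."""
        x = self.proj(torch.cat([utt, topic, cs], dim=-1))
        return self.classifier(self.encoder(x))  # per-utterance emotion logits

model = TopicCommonsenseFusion()
logits = model(torch.randn(1, 5, 768), torch.randn(1, 5, 100), torch.randn(1, 5, 768))
```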
arXiv Detail & Related papers (2021-06-02T10:57:44Z)
- AdCOFE: Advanced Contextual Feature Extraction in Conversations for emotion classification [0.29360071145551075]
The proposed Advanced Contextual Feature Extraction (AdCOFE) model addresses these issues.
Experiments on emotion recognition in conversations datasets show that AdCOFE is beneficial for capturing emotions in conversations.
arXiv Detail & Related papers (2021-04-09T17:58:19Z)
- Discovering Emotion and Reasoning its Flip in Multi-Party Conversations using Masked Memory Network and Transformer [16.224961520924115]
We introduce a novel task, Emotion Flip Reasoning (EFR).
EFR aims to identify the past utterances that caused a speaker's emotional state to flip at a certain point in the conversation.
We propose a masked memory network to address the former and a Transformer-based network for the latter task.
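As a small illustration of the task setup, the sketch below merely locates where a speaker's emotion flips; EFR itself then reasons backward over the preceding utterances to find the triggers. The input format is an assumption for illustration:
```python
def find_emotion_flips(utterances):
    """Locate indices where a speaker's emotion differs from their own
    previous utterance. Input: a list of (speaker, text, emotion) triples;
    this format is illustrative, not the paper's data schema."""
    last = {}
    flips = []
    for i, (speaker, _, emotion) in enumerate(utterances):
        if speaker in last and last[speaker] != emotion:
            flips.append(i)  # candidate flip; its triggers lie in utterances[:i]
        last[speaker] = emotion
    return flips

conv = [("A", "The demo went great!", "joy"),
        ("B", "Actually, the client cancelled.", "neutral"),
        ("A", "What? After all that work?", "anger")]
print(find_emotion_flips(conv))  # [2]: A flips from joy to anger
```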
arXiv Detail & Related papers (2021-03-23T07:42:09Z)
- Knowledge Bridging for Empathetic Dialogue Generation [52.39868458154947]
Without external knowledge, empathetic dialogue systems find it difficult to perceive implicit emotions and to learn emotional interactions from limited dialogue history.
We propose to leverage external knowledge, including commonsense knowledge and emotional lexical knowledge, to explicitly understand and express emotions in empathetic dialogue generation.
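A tiny sketch of the emotional-lexical-knowledge idea: score dialogue tokens against a valence/arousal lexicon so emotional words can be surfaced explicitly for the generator. The three-entry lexicon and its scores are stand-ins; a real system would load a full resource such as NRC-VAD:
```python
# Stand-in lexicon mapping word -> (valence, arousal) on a 0-1 scale;
# a real system would load a full resource such as NRC-VAD.
VAD = {"sorry": (0.2, 0.5), "great": (0.9, 0.6), "rejected": (0.1, 0.7)}

def emotional_words(history):
    """Return (token, valence, arousal) for every lexicon hit in the history."""
    hits = []
    for utt in history:
        for tok in utt.lower().split():
            tok = tok.strip(".,!?'\"")
            if tok in VAD:
                hits.append((tok, *VAD[tok]))
    return hits

print(emotional_words(["They rejected me.", "I'm so sorry."]))
# [('rejected', 0.1, 0.7), ('sorry', 0.2, 0.5)]
```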
arXiv Detail & Related papers (2020-09-21T09:21:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.