Chat-Capsule: A Hierarchical Capsule for Dialog-level Emotion Analysis
- URL: http://arxiv.org/abs/2203.12254v1
- Date: Wed, 23 Mar 2022 08:04:30 GMT
- Title: Chat-Capsule: A Hierarchical Capsule for Dialog-level Emotion Analysis
- Authors: Yequan Wang, Xuying Meng, Yiyi Liu, Aixin Sun, Yao Wang, Yinhe Zheng,
Minlie Huang
- Abstract summary: We propose a Context-based Hierarchical Attention Capsule (Chat-Capsule) model, which models both utterance-level and dialog-level emotions and their interrelations.
On a dialog dataset collected from customer support of an e-commerce platform, our model is also able to predict user satisfaction and emotion curve category.
- Score: 70.98130990040228
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many studies on dialog emotion analysis focus on utterance-level emotion
only. These models are hence not optimized for dialog-level emotion detection,
i.e., predicting the emotion category of a dialog as a whole. More importantly,
these models cannot benefit from the context provided by the whole dialog. In
real-world applications, annotations to a dialog can be fine-grained, including
both utterance-level tags (e.g. speaker type, intent category, and emotion
category), and dialog-level tags (e.g. user satisfaction, and emotion curve
category). In this paper, we propose a Context-based Hierarchical Attention
Capsule (Chat-Capsule) model, which models both utterance-level and
dialog-level emotions and their interrelations. On a dialog dataset collected
from customer support of an e-commerce platform, our model is also able to
predict user satisfaction and emotion curve category. The emotion curve refers to
how emotions change over the course of a conversation. Experiments
show that the proposed Chat-Capsule outperforms state-of-the-art baselines on
both the benchmark dataset and the proprietary dataset. Source code will be released
upon acceptance.
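The abstract does not disclose Chat-Capsule's concrete layers or capsule-routing details, but the hierarchy it describes (utterance-level representations feeding dialog-level predictions, with multi-task heads for utterance emotion, dialog emotion, user satisfaction, and emotion curve category) can be illustrated with a minimal sketch. The PyTorch module below is a hypothetical stand-in, not the authors' implementation: every class name, dimension, and head size is an assumption, and plain attention pooling is used where the paper uses capsule routing.

```python
import torch
import torch.nn as nn


class HierarchicalDialogModel(nn.Module):
    """Illustrative utterance->dialog hierarchy with attention pooling.

    NOTE: a sketch only. The abstract gives no layer sizes or capsule
    details, so every module and dimension here is a hypothetical
    stand-in for the real Chat-Capsule architecture.
    """

    def __init__(self, vocab_size, d_model=256,
                 n_utt_emotions=7, n_dlg_emotions=7,
                 n_curve_classes=4, n_satisfaction=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Utterance-level encoder: contextualizes tokens within one utterance.
        self.utt_encoder = nn.GRU(d_model, d_model,
                                  batch_first=True, bidirectional=True)
        # Dialog-level encoder: contextualizes utterance vectors across turns.
        self.dlg_encoder = nn.GRU(2 * d_model, d_model,
                                  batch_first=True, bidirectional=True)
        # Additive attention pooling over utterances yields the dialog vector
        # (substituting for capsule routing, which the abstract names but
        # does not specify).
        self.attn = nn.Linear(2 * d_model, 1)
        # Multi-task heads mirroring the annotations named in the abstract.
        self.utt_emotion_head = nn.Linear(2 * d_model, n_utt_emotions)
        self.dlg_emotion_head = nn.Linear(2 * d_model, n_dlg_emotions)
        self.curve_head = nn.Linear(2 * d_model, n_curve_classes)
        self.satisfaction_head = nn.Linear(2 * d_model, n_satisfaction)

    def forward(self, token_ids):
        # token_ids: (batch, n_utterances, n_tokens), padding ignored here.
        b, u, t = token_ids.shape
        x = self.embed(token_ids.reshape(b * u, t))      # (b*u, t, d)
        _, h = self.utt_encoder(x)                       # h: (2, b*u, d)
        utt_vecs = torch.cat([h[0], h[1]], dim=-1)       # (b*u, 2d)
        utt_vecs = utt_vecs.reshape(b, u, -1)            # (b, u, 2d)
        ctx, _ = self.dlg_encoder(utt_vecs)              # (b, u, 2d)
        weights = torch.softmax(self.attn(ctx), dim=1)   # (b, u, 1)
        dlg_vec = (weights * ctx).sum(dim=1)             # (b, 2d)
        return {
            "utterance_emotion": self.utt_emotion_head(ctx),
            "dialog_emotion": self.dlg_emotion_head(dlg_vec),
            "emotion_curve": self.curve_head(dlg_vec),
            "satisfaction": self.satisfaction_head(dlg_vec),
        }
```

In a setup like this, the utterance-level and dialog-level heads would typically be trained jointly, e.g. by summing cross-entropy losses over all four outputs, which is one plausible way the interrelation between the two levels described in the abstract could be exploited.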
Related papers
- Towards Empathetic Conversational Recommender Systems [77.53167131692]
We propose an empathetic conversational recommender (ECR) framework.
ECR contains two main modules: emotion-aware item recommendation and emotion-aligned response generation.
Our experiments on the ReDial dataset validate the efficacy of our framework in enhancing recommendation accuracy and improving user satisfaction.
arXiv Detail & Related papers (2024-08-30T15:43:07Z) - Emotion Rendering for Conversational Speech Synthesis with Heterogeneous
Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z) - EmoTwiCS: A Corpus for Modelling Emotion Trajectories in Dutch Customer
Service Dialogues on Twitter [9.2878798098526]
This paper introduces EmoTwiCS, a corpus of 9,489 Dutch customer service dialogues on Twitter that are annotated for emotion trajectories.
The term 'emotion trajectory' refers not only to the fine-grained emotions experienced by customers, but also to the event happening prior to the conversation and the responses made by the human operator.
arXiv Detail & Related papers (2023-10-10T11:31:11Z) - Multi-turn Dialogue Comprehension from a Topic-aware Perspective [70.37126956655985]
This paper proposes to model multi-turn dialogues from a topic-aware perspective.
We use a dialogue segmentation algorithm to split a dialogue passage into topic-concentrated fragments in an unsupervised way (a similarity-based sketch of this idea appears after this list).
We also present a novel model, Topic-Aware Dual-Attention Matching (TADAM) Network, which takes topic segments as processing elements.
arXiv Detail & Related papers (2023-09-18T11:03:55Z) - Dynamic Causal Disentanglement Model for Dialogue Emotion Detection [77.96255121683011]
We propose a Dynamic Causal Disentanglement Model based on hidden variable separation.
This model effectively decomposes the content of dialogues and investigates the temporal accumulation of emotions.
Specifically, we propose a dynamic temporal disentanglement model to infer the propagation of utterances and hidden variables.
arXiv Detail & Related papers (2023-09-13T12:58:09Z) - Affective Visual Dialog: A Large-Scale Benchmark for Emotional Reasoning Based on Visually Grounded Conversations [31.707018753687098]
We introduce Affective Visual Dialog as a testbed for research on understanding the formation of emotions in visually grounded conversations.
The task involves three skills: Dialog-based Question Answering, Dialog-based Emotion Prediction, and Affective Emotion Explanation Generation.
Our key contribution is the collection of a large-scale dataset, dubbed AffectVisDial, consisting of 50K 10-turn visually grounded dialogs.
arXiv Detail & Related papers (2023-08-30T22:50:32Z) - Think Twice: A Human-like Two-stage Conversational Agent for Emotional Response Generation [16.659457455269127]
We propose a two-stage conversational agent for the generation of emotional dialogue.
First, a dialogue model trained without an emotion-annotated dialogue corpus generates a prototype response that fits the contextual semantics.
Second, the first-stage prototype is modified by a controllable emotion refiner guided by the empathy hypothesis.
arXiv Detail & Related papers (2023-01-12T10:03:56Z) - Generating Empathetic Responses with a Large Scale Dialog Dataset [0.76146285961466]
Existing models either directly incorporate pre-defined emotion information to guide the response generation, or use deterministic rules to decide the response emotion.
We show how to build a multi-turn empathetic dialog model that performs well compared to its baselines over 6,000 human evaluated instances.
arXiv Detail & Related papers (2021-05-14T13:45:40Z) - Modality-Transferable Emotion Embeddings for Low-Resource Multimodal
Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)
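The topic-aware comprehension entry above (TADAM) mentions an unsupervised step that splits a dialogue into topic-concentrated fragments. Its summary gives no algorithmic details, so the sketch below only illustrates one common way such segmentation is done: cut wherever the cosine similarity between adjacent utterance embeddings drops below a threshold. The function name, threshold value, and embedding inputs are all hypothetical, not TADAM's actual procedure.

```python
import numpy as np


def segment_by_topic(utterance_embeddings, threshold=0.55):
    """Split a dialog into topic-concentrated fragments.

    Hypothetical sketch: a boundary is placed wherever the cosine
    similarity between adjacent utterance embeddings drops below
    `threshold`. Returns a list of fragments, each a list of
    utterance indices.
    """
    segments, current = [], [0]
    for i in range(1, len(utterance_embeddings)):
        a = utterance_embeddings[i - 1]
        b = utterance_embeddings[i]
        cos = float(np.dot(a, b) /
                    (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
        if cos < threshold:   # low similarity = likely topic shift
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    return segments
```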
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.