Fine-grained Emotion and Intent Learning in Movie Dialogues
- URL: http://arxiv.org/abs/2012.13624v1
- Date: Fri, 25 Dec 2020 20:29:56 GMT
- Title: Fine-grained Emotion and Intent Learning in Movie Dialogues
- Authors: Anuradha Welivita, Yubo Xie, Pearl Pu
- Abstract summary: We propose a novel large-scale emotional dialogue dataset, consisting of 1M dialogues retrieved from the OpenSubtitles corpus.
This work explains the complex pipeline used to preprocess movie subtitles and select good movie dialogues to annotate.
This scale of emotional dialogue classification has never been attempted before, both in terms of dataset size and fine-grained emotion and intent categories.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel large-scale emotional dialogue dataset, consisting of 1M dialogues retrieved from the OpenSubtitles corpus and annotated with 32 emotions and 9 empathetic response intents using a BERT-based fine-grained dialogue emotion classifier. This work explains the complex pipeline used to preprocess movie subtitles and select good movie dialogues to annotate. We also describe the semi-supervised learning process followed to train a fine-grained emotion classifier to annotate these dialogues. Despite the large set of labels, our dialogue emotion classifier achieved an accuracy of 65% and was used to annotate 1M emotional movie dialogues from OpenSubtitles. This scale of emotional dialogue classification has never been attempted before, both in terms of dataset size and fine-grained emotion and intent categories. Visualization techniques used to analyze the quality of the resultant dataset suggest that it conforms to the patterns of human social interaction.
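The semi-supervised annotation process described in the abstract can be sketched as a self-training loop: fit a classifier on a small labelled seed set, pseudo-label the unlabelled dialogues it is confident about, fold those in, and retrain. The toy bag-of-words Naive Bayes below is only a stand-in for the paper's BERT-based classifier, and all data, labels, and thresholds here are illustrative assumptions, not the authors' actual setup.

```python
# Sketch of a self-training loop for emotion annotation. The Naive Bayes
# model is a hypothetical stand-in for the BERT classifier in the paper;
# the seed examples and confidence threshold are illustrative only.
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """examples: list of (text, label). Returns a simple NB model."""
    label_counts = Counter(label for _, label in examples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in examples:
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return label_counts, word_counts, vocab

def predict_nb(model, text):
    """Returns (best_label, confidence) using Laplace-smoothed NB."""
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    log_probs = {}
    for label, count in label_counts.items():
        lp = math.log(count / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        log_probs[label] = lp
    best = max(log_probs, key=log_probs.get)
    # Normalize log-probabilities into a pseudo-confidence score.
    m = max(log_probs.values())
    exp = {l: math.exp(v - m) for l, v in log_probs.items()}
    return best, exp[best] / sum(exp.values())

def self_train(labelled, unlabelled, threshold=0.9, rounds=2):
    """Add confident pseudo-labels to the training set, then retrain."""
    pool = list(unlabelled)
    for _ in range(rounds):
        model = train_nb(labelled)
        keep = []
        for text in pool:
            label, conf = predict_nb(model, text)
            if conf >= threshold:
                labelled = labelled + [(text, label)]
            else:
                keep.append(text)  # defer low-confidence examples
        pool = keep
    return train_nb(labelled), labelled

seed = [
    ("i am so happy today", "joyful"),
    ("this is wonderful news", "joyful"),
    ("i am terrified of the dark", "afraid"),
    ("that noise was terrifying", "afraid"),
]
unlabelled = ["so happy and wonderful", "terrified by that noise"]
model, final_set = self_train(seed, unlabelled)
print(predict_nb(model, "happy wonderful news")[0])
```

At the paper's scale this loop would pseudo-label batches of OpenSubtitles dialogues rather than single sentences, and the 32-emotion/9-intent label set would replace the two toy classes above.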
Related papers
- Personality-affected Emotion Generation in Dialog Systems (arXiv, 2024-04-03)
  We propose a new task, Personality-affected Emotion Generation, to generate emotion based on the personality given to the dialog system.
  We analyze the challenges in this task, i.e., (1) heterogeneously integrating personality and emotional factors and (2) extracting multi-granularity emotional information in the dialog context.
  Results suggest that by adopting our method, the emotion generation performance is improved by 13% in macro-F1 and 5% in weighted-F1 over the BERT-base model.
- Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling (arXiv, 2023-12-19)
  Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
  To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
  Our model outperforms the baseline models in understanding and rendering emotions.
- How you feelin'? Learning Emotions and Mental States in Movie Scenes (arXiv, 2023-04-12)
  We formulate emotion understanding as predicting a diverse and multi-label set of emotions at the level of a movie scene.
  EmoTx is a multimodal Transformer-based architecture that ingests videos, multiple characters, and dialog utterances to make joint predictions.
- Think Twice: A Human-like Two-stage Conversational Agent for Emotional Response Generation (arXiv, 2023-01-12)
  We propose a two-stage conversational agent for the generation of emotional dialogue.
  First, a dialogue model trained without an emotion-annotated dialogue corpus generates a prototype response that fits the contextual semantics.
  Second, the first-stage prototype is modified by a controllable emotion refiner guided by the empathy hypothesis.
- A Benchmark for Understanding and Generating Dialogue between Characters in Stories (arXiv, 2022-09-18)
  We present the first study to explore whether machines can understand and generate dialogue in stories.
  We propose two new tasks: Masked Dialogue Generation and Dialogue Speaker Recognition.
  We show the difficulty of the proposed tasks by testing existing models with automatic and manual evaluation on DialStory.
- M3ED: Multi-modal Multi-scene Multi-label Emotional Dialogue Database (arXiv, 2022-05-09)
  We propose a Multi-modal Multi-scene Multi-label Emotional Dialogue dataset, M3ED.
  M3ED contains 990 dyadic emotional dialogues from 56 different TV series, with a total of 9,082 turns and 24,449 utterances.
  To the best of our knowledge, M3ED is the first multimodal emotional dialogue dataset in Chinese.
- Chat-Capsule: A Hierarchical Capsule for Dialog-level Emotion Analysis (arXiv, 2022-03-23)
  We propose a Context-based Hierarchical Attention Capsule (Chat-Capsule) model, which models both utterance-level and dialog-level emotions and their interrelations.
  On a dialog dataset collected from the customer support service of an e-commerce platform, our model is also able to predict user satisfaction and emotion curve category.
- Simulated Annealing for Emotional Dialogue Systems (arXiv, 2021-09-22)
  We consider the task of expressing a specific emotion in dialogue generation.
  Our proposed method shows a 12% improvement in emotion accuracy compared with the previous state-of-the-art method.
- Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes (arXiv, 2021-09-18)
  We argue that two issues must be tackled at the same time: (i) identifying which word in the other's utterance is the cause of his or her emotion, and (ii) reflecting those specific words in the response generation.
  Taking inspiration from social cognition, we leverage a generative estimator to infer emotion cause words from utterances with no word-level labels.
- EmoWOZ: A Large-Scale Corpus and Labelling Scheme for Emotion in Task-Oriented Dialogue Systems (arXiv, 2021-09-10)
  EmoWOZ is a large-scale, manually emotion-annotated corpus of task-oriented dialogues.
  It contains more than 11K dialogues with more than 83K emotion annotations of user utterances.
  We propose a novel emotion labelling scheme tailored to task-oriented dialogues.
- Generating Empathetic Responses with a Large Scale Dialog Dataset (arXiv, 2021-05-14)
  Existing models either directly incorporate pre-defined emotion information to guide the response generation, or use deterministic rules to decide the response emotion.
  We show how to build a multi-turn empathetic dialog model that performs well against its baselines over 6,000 human-evaluated instances.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.