Automatic Emotion Modelling in Written Stories
- URL: http://arxiv.org/abs/2212.11382v1
- Date: Wed, 21 Dec 2022 21:46:01 GMT
- Title: Automatic Emotion Modelling in Written Stories
- Authors: Lukas Christ, Shahin Amiriparian, Manuel Milling, Ilhan Aslan, Björn W. Schuller
- Abstract summary: We propose a set of novel Transformer-based methods for predicting emotional signals over the course of written stories.
We explore several strategies for fine-tuning a pretrained ELECTRA model and study the benefits of considering a sentence's context.
Our code and additional annotations are made available at https://github.com/lc0197/emotion_modelling_stories.
- Score: 4.484753247472559
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Telling stories is an integral part of human communication which can evoke
emotions and influence the affective states of the audience. Automatically
modelling emotional trajectories in stories has thus attracted considerable
scholarly interest. However, as most existing works have been limited to
unsupervised dictionary-based approaches, there is no labelled benchmark for
this task. We address this gap by introducing continuous valence and arousal
annotations for an existing dataset of children's stories annotated with
discrete emotion categories. We collect additional annotations for this data
and map the originally categorical labels to the valence and arousal space.
Leveraging recent advances in Natural Language Processing, we propose a set of
novel Transformer-based methods for predicting valence and arousal signals over
the course of written stories. We explore several strategies for fine-tuning a
pretrained ELECTRA model and study the benefits of considering a sentence's
context when inferring its emotionality. Moreover, we experiment with
additional LSTM and Transformer layers. The best configuration achieves a
Concordance Correlation Coefficient (CCC) of .7338 for valence and .6302 for
arousal on the test set, demonstrating the suitability of our proposed
approach. Our code and additional annotations are made available at
https://github.com/lc0197/emotion_modelling_stories.
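Below is a minimal sketch of the setup the abstract describes: an ELECTRA sentence encoder fine-tuned with a small regression head for valence and arousal, evaluated with the Concordance Correlation Coefficient (CCC). The checkpoint name, head, and example sentence are illustrative assumptions rather than the authors' exact configuration; their actual implementation is in the repository linked above.
```python
# Minimal sketch (not the authors' implementation): ELECTRA encoder with a
# small regression head predicting (valence, arousal) per sentence, plus the
# Concordance Correlation Coefficient (CCC) used for evaluation.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SentenceVARegressor(nn.Module):
    """ELECTRA sentence encoder + linear head -> (valence, arousal)."""
    def __init__(self, model_name: str = "google/electra-base-discriminator"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Linear(hidden, 2)  # [valence, arousal]

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # first ([CLS]-style) token
        return self.head(cls)               # shape: (batch, 2)

def ccc(pred: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    """Concordance Correlation Coefficient for one emotion dimension."""
    pred_mean, gold_mean = pred.mean(), gold.mean()
    pred_var, gold_var = pred.var(unbiased=False), gold.var(unbiased=False)
    cov = ((pred - pred_mean) * (gold - gold_mean)).mean()
    return 2 * cov / (pred_var + gold_var + (pred_mean - gold_mean) ** 2)

# Usage sketch: the head is untrained here, so the values are arbitrary
# without fine-tuning on the annotated stories.
tokenizer = AutoTokenizer.from_pretrained("google/electra-base-discriminator")
model = SentenceVARegressor()
batch = tokenizer(["The children laughed all the way home."],
                  return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    va = model(batch["input_ids"], batch["attention_mask"])  # (1, 2)
print("valence/arousal:", va.squeeze().tolist())
```
CCC rewards predictions that both correlate with and match the mean and scale of the gold signal, which is why it is commonly preferred over plain correlation for continuous emotion traces.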
Related papers
- Modeling Emotional Trajectories in Written Stories Utilizing Transformers and Weakly-Supervised Learning [47.02027575768659]
We introduce continuous valence and arousal labels for an existing dataset of children's stories originally annotated with discrete emotion categories.
For predicting the thus obtained emotionality signals, we fine-tune a DeBERTa model and improve upon this baseline via a weakly supervised learning approach.
A detailed analysis shows the extent to which the results vary depending on factors such as the author, the individual story, or the section within the story.
arXiv Detail & Related papers (2024-06-04T12:17:16Z)
- Two in One Go: Single-stage Emotion Recognition with Decoupled Subject-context Transformer [78.35816158511523]
We present a single-stage emotion recognition approach, employing a Decoupled Subject-Context Transformer (DSCT) for simultaneous subject localization and emotion classification.
We evaluate our single-stage framework on two widely used context-aware emotion recognition datasets, CAER-S and EMOTIC.
arXiv Detail & Related papers (2024-04-26T07:30:32Z)
- Improved Text Emotion Prediction Using Combined Valence and Arousal Ordinal Classification [37.823815777259036]
We introduce a method for categorizing emotions from text that accounts for both the similarities and the distinctions among emotions.
Our approach not only preserves high accuracy in emotion prediction but also significantly reduces the magnitude of errors in cases of misclassification.
arXiv Detail & Related papers (2024-04-02T10:06:30Z)
- Context Unlocks Emotions: Text-based Emotion Classification Dataset Auditing with Large Language Models [23.670143829183104]
The lack of contextual information in text data can make the annotation process of text-based emotion classification datasets challenging.
We propose a formal definition of textual context to motivate a prompting strategy to enhance such contextual information.
Our method improves alignment between inputs and their human-annotated labels from both an empirical and human-evaluated standpoint.
arXiv Detail & Related papers (2023-11-06T21:34:49Z)
- DeltaScore: Fine-Grained Story Evaluation with Perturbations [69.33536214124878]
We introduce DELTASCORE, a novel methodology that employs perturbation techniques for the evaluation of nuanced story aspects.
Our central proposition is that how well a story performs on a specific aspect (e.g., fluency) correlates with how susceptible it is to particular perturbations.
We measure the quality of an aspect by calculating the likelihood difference between pre- and post-perturbation states using pre-trained language models (see the sketch after this list).
arXiv Detail & Related papers (2023-03-15T23:45:54Z)
- REDAffectiveLM: Leveraging Affect Enriched Embedding and Transformer-based Neural Language Model for Readers' Emotion Detection [3.6678641723285446]
We propose a novel approach for Readers' Emotion Detection from short-text documents using a deep learning model called REDAffectiveLM.
We leverage context-specific and affect enriched representations by using a transformer-based pre-trained language model in tandem with affect enriched Bi-LSTM+Attention.
arXiv Detail & Related papers (2023-01-21T19:28:25Z)
- Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection [69.2370349274216]
Few-shot Intent Detection is challenging due to the scarcity of available annotated utterances.
Semantic components are distilled from utterances via multi-head self-attention.
Our method provides a comprehensive matching measure to enhance representations of both labeled and unlabeled instances.
arXiv Detail & Related papers (2020-10-06T05:16:38Z)
- Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)
- Topic Adaptation and Prototype Encoding for Few-Shot Visual Storytelling [81.33107307509718]
We propose a topic adaptive storyteller to model the ability of inter-topic generalization.
We also propose a prototype encoding structure to model the ability of intra-topic derivation.
Experimental results show that topic adaptation and prototype encoding structure mutually bring benefit to the few-shot model.
arXiv Detail & Related papers (2020-08-11T03:55:11Z)
- GoEmotions: A Dataset of Fine-Grained Emotions [16.05879383442812]
We introduce GoEmotions, the largest manually annotated dataset of 58k English Reddit comments, labeled for 27 emotion categories or Neutral.
Our BERT-based model achieves an average F1-score of .46 across our proposed taxonomy, leaving much room for improvement (see the sketch after this list).
arXiv Detail & Related papers (2020-05-01T18:00:02Z)
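For the DeltaScore entry above, here is a minimal, hedged sketch of the likelihood-difference idea it summarizes: score a story aspect by how much a pretrained language model's likelihood drops after a targeted perturbation. GPT-2 and the shuffled-word perturbation below are illustrative assumptions, not the authors' implementation.
```python
# Sketch of perturbation-based scoring: compare a language model's
# log-likelihood of a story before and after a targeted perturbation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def log_likelihood(text: str) -> float:
    """Total token log-likelihood of `text` under the language model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # out.loss is the mean negative log-likelihood over the predicted
    # tokens (every token except the first one).
    num_predicted = enc["input_ids"].shape[1] - 1
    return -out.loss.item() * num_predicted

def delta_score(original: str, perturbed: str) -> float:
    """Likelihood drop caused by the perturbation: the larger the drop,
    the more the story depends on the aspect the perturbation targets."""
    return log_likelihood(original) - log_likelihood(perturbed)

# Toy usage with a fluency-style perturbation (shuffled word order).
story = "The fox crept through the quiet garden and waited for dusk."
perturbed = "Garden the quiet crept fox the dusk for waited and through."
print(delta_score(story, perturbed))
```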
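For the GoEmotions entry above, a minimal sketch of the multi-label setup it describes: a BERT encoder with a sigmoid head over 27 emotion categories plus Neutral. The checkpoint, threshold, and untrained classification head are illustrative assumptions, not the authors' released model.
```python
# Sketch of GoEmotions-style multi-label emotion classification:
# a BERT encoder with a sigmoid head over 28 labels (27 emotions + Neutral).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NUM_LABELS = 28  # 27 emotion categories + Neutral
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased",
    num_labels=NUM_LABELS,
    problem_type="multi_label_classification",  # trains with BCE-with-logits
)
model.eval()

# The classifier head is freshly initialized here, so without fine-tuning
# on GoEmotions the predicted labels are arbitrary.
comments = ["This made my day, thank you so much!"]
batch = tokenizer(comments, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    logits = model(**batch).logits                     # shape: (batch, 28)
probs = torch.sigmoid(logits)
predicted = (probs > 0.5).nonzero(as_tuple=True)[1]    # label indices above threshold
print(predicted.tolist())
```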