The ABBE Corpus: Animate Beings Being Emotional
- URL: http://arxiv.org/abs/2201.10618v1
- Date: Tue, 25 Jan 2022 20:35:52 GMT
- Title: The ABBE Corpus: Animate Beings Being Emotional
- Authors: Samira Zad, Joshuan Jimenez, Mark A. Finlayson
- Abstract summary: We provide the ABBE corpus -- Animate Beings Being Emotional -- a new double-annotated corpus of texts.
The corpus contains 30 chapters, comprising 134,513 words, drawn from the Corpus of English Novels.
It contains 2,010 unique emotion expressions attributable to 2,227 animate beings.
- Score: 14.50261153230204
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Emotion detection is an established NLP task of demonstrated utility for text
understanding. However, basic emotion detection leaves out key information,
namely, who is experiencing the emotion in question. For example, it may be the
author, the narrator, or a character; or the emotion may correspond to
something the audience is supposed to feel, or even be unattributable to a
specific being, e.g., when emotions are being discussed per se. We provide the
ABBE corpus -- Animate Beings Being Emotional -- a new double-annotated corpus
of texts that captures this key information for one class of emotion
experiencer, namely, animate beings in the world described by the text. Such a
corpus is useful for developing systems that seek to model or understand this
specific type of expressed emotion. Our corpus contains 30 chapters, comprising
134,513 words, drawn from the Corpus of English Novels, and contains 2,010
unique emotion expressions attributable to 2,227 animate beings. The emotion
expressions are categorized according to Plutchik's 8-category emotion model,
and the overall inter-annotator agreement for the annotations was 0.83 Cohen's
Kappa, indicating excellent agreement. We describe in detail our annotation
scheme and procedure, and also release the corpus for use by other researchers.
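For readers who want to build on annotations of this kind, the sketch below shows one way experiencer-attributed emotion expressions might be represented with Plutchik's eight categories, and how inter-annotator agreement could be checked with Cohen's Kappa via scikit-learn. The record layout, field names, and example characters are illustrative assumptions, not the corpus's released format.

```python
# Minimal sketch: representing experiencer-attributed emotion expressions and
# computing inter-annotator agreement with Cohen's Kappa.
# NOTE: the record layout, field names, and example characters below are
# illustrative assumptions, not the ABBE corpus's actual release format.
from dataclasses import dataclass
from enum import Enum

from sklearn.metrics import cohen_kappa_score  # unweighted Cohen's Kappa


class PlutchikEmotion(str, Enum):
    """Plutchik's eight basic emotion categories."""
    JOY = "joy"
    TRUST = "trust"
    FEAR = "fear"
    SURPRISE = "surprise"
    SADNESS = "sadness"
    DISGUST = "disgust"
    ANGER = "anger"
    ANTICIPATION = "anticipation"


@dataclass
class EmotionExpression:
    """One emotion expression attributed to an animate being in a chapter."""
    chapter_id: str            # which chapter the span comes from
    span: tuple                # (start, end) character offsets of the expression
    experiencer: str           # the animate being experiencing the emotion
    emotion: PlutchikEmotion   # category assigned by the annotator


def label_agreement(annotator_a, annotator_b):
    """Cohen's Kappa over the emotion labels of span-aligned annotations."""
    labels_a = [e.emotion.value for e in annotator_a]
    labels_b = [e.emotion.value for e in annotator_b]
    return cohen_kappa_score(labels_a, labels_b)


# Toy usage: two annotators agree on two of three aligned expressions.
a = [EmotionExpression("ch01", (120, 135), "Elizabeth", PlutchikEmotion.JOY),
     EmotionExpression("ch01", (410, 422), "Darcy", PlutchikEmotion.FEAR),
     EmotionExpression("ch02", (88, 101), "Jane", PlutchikEmotion.ANGER)]
b = [EmotionExpression("ch01", (120, 135), "Elizabeth", PlutchikEmotion.JOY),
     EmotionExpression("ch01", (410, 422), "Darcy", PlutchikEmotion.SADNESS),
     EmotionExpression("ch02", (88, 101), "Jane", PlutchikEmotion.ANGER)]
print(f"Cohen's Kappa: {label_agreement(a, b):.2f}")  # ~0.57 on this toy example
```

The sketch assumes the two annotators' expressions are already span-aligned; in practice, agreement on span boundaries would have to be assessed separately from agreement on the emotion labels.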
Related papers
- Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z)
- Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with features deemed salient in their prediction of emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not treated as salient features by emotion prediction models; instead, there is an intricate interplay between the various features and the task of emotion detection.
arXiv Detail & Related papers (2023-11-16T06:20:13Z)
- Where are We in Event-centric Emotion Analysis? Bridging Emotion Role Labeling and Appraisal-based Approaches [10.736626320566707]
The term emotion analysis in text subsumes various natural language processing tasks.
We argue that emotions and events are related in two ways.
We discuss how to incorporate psychological appraisal theories in NLP models to interpret events.
arXiv Detail & Related papers (2023-09-05T09:56:29Z)
- Experiencer-Specific Emotion and Appraisal Prediction [13.324006587838523]
Emotion classification in NLP assigns emotions to texts, such as sentences or paragraphs.
We focus on the experiencers of events, and assign an emotion (if any holds) to each of them.
Our experiencer-aware models of emotions and appraisals outperform the experiencer-agnostic baselines.
arXiv Detail & Related papers (2022-10-21T16:04:27Z)
- Speech Synthesis with Mixed Emotions [77.05097999561298]
We propose a novel formulation that measures the relative difference between the speech samples of different emotions.
We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework.
At run-time, we control the model to produce the desired emotion mixture by manually defining an emotion attribute vector.
arXiv Detail & Related papers (2022-08-11T15:45:58Z)
- x-enVENT: A Corpus of Event Descriptions with Experiencer-specific Emotion and Appraisal Annotations [13.324006587838523]
We argue that a classification setup for emotion analysis should be performed in an integrated manner, including the different semantic roles that participate in an emotion episode.
Based on appraisal theories in psychology, we compile an English corpus of written event descriptions.
The descriptions depict emotion-eliciting circumstances, and they contain mentions of people who responded emotionally.
arXiv Detail & Related papers (2022-03-21T12:02:06Z)
- Emotion Intensity and its Control for Emotional Voice Conversion [77.05097999561298]
Emotional voice conversion (EVC) seeks to convert the emotional state of an utterance while preserving the linguistic content and speaker identity.
In this paper, we aim to explicitly characterize and control the intensity of emotion.
We propose to disentangle the speaker style from linguistic content and encode the speaker style into a style embedding in a continuous space that forms the prototype of emotion embedding.
arXiv Detail & Related papers (2022-01-10T02:11:25Z)
- Emotion Recognition under Consideration of the Emotion Component Process Model [9.595357496779394]
We use the emotion component process model (CPM) by Scherer (2005) to explain emotion communication.
The CPM states that emotions are a coordinated process of various subcomponents in reaction to an event, namely the subjective feeling, the cognitive appraisal, the expression, a physiological bodily reaction, and a motivational action tendency.
We find that emotions on Twitter are predominantly expressed by event descriptions or subjective reports of the feeling, while in literature, authors prefer to describe what characters do, and leave the interpretation to the reader.
arXiv Detail & Related papers (2021-07-27T15:53:25Z)
- A Circular-Structured Representation for Visual Emotion Distribution Learning [82.89776298753661]
We propose a well-grounded circular-structured representation to utilize the prior knowledge for visual emotion distribution learning.
To be specific, we first construct an Emotion Circle to unify any emotional state within it.
On the proposed Emotion Circle, each emotion distribution is represented with an emotion vector, which is defined with three attributes.
arXiv Detail & Related papers (2021-06-23T14:53:27Z)
- PO-EMO: Conceptualization, Annotation, and Modeling of Aesthetic Emotions in German and English Poetry [26.172030802168752]
We consider emotions in poetry as they are elicited in the reader, rather than what is expressed in the text or intended by the author.
We conceptualize a set of aesthetic emotions that are predictive of aesthetic appreciation in the reader, and allow the annotation of multiple labels per line to capture mixed emotions within their context.
arXiv Detail & Related papers (2020-03-17T13:54:48Z)
- Annotation of Emotion Carriers in Personal Narratives [69.07034604580214]
We are interested in the problem of understanding personal narratives (PN) - spoken or written - recollections of facts, events, and thoughts.
In PN, emotion carriers are the speech or text segments that best explain the emotional state of the user.
This work proposes and evaluates an annotation model for identifying emotion carriers in spoken personal narratives.
arXiv Detail & Related papers (2020-02-27T15:42:39Z)