Fearful Falcons and Angry Llamas: Emotion Category Annotations of Arguments by Humans and LLMs
- URL: http://arxiv.org/abs/2412.15993v2
- Date: Tue, 22 Apr 2025 10:20:16 GMT
- Title: Fearful Falcons and Angry Llamas: Emotion Category Annotations of Arguments by Humans and LLMs
- Authors: Lynn Greschner, Roman Klinger
- Abstract summary: We crowdsource subjective annotations of emotion categories in a German argument corpus and evaluate automatic labeling methods. We find that emotion categories enhance the prediction of emotionality in arguments. Across all prompt settings and models, automatic predictions show a high recall but low precision for predicting anger and fear.
- Score: 9.088303226909277
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Arguments evoke emotions, influencing the effect of the argument itself. Not only the emotional intensity but also the category influences the argument's effects, for instance, the willingness to adapt stances. While binary emotionality has been studied in arguments, there is no work on discrete emotion categories (e.g., "Anger") in such data. To fill this gap, we crowdsource subjective annotations of emotion categories in a German argument corpus and evaluate automatic LLM-based labeling methods. Specifically, we compare three prompting strategies (zero-shot, one-shot, chain-of-thought) on three large instruction-tuned language models (Falcon-7b-instruct, Llama-3.1-8B-instruct, GPT-4o-mini). We further vary the definition of the output space to be binary (is there emotionality in the argument?), closed-domain (which emotion from a given label set is in the argument?), or open-domain (which emotion is in the argument?). We find that emotion categories enhance the prediction of emotionality in arguments, emphasizing the need for discrete emotion annotations in arguments. Across all prompt settings and models, automatic predictions show a high recall but low precision for predicting anger and fear, indicating a strong bias toward negative emotions.
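The three output-space definitions described above can be illustrated as prompt templates. This is a minimal sketch of the zero-shot setting; the label set and prompt wording are illustrative assumptions, not the authors' exact prompts.

```python
# Illustrative label set; the paper uses a German argument corpus with its
# own (unstated here) category inventory.
LABELS = ["Anger", "Fear", "Joy", "Sadness", "Disgust", "Surprise", "Pride"]

def build_prompt(argument: str, output_space: str) -> str:
    """Build a zero-shot annotation prompt for one argument.

    output_space: "binary", "closed", or "open", mirroring the three
    output-space definitions compared in the paper.
    """
    if output_space == "binary":
        task = "Does this argument express any emotion? Answer 'yes' or 'no'."
    elif output_space == "closed":
        task = ("Which emotion from the following set does the argument "
                f"express? Choose one of: {', '.join(LABELS)}, or 'None'.")
    elif output_space == "open":
        task = ("Which emotion, if any, does the argument express? "
                "Answer with a single word.")
    else:
        raise ValueError(f"unknown output space: {output_space!r}")
    return f"Argument: {argument}\n\n{task}"

print(build_prompt("Raising taxes will ruin small businesses.", "closed"))
```

The one-shot and chain-of-thought settings would extend these templates with a labeled example or a "think step by step" instruction, respectively.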
Related papers
- Do LLMs "Feel"? Emotion Circuits Discovery and Control [54.57583855608979]
We study the internal mechanisms that give rise to emotional expression and control emotions in generated text. This is the first systematic study to uncover and validate emotion circuits in large language models.
arXiv Detail & Related papers (2025-10-13T12:24:24Z) - Emotionally Charged, Logically Blurred: AI-driven Emotional Framing Impairs Human Fallacy Detection [25.196971926947906]
We present the first computational study of how emotional framing interacts with fallacies and convincingness. We use large language models (LLMs) to systematically change emotional appeals in fallacious arguments. Our work has implications for AI-driven emotional manipulation in the context of fallacious argumentation.
arXiv Detail & Related papers (2025-10-09T14:57:37Z) - UDDETTS: Unifying Discrete and Dimensional Emotions for Controllable Emotional Text-to-Speech [61.989360995528905]
We propose UDDETTS, a universal framework unifying discrete and dimensional emotions for controllable emotional TTS. The model introduces the interpretable Arousal-Dominance-Valence (ADV) space for dimensional emotion description and supports emotion control driven by either discrete emotion labels or nonlinearly quantified ADV values. Experiments show that UDDETTS achieves linear emotion control along three interpretable dimensions and exhibits superior end-to-end emotional speech synthesis capabilities.
arXiv Detail & Related papers (2025-05-15T12:57:19Z) - Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z) - Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with features deemed salient in their prediction of emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not considered salient features by emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection.
arXiv Detail & Related papers (2023-11-16T06:20:13Z) - Where are We in Event-centric Emotion Analysis? Bridging Emotion Role Labeling and Appraisal-based Approaches [10.736626320566707]
The term emotion analysis in text subsumes various natural language processing tasks.
We argue that emotions and events are related in two ways.
We discuss how to incorporate psychological appraisal theories in NLP models to interpret events.
arXiv Detail & Related papers (2023-09-05T09:56:29Z) - Emotion Flip Reasoning in Multiparty Conversations [27.884015521888458]
Instigator-based Emotion Flip Reasoning (EFR) aims to identify the instigator behind a speaker's emotion flip within a conversation.
We present MELD-I, a dataset that includes ground-truth EFR instigator labels, which are in line with emotional psychology.
We propose a novel neural architecture called TGIF, which leverages Transformer encoders and stacked GRUs to capture the dialogue context.
arXiv Detail & Related papers (2023-06-24T13:22:02Z) - Speech Emotion Diarization: Which Emotion Appears When? [11.84193589275529]
We propose Speech Emotion Diarization (SED) to reflect the fine-grained nature of speech emotions.
Just as Speaker Diarization answers the question of "Who speaks when?", Speech Emotion Diarization answers the question of "Which emotion appears when?"
arXiv Detail & Related papers (2023-06-22T15:47:36Z) - Experiencer-Specific Emotion and Appraisal Prediction [13.324006587838523]
Emotion classification in NLP assigns emotions to texts, such as sentences or paragraphs.
We focus on the experiencers of events, and assign an emotion (if any holds) to each of them.
Our experiencer-aware models of emotions and appraisals outperform the experiencer-agnostic baselines.
arXiv Detail & Related papers (2022-10-21T16:04:27Z) - Speech Synthesis with Mixed Emotions [77.05097999561298]
We propose a novel formulation that measures the relative difference between the speech samples of different emotions.
We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework.
At run-time, we control the model to produce the desired emotion mixture by manually defining an emotion attribute vector.
arXiv Detail & Related papers (2022-08-11T15:45:58Z) - Emotion Intensity and its Control for Emotional Voice Conversion [77.05097999561298]
Emotional voice conversion (EVC) seeks to convert the emotional state of an utterance while preserving the linguistic content and speaker identity.
In this paper, we aim to explicitly characterize and control the intensity of emotion.
We propose to disentangle the speaker style from linguistic content and encode the speaker style into a style embedding in a continuous space that forms the prototype of emotion embedding.
arXiv Detail & Related papers (2022-01-10T02:11:25Z) - Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes [50.569762345799354]
We argue that two issues must be tackled at the same time: (i) identifying which word is the cause for the other's emotion from his or her utterance and (ii) reflecting those specific words in the response generation.
Taking inspiration from social cognition, we leverage a generative estimator to infer emotion cause words from utterances with no word-level label.
arXiv Detail & Related papers (2021-09-18T04:22:49Z) - A Circular-Structured Representation for Visual Emotion Distribution Learning [82.89776298753661]
We propose a well-grounded circular-structured representation to utilize the prior knowledge for visual emotion distribution learning.
To be specific, we first construct an Emotion Circle to unify any emotional state within it.
On the proposed Emotion Circle, each emotion distribution is represented with an emotion vector, which is defined with three attributes.
arXiv Detail & Related papers (2021-06-23T14:53:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.