Do Emotions Really Affect Argument Convincingness? A Dynamic Approach with LLM-based Manipulation Checks
- URL: http://arxiv.org/abs/2503.00024v2
- Date: Mon, 21 Jul 2025 11:43:19 GMT
- Title: Do Emotions Really Affect Argument Convincingness? A Dynamic Approach with LLM-based Manipulation Checks
- Authors: Yanran Chen, Steffen Eger
- Abstract summary: We introduce a dynamic framework inspired by manipulation checks commonly used in psychology and social science. This framework examines the extent to which perceived emotional intensity influences perceived convincingness. We find that in over half of cases, human judgments of convincingness remain unchanged despite variations in perceived emotional intensity.
- Score: 22.464222858889084
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emotions have been shown to play a role in argument convincingness, yet this aspect is underexplored in the natural language processing (NLP) community. Unlike prior studies that use static analyses, focus on a single text domain or language, or treat emotion as just one of many factors, we introduce a dynamic framework inspired by manipulation checks commonly used in psychology and social science; leveraging LLM-based manipulation checks, this framework examines the extent to which perceived emotional intensity influences perceived convincingness. Through human evaluation of arguments across different languages, text domains, and topics, we find that in over half of cases, human judgments of convincingness remain unchanged despite variations in perceived emotional intensity; when emotions do have an impact, they more often enhance rather than weaken convincingness. We further analyze whether 11 LLMs behave like humans in the same scenario, finding that while LLMs generally mirror human patterns, they struggle to capture nuanced emotional effects in individual judgments.
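To make the framework concrete, here is a minimal sketch of what an LLM-based manipulation-check pipeline could look like. The prompts and the `query_llm` helper are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of a manipulation-check pipeline in the spirit of the
# paper's framework. All prompts and query_llm() are illustrative
# assumptions, not the authors' implementation.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to any chat-completion API."""
    raise NotImplementedError

def rewrite_with_intensity(argument: str, direction: str) -> str:
    """Ask an LLM to rewrite the argument with higher/lower emotional intensity."""
    return query_llm(
        f"Rewrite the following argument so that it sounds emotionally "
        f"{direction}, while keeping its content and stance unchanged:\n{argument}"
    )

def perceived_intensity(text: str) -> int:
    """LLM-based manipulation check: rate perceived emotional intensity 1-5."""
    return int(query_llm(
        f"On a scale from 1 (neutral) to 5 (highly emotional), rate the "
        f"emotional intensity of this argument. Answer with a number only:\n{text}"
    ))

def passes_manipulation_check(original: str, rewritten: str, min_gap: int = 1) -> bool:
    """Keep a pair only if the rewrite actually shifted perceived intensity."""
    return abs(perceived_intensity(rewritten) - perceived_intensity(original)) >= min_gap

# Pairs that pass the check are then shown to human (or LLM) judges, who pick
# the more convincing variant; an unchanged verdict indicates that the shift
# in emotional intensity did not affect convincingness.
```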
Related papers
- AI shares emotion with humans across languages and cultures [12.530921452568291]
We assess human-AI emotional alignment across linguistic-cultural groups and model families. Our analyses reveal that LLM-derived emotion spaces are structurally congruent with human perception. We show that model expressions can be stably and naturally modulated across distinct emotion categories.
arXiv Detail & Related papers (2025-06-11T14:42:30Z)
- Beyond Context to Cognitive Appraisal: Emotion Reasoning as a Theory of Mind Benchmark for Large Language Models [11.255011967393838]
This study advances beyond surface-level perceptual features to investigate how large language models (LLMs) reason about others' emotional states using contextual information. Grounded in Cognitive Appraisal Theory, we curate a specialized ToM evaluation dataset to assess both forward reasoning (from context to emotion) and backward reasoning (from emotion to inferred context).
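The forward/backward split is easy to picture as two prompt templates. The sketch below is a hypothetical illustration; the benchmark's actual item and answer formats are not shown here.

```python
# Illustrative prompt templates for the two reasoning directions described
# above; the benchmark's real items and answer formats are assumptions.

FORWARD_TEMPLATE = (
    "Context: {context}\n"
    "Question: Given this situation, which emotion is {person} most likely "
    "to feel? Answer with a single emotion word."
)

BACKWARD_TEMPLATE = (
    "Emotion: {person} feels {emotion}.\n"
    "Question: Which of the following situations best explains this "
    "emotion?\n{options}\nAnswer with the option letter."
)

# Forward reasoning: context -> emotion.
forward_prompt = FORWARD_TEMPLATE.format(
    context="Alex studied for weeks and just passed the bar exam.",
    person="Alex",
)

# Backward reasoning: emotion -> inferred context.
backward_prompt = BACKWARD_TEMPLATE.format(
    person="Alex",
    emotion="relieved",
    options="A) Alex missed a flight.\nB) Alex passed a difficult exam.",
)
```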
arXiv Detail & Related papers (2025-05-31T01:18:04Z)
- Don't Get Too Excited -- Eliciting Emotions in LLMs [1.8399318639816038]
This paper investigates the challenges of affect control in large language models (LLMs). We evaluate state-of-the-art open-weight LLMs to assess their affective expressive range. We quantify the models' capacity to express a wide spectrum of emotions and how their emotional expression fluctuates during interactions.
arXiv Detail & Related papers (2025-03-04T10:06:41Z)
- How Deep is Love in LLMs' Hearts? Exploring Semantic Size in Human-like Cognition [75.11808682808065]
This study investigates whether large language models (LLMs) exhibit human-like tendencies in understanding semantic size.
Our findings reveal that multi-modal training is crucial for LLMs to achieve more human-like understanding.
Lastly, we examine whether LLMs are influenced by attention-grabbing headlines with larger semantic sizes in a real-world web shopping scenario.
arXiv Detail & Related papers (2025-03-01T03:35:56Z)
- From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations [19.67703146838264]
Large Language Models (LLMs) have revolutionized the generation of emotional support conversations.
This paper explores the role of personas in the creation of emotional support conversations.
arXiv Detail & Related papers (2025-02-17T05:24:30Z)
- Consistency of Responses and Continuations Generated by Large Language Models on Social Media [9.809922019554461]
Large Language Models (LLMs) demonstrate remarkable capabilities in text generation, yet their emotional consistency and semantic coherence in social media contexts remain insufficiently understood.
This study investigates how LLMs handle emotional content and maintain semantic relationships through continuation and response tasks using two open-source models: Gemma and Llama.
arXiv Detail & Related papers (2025-01-14T13:19:47Z)
- Do Large Language Models Possess Sensitive to Sentiment? [18.88126980975737]
Large Language Models (LLMs) have recently displayed their extraordinary capabilities in language understanding. This paper investigates the ability of LLMs to detect and react to sentiment in the text modality.
arXiv Detail & Related papers (2024-09-04T01:40:20Z)
- Large Language Models are as persuasive as humans, but how? About the cognitive effort and moral-emotional language of LLM arguments [0.0]
Large Language Models (LLMs) are already as persuasive as humans.
This paper investigates the persuasion strategies of LLMs, comparing them with human-generated arguments.
arXiv Detail & Related papers (2024-04-14T19:01:20Z)
- Dynamic Causal Disentanglement Model for Dialogue Emotion Detection [77.96255121683011]
We propose a Dynamic Causal Disentanglement Model based on hidden variable separation.
This model effectively decomposes the content of dialogues and investigates the temporal accumulation of emotions.
Specifically, we propose a dynamic temporal disentanglement model to infer the propagation of utterances and hidden variables.
arXiv Detail & Related papers (2023-09-13T12:58:09Z)
- Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench [83.41621219298489]
We evaluate Large Language Models' (LLMs) anthropomorphic capabilities using the emotion appraisal theory from psychology.
We collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study.
We conduct a human evaluation involving more than 1,200 subjects worldwide.
arXiv Detail & Related papers (2023-08-07T15:18:30Z)
- Large Language Models Understand and Can be Enhanced by Emotional Stimuli [53.53886609012119]
We take the first step towards exploring the ability of Large Language Models to understand emotional stimuli.
Our experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts.
Our human study results demonstrate that EmotionPrompt significantly boosts the performance of generative tasks.
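A sketch of this kind of prompt augmentation follows. The stimulus phrase is one reported in the EmotionPrompt paper; the surrounding helper code is an illustrative assumption.

```python
# Sketch of EmotionPrompt-style augmentation: append an emotional stimulus
# to an otherwise unchanged task prompt. The stimulus phrase is one reported
# in the paper; the helper itself is illustrative.

EMOTIONAL_STIMULUS = "This is very important to my career."

def with_emotion_prompt(task_prompt: str, stimulus: str = EMOTIONAL_STIMULUS) -> str:
    """Return the task prompt with an emotional stimulus appended."""
    return f"{task_prompt} {stimulus}"

baseline = "Summarize the following review in one sentence: ..."
augmented = with_emotion_prompt(baseline)
# The paper reports that such augmented prompts can improve task performance
# relative to the plain baseline prompt.
```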
arXiv Detail & Related papers (2023-07-14T00:57:12Z)
- Speech Synthesis with Mixed Emotions [77.05097999561298]
We propose a novel formulation that measures the relative difference between the speech samples of different emotions.
We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework.
At run-time, we control the model to produce the desired emotion mixture by manually defining an emotion attribute vector.
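As an illustration of that run-time control, the following sketch weights stand-in emotion embeddings with a manually defined attribute vector. The embedding dimensions, emotion set, and downstream synthesizer are assumptions, not the paper's actual model.

```python
import numpy as np

# Hedged sketch of run-time emotion mixing: a manually defined attribute
# vector weights per-emotion embeddings before they condition the (not
# shown) sequence-to-sequence TTS model. All vectors are random stand-ins.

EMOTIONS = ["neutral", "happy", "sad", "angry"]
rng = np.random.default_rng(0)
emotion_embeddings = {e: rng.normal(size=64) for e in EMOTIONS}

def mix_emotions(attribute_vector: dict[str, float]) -> np.ndarray:
    """Weighted combination of emotion embeddings; weights are normalized."""
    total = sum(attribute_vector.values())
    return sum(
        (w / total) * emotion_embeddings[e] for e, w in attribute_vector.items()
    )

# e.g. mostly happy with a touch of sadness ("bittersweet"):
conditioning = mix_emotions({"happy": 0.7, "sad": 0.3})
# `conditioning` would then be fed to the synthesizer's decoder.
```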
arXiv Detail & Related papers (2022-08-11T15:45:58Z)
- Emotion Intensity and its Control for Emotional Voice Conversion [77.05097999561298]
Emotional voice conversion (EVC) seeks to convert the emotional state of an utterance while preserving the linguistic content and speaker identity.
In this paper, we aim to explicitly characterize and control the intensity of emotion.
We propose to disentangle the speaker style from linguistic content and encode the speaker style into a style embedding in a continuous space that forms the prototype of emotion embedding.
arXiv Detail & Related papers (2022-01-10T02:11:25Z)
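A minimal sketch of explicit intensity control in the spirit of the entry above, assuming learned style embeddings are available as vectors; the interpolation scheme here is illustrative rather than the paper's exact formulation.

```python
import numpy as np

# Illustrative intensity control: scale the offset between a neutral style
# embedding and an emotional one. Real EVC systems learn these embeddings;
# the vectors here are random stand-ins.

rng = np.random.default_rng(1)
neutral_style = rng.normal(size=64)
angry_style = rng.normal(size=64)

def set_intensity(neutral: np.ndarray, emotional: np.ndarray, alpha: float) -> np.ndarray:
    """Interpolate styles: alpha=0 -> neutral, alpha=1 -> full emotion."""
    return neutral + alpha * (emotional - neutral)

mild_anger = set_intensity(neutral_style, angry_style, 0.3)
strong_anger = set_intensity(neutral_style, angry_style, 0.9)
# The resulting style embedding conditions the conversion model while the
# linguistic content and speaker identity are left untouched.
```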
This list is automatically generated from the titles and abstracts of the papers on this site.