Emotionally Charged, Logically Blurred: AI-driven Emotional Framing Impairs Human Fallacy Detection
- URL: http://arxiv.org/abs/2510.09695v1
- Date: Thu, 09 Oct 2025 14:57:37 GMT
- Title: Emotionally Charged, Logically Blurred: AI-driven Emotional Framing Impairs Human Fallacy Detection
- Authors: Yanran Chen, Lynn Greschner, Roman Klinger, Michael Klenk, Steffen Eger
- Abstract summary: We present the first computational study of how emotional framing interacts with fallacies and convincingness. We use large language models (LLMs) to systematically change emotional appeals in fallacious arguments. Our work has implications for AI-driven emotional manipulation in the context of fallacious argumentation.
- Score: 25.196971926947906
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Logical fallacies are common in public communication and can mislead audiences; fallacious arguments may still appear convincing despite lacking soundness, because convincingness is inherently subjective. We present the first computational study of how emotional framing interacts with fallacies and convincingness, using large language models (LLMs) to systematically change emotional appeals in fallacious arguments. We benchmark eight LLMs on injecting emotional appeal into fallacious arguments while preserving their logical structures, then use the best models to generate stimuli for a human study. Our results show that LLM-driven emotional framing reduces human fallacy detection performance (F1) by 14.5% on average. Humans perform better at fallacy detection when perceiving enjoyment than fear or sadness, and these three emotions also correlate with significantly higher convincingness compared to neutral or other emotional states. Our work has implications for AI-driven emotional manipulation in the context of fallacious argumentation.
Related papers
- Reflecting Twice before Speaking with Empathy: Self-Reflective Alternating Inference for Empathy-Aware End-to-End Spoken Dialogue [53.95386201009769]
We introduce EmpathyEval, a descriptive natural-language-based evaluation model for assessing empathetic quality in spoken dialogues. We propose ReEmpathy, an end-to-end spoken language model that enhances empathetic dialogue through a novel Empathetic Self-Reflective Alternating Inference mechanism.
arXiv Detail & Related papers (2026-01-26T09:04:50Z) - Categorical Emotions or Appraisals - Which Emotion Model Explains Argument Convincingness Better? [7.221399245137941]
We argue that the emotion an argument evokes in a recipient is subjective. It depends on the recipient's goals, standards, prior knowledge, and stance. This work presents the first systematic comparison between emotion models for convincingness prediction.
arXiv Detail & Related papers (2025-11-10T14:53:04Z) - Outraged AI: Large language models prioritise emotion over cost in fairness enforcement [13.51400164704227]
We show that large language models (LLMs) use emotion to guide punishment. Unfairness elicited stronger negative emotion that led to more punishment. We propose that future models should integrate emotion with context-sensitive reasoning to achieve human-like emotional intelligence.
arXiv Detail & Related papers (2025-10-17T08:41:36Z) - Do Emotions Really Affect Argument Convincingness? A Dynamic Approach with LLM-based Manipulation Checks [22.464222858889084]
We introduce a dynamic framework inspired by manipulation checks commonly used in psychology and social science. This framework examines the extent to which perceived emotional intensity influences perceived convincingness. We find that in over half of cases, human judgments of convincingness remain unchanged despite variations in perceived emotional intensity.
arXiv Detail & Related papers (2025-02-24T10:04:44Z) - Fearful Falcons and Angry Llamas: Emotion Category Annotations of Arguments by Humans and LLMs [9.088303226909277]
We crowdsource subjective annotations of emotion categories in a German argument corpus and evaluate automatic labeling methods. We find that emotion categories enhance the prediction of emotionality in arguments. Across all prompt settings and models, automatic predictions show a high recall but low precision for predicting anger and fear.
arXiv Detail & Related papers (2024-12-20T15:41:47Z) - ECR-Chain: Advancing Generative Language Models to Better Emotion-Cause Reasoners through Reasoning Chains [61.50113532215864]
Causal Emotion Entailment (CEE) aims to identify the causal utterances in a conversation that stimulate the emotions expressed in a target utterance.
Current works in CEE mainly focus on modeling semantic and emotional interactions in conversations.
We introduce a step-by-step reasoning method, Emotion-Cause Reasoning Chain (ECR-Chain), to infer the stimulus from the target emotional expressions in conversations.
arXiv Detail & Related papers (2024-05-17T15:45:08Z) - The Good, The Bad, and Why: Unveiling Emotions in Generative AI [73.94035652867618]
We show that EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.
EmotionDecode reveals that AI models can comprehend emotional stimuli akin to the mechanism of dopamine in the human brain.
arXiv Detail & Related papers (2023-12-18T11:19:45Z) - Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with features deemed salient in their prediction of emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not considered salient features for emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection.
arXiv Detail & Related papers (2023-11-16T06:20:13Z) - Large Language Models Understand and Can be Enhanced by Emotional Stimuli [53.53886609012119]
We take the first step towards exploring the ability of Large Language Models to understand emotional stimuli.
Our experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts.
Our human study results demonstrate that EmotionPrompt significantly boosts the performance of generative tasks.
arXiv Detail & Related papers (2023-07-14T00:57:12Z) - Empathetic Response Generation through Graph-based Multi-hop Reasoning on Emotional Causality [13.619616838801006]
Empathetic response generation aims to comprehend the user emotion and then respond to it appropriately.
Most existing works focus merely on what the emotion is and ignore how the emotion is evoked.
We consider the emotional causality, namely, what feelings the user expresses and why the user has such feelings.
arXiv Detail & Related papers (2021-10-09T17:12:41Z) - Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes [50.569762345799354]
We argue that two issues must be tackled at the same time: (i) identifying which word in the other person's utterance is the cause of their emotion and (ii) reflecting those specific words in the generated response.
Taking inspiration from social cognition, we leverage a generative estimator to infer emotion cause words from utterances with no word-level label.
arXiv Detail & Related papers (2021-09-18T04:22:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.