Incongruent Positivity: When Miscalibrated Positivity Undermines Online Supportive Conversations
- URL: http://arxiv.org/abs/2509.10184v1
- Date: Fri, 12 Sep 2025 12:25:02 GMT
- Title: Incongruent Positivity: When Miscalibrated Positivity Undermines Online Supportive Conversations
- Authors: Leen Almajed, Abeer Aldayel
- Abstract summary: In emotionally supportive conversations, well-intended positivity can sometimes misfire, leading to responses that feel dismissive, minimizing, or unrealistically optimistic. We examine this phenomenon of incongruent positivity as miscalibrated expressions of positive support in both human and LLM-generated responses. Our findings highlight the need to move beyond merely generating generic positive responses and instead study congruent support measures that balance positive affect with emotional acknowledgment.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In emotionally supportive conversations, well-intended positivity can sometimes misfire, leading to responses that feel dismissive, minimizing, or unrealistically optimistic. We examine this phenomenon of incongruent positivity as miscalibrated expressions of positive support in both human and LLM-generated responses. To this end, we collected real user-assistant dialogues from Reddit across a range of emotional intensities and generated additional responses for the same contexts using large language models. We categorize these conversations into two intensity levels: Mild, which covers relationship tension and general advice, and Severe, which covers grief and anxiety conversations. This categorization enables a comparative analysis of how supportive responses vary between lower- and higher-stakes contexts. Our analysis reveals that LLMs are more prone to unrealistic positivity through a dismissive and minimizing tone, particularly in high-stakes contexts. To further study the underlying dimensions of this phenomenon, we fine-tune LLMs on datasets with strong and weak emotional reactions. Moreover, we develop a weakly supervised multilabel classifier ensemble (DeBERTa and MentalBERT) that shows improved detection of incongruent positivity types across both levels of concern (Mild and Severe). Our findings highlight the need to move beyond merely generating generic positive responses and instead study congruent support measures that balance positive affect with emotional acknowledgment. This approach offers insights into aligning large language models with affective expectations in online supportive dialogue, paving the way toward context-aware and trust-preserving online conversation systems.
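The abstract describes the DeBERTa + MentalBERT detector only at a high level, so the following is a minimal, hypothetical sketch of how such a multilabel ensemble could be wired up with Hugging Face `transformers`: each encoder scores a response for the incongruent-positivity types named in the abstract, and the ensemble averages per-label sigmoid scores. The label set, base checkpoints, equal-weight averaging, and 0.5 threshold are illustrative assumptions, not the authors' released artifacts; in practice each model would first be fine-tuned on the weakly supervised labels.

```python
# Hypothetical sketch of a DeBERTa + MentalBERT multilabel ensemble for
# detecting incongruent-positivity types. Labels, checkpoints, and the
# averaging rule are assumptions, not the paper's released artifacts.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["dismissive", "minimizing", "unrealistic_optimism"]  # assumed label set

def load(name: str):
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(
        name,
        num_labels=len(LABELS),
        problem_type="multi_label_classification",  # independent sigmoid per label
    )
    model.eval()
    return tok, model

# Base checkpoints; each would first be fine-tuned on the weakly
# supervised incongruent-positivity labels before any real use.
ENSEMBLE = [load("microsoft/deberta-v3-base"),
            load("mental/mental-bert-base-uncased")]

@torch.no_grad()
def detect(response: str, threshold: float = 0.5) -> dict:
    """Average per-label sigmoid scores across the two encoders."""
    scores = []
    for tok, model in ENSEMBLE:
        batch = tok(response, return_tensors="pt", truncation=True)
        scores.append(torch.sigmoid(model(**batch).logits).squeeze(0))
    mean = torch.stack(scores).mean(dim=0)
    return {label: s.item() for label, s in zip(LABELS, mean) if s >= threshold}

# Example: a reply that glosses over grief with generic optimism.
print(detect("Don't be sad, everything happens for a reason. Stay positive!"))
```

Equal-weight averaging is the simplest combination rule; a validation-tuned weighting or per-label thresholds would likely track the paper's Mild/Severe split more closely.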
Related papers
- Mitigating Conversational Inertia in Multi-Turn Agents [47.35031006899519]
We identify conversational inertia, a phenomenon where models exhibit strong diagonal attention to previous responses. We propose Context Preference Learning to calibrate model preferences to favor low-inertia responses over high-inertia ones.
arXiv Detail & Related papers (2026-02-03T15:47:32Z) - Reflecting Twice before Speaking with Empathy: Self-Reflective Alternating Inference for Empathy-Aware End-to-End Spoken Dialogue [53.95386201009769]
We introduce EmpathyEval, a descriptive natural-language-based evaluation model for assessing empathetic quality in spoken dialogues. We propose ReEmpathy, an end-to-end Spoken Language Model that enhances empathetic dialogue through a novel Empathetic Self-Reflective Alternating Inference mechanism.
arXiv Detail & Related papers (2026-01-26T09:04:50Z) - Affective Multimodal Agents with Proactive Knowledge Grounding for Emotionally Aligned Marketing Dialogue [3.780355670921318]
AffectMind is a multimodal affective dialogue agent that performs proactive reasoning and dynamic knowledge grounding to sustain emotionally aligned and persuasive interactions. Experiments show that AffectMind outperforms strong LLM-based baselines in emotional consistency, persuasive success rate, and long-term user engagement.
arXiv Detail & Related papers (2025-11-21T04:16:45Z) - Investigating Safety Vulnerabilities of Large Audio-Language Models Under Speaker Emotional Variations [94.62792643569567]
This work systematically investigates the role of speaker emotion. We construct a dataset of malicious speech instructions expressed across multiple emotions and intensities, and evaluate several state-of-the-art LALMs. Our results reveal substantial safety inconsistencies: different emotions elicit varying levels of unsafe responses, and the effect of intensity is non-monotonic, with medium expressions often posing the greatest risk.
arXiv Detail & Related papers (2025-10-19T15:41:25Z) - Evaluating & Reducing Deceptive Dialogue From Language Models with Multi-turn RL [64.3268313484078]
Large Language Models (LLMs) interact with millions of people worldwide in applications such as customer support, education, and healthcare. Their ability to produce deceptive outputs, whether intentionally or inadvertently, poses significant safety concerns. We investigate the extent to which LLMs engage in deception within dialogue, and propose the belief misalignment metric to quantify deception.
arXiv Detail & Related papers (2025-10-16T05:29:36Z) - Ensembling Large Language Models to Characterize Affective Dynamics in Student-AI Tutor Dialogues [18.497635186707008]
This work introduces the first ensemble-LLM framework for large-scale affect sensing in tutoring dialogues. We analyzed two semesters' worth of 16,986 conversational turns exchanged between PyTutor, an AI tutor, and 261 undergraduate learners across three U.S. institutions.
arXiv Detail & Related papers (2025-10-13T04:43:56Z) - Conversations: Love Them, Hate Them, Steer Them [10.014248704653]
Large Language Models (LLMs) demonstrate increasing conversational fluency, yet instilling them with nuanced, human-like emotional expression remains a significant challenge. This paper demonstrates that targeted activation engineering can steer LLaMA 3.1-8B to exhibit more human-like emotional nuances.
arXiv Detail & Related papers (2025-05-23T02:58:45Z) - Objective quantification of mood states using large language models [0.0]
Large Language Models (LLMs) showcase an excellent level of response consistency across wide-ranging contexts. We leverage these parallels to establish a framework for quantifying mental states.
arXiv Detail & Related papers (2025-02-13T16:52:06Z) - Consistency of Responses and Continuations Generated by Large Language Models on Social Media [9.809922019554461]
Large Language Models (LLMs) demonstrate remarkable capabilities in text generation, yet their emotional consistency and semantic coherence in social media contexts remain insufficiently understood. This study investigates how LLMs handle emotional content and maintain semantic relationships through continuation and response tasks using two open-source models: Gemma and Llama.
arXiv Detail & Related papers (2025-01-14T13:19:47Z) - Can LLMs Understand the Implication of Emphasized Sentences in Dialogue? [64.72966061510375]
Emphasis is a crucial component in human communication, which indicates the speaker's intention and implication beyond pure text in dialogue.
This paper introduces Emphasized-Talk, a benchmark with emphasis-annotated dialogue samples capturing the implications of emphasis.
We evaluate various Large Language Models (LLMs), both open-source and commercial, to measure their performance in understanding emphasis.
arXiv Detail & Related papers (2024-06-16T20:41:44Z) - Empathy Through Multimodality in Conversational Interfaces [1.360649555639909]
Conversational Health Agents (CHAs) are redefining healthcare by offering nuanced support that transcends textual analysis to incorporate emotional intelligence.
This paper introduces an LLM-based CHA engineered for rich, multimodal dialogue, especially in the realm of mental health support.
It adeptly interprets and responds to users' emotional states by analyzing multimodal cues, thus delivering contextually aware and empathetically resonant verbal responses.
arXiv Detail & Related papers (2024-05-08T02:48:29Z) - AntEval: Evaluation of Social Interaction Competencies in LLM-Driven Agents [65.16893197330589]
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z) - DRESS: Instructing Large Vision-Language Models to Align and Interact with Humans via Natural Language Feedback [61.28463542324576]
We present DRESS, a large vision-language model (LVLM) that innovatively exploits natural language feedback (NLF) from Large Language Models.
We propose a novel categorization of the NLF into two key types: critique and refinement.
Our experimental results demonstrate that DRESS can generate more helpful (9.76%), honest (11.52%), and harmless (21.03%) responses.
arXiv Detail & Related papers (2023-11-16T18:37:29Z) - Facilitating Multi-turn Emotional Support Conversation with Positive Emotion Elicitation: A Reinforcement Learning Approach [58.88422314998018]
Emotional support conversation (ESC) aims to provide emotional support (ES) to improve one's mental state.
Existing works focus on fitting grounded responses and response strategies, ignoring their effect on ES and lacking explicit goals to guide positive emotional transitions.
We introduce a new paradigm to formalize multi-turn ESC as a process of positive emotion elicitation.
arXiv Detail & Related papers (2023-07-16T09:58:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.