The Bots of Persuasion: Examining How Conversational Agents' Linguistic Expressions of Personality Affect User Perceptions and Decisions
- URL: http://arxiv.org/abs/2602.17185v1
- Date: Thu, 19 Feb 2026 09:10:41 GMT
- Title: The Bots of Persuasion: Examining How Conversational Agents' Linguistic Expressions of Personality Affect User Perceptions and Decisions
- Authors: Uğur Genç, Heng Gu, Chadha Degachi, Evangelos Niforatos, Senthil Chandrasegaran, Himanshu Verma
- Abstract summary: Large Language Model-powered conversational agents (CAs) are increasingly capable of projecting sophisticated personalities through language. We examine how CA personalities expressed linguistically affect user decisions and perceptions in the context of charitable giving. Our findings emphasize the risks CAs pose as instruments of manipulation, subtly influencing user perceptions and decisions.
- Score: 14.362949339129637
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Model-powered conversational agents (CAs) are increasingly capable of projecting sophisticated personalities through language, but how these projections affect users is unclear. We thus examine how CA personalities expressed linguistically affect user decisions and perceptions in the context of charitable giving. In a crowdsourced study, 360 participants interacted with one of eight CAs, each projecting a personality composed of three linguistic aspects: attitude (optimistic/pessimistic), authority (authoritative/submissive), and reasoning (emotional/rational). While the CA's composite personality did not affect participants' decisions, it did affect their perceptions and emotional responses. In particular, participants interacting with pessimistic CAs reported a lower emotional state and lower affinity toward the cause, perceived the CA as less trustworthy and less competent, and yet tended to donate more to the charity. Perceptions of trust, competence, and situational empathy significantly predicted donation decisions. Our findings emphasize the risks CAs pose as instruments of manipulation, subtly influencing user perceptions and decisions.
Related papers
- Vibe Check: Understanding the Effects of LLM-Based Conversational Agents' Personality and Alignment on User Perceptions in Goal-Oriented Tasks [2.1117030125341385]
Large language models (LLMs) enable conversational agents (CAs) to express distinctive personalities. This study investigates how personality expression levels and user-agent personality alignment influence perceptions in goal-oriented tasks.
arXiv Detail & Related papers (2025-09-11T21:43:49Z) - Hateful Person or Hateful Model? Investigating the Role of Personas in Hate Speech Detection by Large Language Models [47.110656690979695]
We present the first comprehensive study on the role of persona prompts in hate speech classification. A human annotation survey confirms that MBTI dimensions significantly affect labeling behavior. Our analysis uncovers substantial persona-driven variation, including inconsistencies with ground truth, inter-persona disagreement, and logit-level biases.
arXiv Detail & Related papers (2025-06-10T09:02:55Z) - Must Read: A Systematic Survey of Computational Persuasion [60.83151988635103]
AI-driven persuasion can be leveraged for beneficial applications, but also poses threats through manipulation and unethical influence. Our survey outlines future research directions to enhance the safety, fairness, and effectiveness of AI-powered persuasion.
arXiv Detail & Related papers (2025-05-12T17:26:31Z) - Sentient Agent as a Judge: Evaluating Higher-Order Social Cognition in Large Language Models [75.85319609088354]
Sentient Agent as a Judge (SAGE) is an evaluation framework for large language models. SAGE instantiates a Sentient Agent that simulates human-like emotional changes and inner thoughts during interaction. SAGE provides a principled, scalable and interpretable tool for tracking progress toward genuinely empathetic and socially adept language agents.
arXiv Detail & Related papers (2025-05-01T19:06:10Z) - Exploring the Impact of Personality Traits on Conversational Recommender Systems: A Simulation with Large Language Models [70.180385882195]
This paper introduces a personality-aware user simulation for Conversational Recommender Systems (CRSs). The user agent induces customizable personality traits and preferences, while the system agent possesses the persuasion capability to simulate realistic interaction in CRSs. Experimental results demonstrate that state-of-the-art LLMs can effectively generate diverse user responses aligned with specified personality traits.
arXiv Detail & Related papers (2025-04-09T13:21:17Z) - Human Decision-making is Susceptible to AI-driven Manipulation [87.24007555151452]
AI systems may exploit users' cognitive biases and emotional vulnerabilities to steer them toward harmful outcomes. This study examined human susceptibility to such manipulation in financial and emotional decision-making contexts.
arXiv Detail & Related papers (2025-02-11T15:56:22Z) - The Dark Patterns of Personalized Persuasion in Large Language Models: Exposing Persuasive Linguistic Features for Big Five Personality Traits in LLMs Responses [0.0]
We identify 13 linguistic features crucial for influencing personalities across different levels of the Big Five model of personality.
Findings show that models use more anxiety-related words for neuroticism, increase achievement-related words for conscientiousness, and employ fewer cognitive processes words for openness to experience.
arXiv Detail & Related papers (2024-11-08T23:02:59Z) - Large Language Models Can Infer Personality from Free-Form User Interactions [0.0]
GPT-4 can infer personality with moderate accuracy, outperforming previous approaches.
Results show that the direct focus on personality assessment did not result in a less positive user experience.
Preliminary analyses suggest that the accuracy of personality inferences varies only marginally across different socio-demographic subgroups.
arXiv Detail & Related papers (2024-05-19T20:33:36Z) - Affective-NLI: Towards Accurate and Interpretable Personality Recognition in Conversation [30.820334868031537]
Personality Recognition in Conversation (PRC) aims to identify the personality traits of speakers through textual dialogue content.
We propose Affective Natural Language Inference (Affective-NLI) for accurate and interpretable PRC.
arXiv Detail & Related papers (2024-04-03T09:14:24Z) - Understanding Public Perceptions of AI Conversational Agents: A Cross-Cultural Analysis [22.93365830074122]
Conversational Agents (CAs) have increasingly been integrated into everyday life, sparking significant discussions on social media.
This study used computational methods to analyze about one million social media discussions surrounding CAs.
We find that Chinese participants tended to view CAs hedonically, and perceived voice-based and physically embodied CAs as warmer and more competent.
arXiv Detail & Related papers (2024-02-25T09:34:22Z) - Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.