Vibe Check: Understanding the Effects of LLM-Based Conversational Agents' Personality and Alignment on User Perceptions in Goal-Oriented Tasks
- URL: http://arxiv.org/abs/2509.09870v1
- Date: Thu, 11 Sep 2025 21:43:49 GMT
- Title: Vibe Check: Understanding the Effects of LLM-Based Conversational Agents' Personality and Alignment on User Perceptions in Goal-Oriented Tasks
- Authors: Hasibur Rahman, Smit Desai
- Abstract summary: Large language models (LLMs) enable conversational agents (CAs) to express distinctive personalities. This study investigates how personality expression levels and user-agent personality alignment influence perceptions in goal-oriented tasks.
- Score: 2.1117030125341385
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) enable conversational agents (CAs) to express distinctive personalities, raising new questions about how such designs shape user perceptions. This study investigates how personality expression levels and user-agent personality alignment influence perceptions in goal-oriented tasks. In a between-subjects experiment (N=150), participants completed travel planning with CAs exhibiting low, medium, or high expression across the Big Five traits, controlled via our novel Trait Modulation Keys framework. Results revealed an inverted-U relationship: medium expression produced the most positive evaluations across Intelligence, Enjoyment, Anthropomorphism, Intention to Adopt, Trust, and Likeability, significantly outperforming both extremes. Personality alignment further enhanced outcomes, with Extraversion and Emotional Stability emerging as the most influential traits. Cluster analysis identified three distinct compatibility profiles, with "Well-Aligned" users reporting substantially positive perceptions. These findings demonstrate that personality expression and strategic trait alignment constitute optimal design targets for CA personality, offering design implications as LLM-based CAs become increasingly prevalent.
Related papers
- The Bots of Persuasion: Examining How Conversational Agents' Linguistic Expressions of Personality Affect User Perceptions and Decisions [14.362949339129637]
Large Language Model-powered conversational agents (CAs) are increasingly capable of projecting sophisticated personalities through language. We examine how CA personalities expressed linguistically affect user decisions and perceptions in the context of charitable giving. Our findings emphasize the risks CAs pose as instruments of manipulation, subtly influencing user perceptions and decisions.
arXiv Detail & Related papers (2026-02-19T09:10:41Z)
- PERSONA: Dynamic and Compositional Inference-Time Personality Control via Activation Vector Algebra [84.59328460968872]
Current methods for personality control in Large Language Models rely on static prompting or expensive fine-tuning. We introduce PERSONA, a training-free framework that achieves fine-tuning-level performance through direct manipulation of personality vectors. On PersonalityBench, our approach achieves a mean score of 9.60, nearly matching the supervised fine-tuning upper bound of 9.61 without any gradient updates.
arXiv Detail & Related papers (2026-02-17T15:47:58Z)
- Personality as Relational Infrastructure: User Perceptions of Personality-Trait-Infused LLM Messaging [0.6999740786886536]
We show that personality-based personalisation in behaviour change systems may operate primarily through aggregate exposure rather than per-message effects. In-situ longitudinal studies are needed to validate these findings in real-world contexts.
arXiv Detail & Related papers (2026-02-06T10:47:47Z)
- Enhancing Personality Recognition by Comparing the Predictive Power of Traits, Facets, and Nuances [37.83859643892549]
Personality recognition models aim to infer personality traits from different sources of behavioral data. We trained a transformer-based model including cross-modal (audiovisual) and cross-subject (dyad-aware) attention mechanisms. Results show that nuance-level models consistently outperform facet- and trait-level models, reducing mean squared error by up to 74% across interaction scenarios.
arXiv Detail & Related papers (2026-02-05T13:35:04Z)
- The Personality Illusion: Revealing Dissociation Between Self-Reports & Behavior in LLMs [60.15472325639723]
Personality traits have long been studied as predictors of human behavior. Recent advances in Large Language Models (LLMs) suggest similar patterns may emerge in artificial systems.
arXiv Detail & Related papers (2025-09-03T21:27:10Z)
- Beyond Self-Reports: Multi-Observer Agents for Personality Assessment in Large Language Models [2.7010154811483167]
This paper proposes a novel multi-observer framework for personality trait assessment in LLM agents. Instead of relying on self-assessments, we employ multiple observer agents. We show that these observer-report ratings align more closely with human judgments than traditional self-assessments.
arXiv Detail & Related papers (2025-04-11T10:03:55Z)
- Exploring the Impact of Personality Traits on Conversational Recommender Systems: A Simulation with Large Language Models [70.180385882195]
This paper introduces a personality-aware user simulation for Conversational Recommender Systems (CRSs). The user agent induces customizable personality traits and preferences, while the system agent possesses the persuasion capability to simulate realistic interaction in CRSs. Experimental results demonstrate that state-of-the-art LLMs can effectively generate diverse user responses aligned with specified personality traits.
arXiv Detail & Related papers (2025-04-09T13:21:17Z)
- Evaluating Large Language Models with Psychometrics [59.821829073478376]
This paper offers a comprehensive benchmark for quantifying psychological constructs of Large Language Models (LLMs). Our work identifies five key psychological constructs -- personality, values, emotional intelligence, theory of mind, and self-efficacy -- assessed through a suite of 13 datasets. We uncover significant discrepancies between LLMs' self-reported traits and their response patterns in real-world scenarios, revealing complexities in their behaviors.
arXiv Detail & Related papers (2024-06-25T16:09:08Z)
- P-React: Synthesizing Topic-Adaptive Reactions of Personality Traits via Mixture of Specialized LoRA Experts [34.374681921626205]
We propose P-React, a mixture-of-experts (MoE)-based personalized large language model. In particular, we integrate a Personality Loss (PSL) to better capture individual trait expressions. To facilitate research in this field, we curate OCEAN-Chat, a high-quality, human-verified dataset.
arXiv Detail & Related papers (2024-06-18T12:25:13Z)
- PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for Personality Detection [50.66968526809069]
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
arXiv Detail & Related papers (2023-10-31T08:23:33Z) - Editing Personality for Large Language Models [73.59001811199823]
This paper introduces an innovative task focused on editing the personality traits of Large Language Models (LLMs).
We construct PersonalityEdit, a new benchmark dataset to address this task.
arXiv Detail & Related papers (2023-10-03T16:02:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.