Personality Expression Across Contexts: Linguistic and Behavioral Variation in LLM Agents
- URL: http://arxiv.org/abs/2602.01063v1
- Date: Sun, 01 Feb 2026 07:14:00 GMT
- Title: Personality Expression Across Contexts: Linguistic and Behavioral Variation in LLM Agents
- Authors: Bin Han, Deuksin Kwon, Jonathan Gratch
- Abstract summary: This study examines how identical personality prompts lead to distinct linguistic, behavioral, and emotional outcomes across four conversational settings. Findings suggest that the same traits are expressed differently depending on social and affective demands.
- Score: 6.123697959900301
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) can be conditioned with explicit personality prompts, yet their behavioral realization often varies depending on context. This study examines how identical personality prompts lead to distinct linguistic, behavioral, and emotional outcomes across four conversational settings: ice-breaking, negotiation, group decision, and empathy tasks. Results show that contextual cues systematically influence both personality expression and emotional tone, suggesting that the same traits are expressed differently depending on social and affective demands. This raises an important question for LLM-based dialogue agents: whether such variations reflect inconsistency or context-sensitive adaptation akin to human behavior. Viewed through the lens of Whole Trait Theory, these findings highlight that LLMs exhibit context-sensitive rather than fixed personality expression, adapting flexibly to social interaction goals and affective conditions.
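The setup the abstract describes, one fixed trait prompt reused across four conversational settings, can be sketched as follows. This is a minimal illustration, not the paper's actual protocol: the persona wording, scenario descriptions, and `build_messages` helper are all hypothetical, chosen only to show how the context instruction varies while the trait description stays constant.

```python
# Hypothetical sketch: the same explicit personality prompt is paired with
# each of the four conversational settings from the paper, so any difference
# in model behavior is attributable to context, not the trait description.

PERSONA = (
    "You are a highly extraverted, highly agreeable assistant: "
    "outgoing, warm, and cooperative in every reply."
)

# Scenario wording is illustrative; the paper does not publish these strings here.
CONTEXTS = {
    "ice-breaking": "Get to know your partner through casual small talk.",
    "negotiation": "Negotiate the price of a used laptop with your partner.",
    "group-decision": "Work with the group to pick one restaurant for dinner.",
    "empathy": "Your partner just lost their job; respond supportively.",
}

def build_messages(context_name: str, user_turn: str) -> list[dict]:
    """Combine the fixed persona with a context-specific task instruction."""
    system = f"{PERSONA}\nScenario: {CONTEXTS[context_name]}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_turn},
    ]
```

Sending `build_messages(ctx, turn)` to any chat-style LLM for each of the four contexts, then comparing the linguistic and emotional features of the replies, reproduces the shape of the experiment described above.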
Related papers
- Can LLMs Truly Embody Human Personality? Analyzing AI and Human Behavior Alignment in Dispute Resolution [7.599497643290519]
Large language models (LLMs) are increasingly used to simulate human behavior in social settings.
It remains unclear whether these simulations reproduce the personality-behavior patterns observed in humans.
arXiv Detail & Related papers (2026-02-07T07:20:24Z) - Reflecting Twice before Speaking with Empathy: Self-Reflective Alternating Inference for Empathy-Aware End-to-End Spoken Dialogue [53.95386201009769]
We introduce EmpathyEval, a descriptive natural-language-based evaluation model for assessing empathetic quality in spoken dialogues.
We propose ReEmpathy, an end-to-end spoken language model that enhances empathetic dialogue through a novel Empathetic Self-Reflective Alternating Inference mechanism.
arXiv Detail & Related papers (2026-01-26T09:04:50Z) - A Unified Spoken Language Model with Injected Emotional-Attribution Thinking for Human-like Interaction [50.05919688888947]
This paper presents a unified spoken language model for emotional intelligence, enhanced by a novel data construction strategy termed Injected Emotional-Attribution Thinking (IEAT).
IEAT incorporates user emotional states and their underlying causes into the model's internal reasoning process, enabling emotion-aware reasoning to be internalized rather than treated as explicit supervision.
Experiments on the Human-like Spoken Dialogue Systems Challenge (HumDial) Emotional Intelligence benchmark demonstrate that the proposed approach achieves top-ranked performance across emotional trajectory modeling, emotional reasoning, and empathetic response generation.
arXiv Detail & Related papers (2026-01-08T14:07:30Z) - EchoMind: An Interrelated Multi-level Benchmark for Evaluating Empathetic Speech Language Models [47.41816926003011]
Speech Language Models (SLMs) have made significant progress in spoken language understanding.
It remains unclear whether SLMs can fully perceive non-lexical vocal cues alongside spoken words and respond with empathy that aligns with both emotional and contextual factors.
We present EchoMind, the first interrelated, multi-level benchmark that simulates the cognitive process of empathetic dialogue.
arXiv Detail & Related papers (2025-10-26T17:15:56Z) - Evaluating Behavioral Alignment in Conflict Dialogue: A Multi-Dimensional Comparison of LLM Agents and Humans [3.0760465083020345]
Large Language Models (LLMs) are increasingly deployed in socially complex, interaction-driven tasks.
This study assesses the behavioral alignment of personality-prompted LLMs in adversarial dispute resolution.
arXiv Detail & Related papers (2025-09-19T20:15:52Z) - The Personality Illusion: Revealing Dissociation Between Self-Reports & Behavior in LLMs [60.15472325639723]
Personality traits have long been studied as predictors of human behavior.
Recent advances in Large Language Models (LLMs) suggest similar patterns may emerge in artificial systems.
arXiv Detail & Related papers (2025-09-03T21:27:10Z) - Do Emotions Really Affect Argument Convincingness? A Dynamic Approach with LLM-based Manipulation Checks [22.464222858889084]
We introduce a dynamic framework inspired by manipulation checks commonly used in psychology and social science.
This framework examines the extent to which perceived emotional intensity influences perceived convincingness.
We find that in over half of cases, human judgments of convincingness remain unchanged despite variations in perceived emotional intensity.
arXiv Detail & Related papers (2025-02-24T10:04:44Z) - Can LLM Agents Maintain a Persona in Discourse? [3.286711575862228]
Large Language Models (LLMs) are widely used as conversational agents, exploiting their capabilities in various sectors such as education, law, medicine, and more.
LLMs are often prone to context-shifting behaviour, resulting in a lack of consistent and interpretable personality-aligned interactions.
We show that while LLMs can be guided toward personality-driven dialogue, their ability to maintain personality traits varies significantly depending on the combination of models and discourse settings.
arXiv Detail & Related papers (2025-02-17T14:36:39Z) - From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations [34.426199139914615]
Large Language Models (LLMs) have revolutionized the generation of emotional support conversations.
This paper explores the role of personas in the creation of emotional support conversations.
arXiv Detail & Related papers (2025-02-17T05:24:30Z) - Personality-affected Emotion Generation in Dialog Systems [67.40609683389947]
We propose a new task, Personality-affected Emotion Generation, to generate emotion based on the personality given to the dialog system.
We analyze the challenges in this task, i.e., (1) heterogeneously integrating personality and emotional factors and (2) extracting multi-granularity emotional information in the dialog context.
Results show that adopting our method improves emotion generation performance by 13% in macro-F1 and 5% in weighted-F1 over the BERT-base model.
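The two metrics reported above differ in how per-class F1 scores are aggregated: macro-F1 averages them uniformly, while weighted-F1 weights each class by its support. A minimal pure-Python sketch (the helper names and the toy labels are illustrative, not from the paper):

```python
from collections import Counter

def f1_per_class(y_true, y_pred, labels):
    """Per-class F1 from true/predicted label sequences."""
    scores = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1: every class counts equally."""
    labels = sorted(set(y_true))
    per = f1_per_class(y_true, y_pred, labels)
    return sum(per.values()) / len(labels)

def weighted_f1(y_true, y_pred):
    """Per-class F1 weighted by class frequency in the true labels."""
    labels = sorted(set(y_true))
    per = f1_per_class(y_true, y_pred, labels)
    counts = Counter(y_true)
    n = len(y_true)
    return sum(per[c] * counts[c] / n for c in labels)
```

Because macro-F1 gives rare emotion classes the same weight as frequent ones, a larger macro-F1 gain (13% vs. 5%) suggests the improvement is concentrated in less frequent emotions.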
arXiv Detail & Related papers (2024-04-03T08:48:50Z) - Emotional Listener Portrait: Realistic Listener Motion Simulation in Conversation [50.35367785674921]
Listener head generation centers on generating non-verbal behaviors of a listener in reference to the information delivered by a speaker.
A significant challenge when generating such responses is the non-deterministic nature of fine-grained facial expressions during a conversation.
We propose the Emotional Listener Portrait (ELP), which treats each fine-grained facial motion as a composition of several discrete motion-codewords.
Our ELP model can not only automatically generate natural and diverse responses toward a given speaker via sampling from the learned distribution but also generate controllable responses with a predetermined attitude.
arXiv Detail & Related papers (2023-09-29T18:18:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.