Exploring human-SAV interaction using LLMs: The impact of psychological factors on user experience
- URL: http://arxiv.org/abs/2504.16548v2
- Date: Fri, 10 Oct 2025 01:55:08 GMT
- Title: Exploring human-SAV interaction using LLMs: The impact of psychological factors on user experience
- Authors: Lirui Guo, Michael G. Burke, Wynita M. Griggs
- Abstract summary: We investigate how prompt strategies in large language model (LLM)-powered conversational SAV agents affect users' perceptions, experiences, and intentions to adopt the technology. We designed four SAV agents with varying levels of anthropomorphic characteristics and psychological ownership triggers. Results indicate that an SAV designed to be more anthropomorphic and to induce psychological ownership improved users' perceptions of the SAV's human-like qualities.
- Score: 1.0195618602298684
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been extensive prior work exploring how psychological factors such as anthropomorphism affect the adoption of Shared Autonomous Vehicles (SAVs). However, limited research has been conducted on how prompt strategies in large language model (LLM)-powered conversational SAV agents affect users' perceptions, experiences, and intentions to adopt such technology. In this work, we investigate how conversational SAV agents powered by LLMs drive these psychological factors, such as psychological ownership, the sense of possession a user may come to feel towards an entity or object they may not legally own. We designed four SAV agents with varying levels of anthropomorphic characteristics and psychological ownership triggers. Quantitative measures of psychological ownership, anthropomorphism, quality of service, disclosure tendency, sentiment of SAV responses, and overall acceptance were collected after participants interacted with each SAV. Qualitative feedback was also gathered regarding the experience of psychological ownership during the interactions. The results indicate that an SAV designed to be more anthropomorphic and to induce psychological ownership improved users' perceptions of the SAV's human-like qualities, and its responses were perceived as more positive but also more subjective compared to the control conditions. Qualitative findings support established routes to psychological ownership in the SAV context and suggest that the conversational agent's perceived performance may also influence psychological ownership. Both quantitative and qualitative outcomes highlight the importance of personalization in designing effective SAV interactions. These findings provide practical guidance for designing conversational SAV agents that enhance user experience and adoption.
Related papers
- Mental Health Impacts of AI Companions: Triangulating Social Media Quasi-Experiments, User Perspectives, and Relational Theory [18.716972390545703]
We examined how engaging with AICCs shaped wellbeing and how users perceived these experiences. Findings revealed mixed effects -- greater affective and grief expression, readability, and interpersonal focus. We offer design implications for AI companions that scaffold healthy boundaries, support mindful engagement, support disclosure without dependency, and surface relationship stages.
arXiv Detail & Related papers (2025-09-26T15:47:37Z)
- Psychological and behavioural responses in human-agent vs. human-human interactions: a systematic review and meta-analysis [2.3284555894215075]
Interactive intelligent agents are being integrated across society. Despite achieving human-like capabilities, humans' responses to these agents remain poorly understood. We conducted a first systematic synthesis comparing a range of psychological and behavioural responses in matched human-agent vs. human-human dyadic interactions.
arXiv Detail & Related papers (2025-09-25T20:29:36Z)
- Towards a Psychoanalytic Perspective on VLM Behaviour: A First-step Interpretation with Intriguing Observations [31.682344633194383]
Hallucination is a long-standing problem that has been actively investigated in Vision-Language Models (VLMs). Existing research commonly attributes hallucinations to technical limitations or sycophancy bias, where the latter means the models tend to generate incorrect answers to align with user expectations. We introduce a psychological taxonomy categorizing hallucination behaviours, including sycophancy, logical inconsistency, and a newly identified VLM behaviour: authority bias.
arXiv Detail & Related papers (2025-07-03T19:03:16Z)
- Exploring the Impact of Personality Traits on Conversational Recommender Systems: A Simulation with Large Language Models [70.180385882195]
This paper introduces a personality-aware user simulation for Conversational Recommender Systems (CRSs). The user agent induces customizable personality traits and preferences, while the system agent possesses the persuasion capability to simulate realistic interaction in CRSs. Experimental results demonstrate that state-of-the-art LLMs can effectively generate diverse user responses aligned with specified personality traits.
arXiv Detail & Related papers (2025-04-09T13:21:17Z)
- AI-Driven Feedback Loops in Digital Technologies: Psychological Impacts on User Behaviour and Well-Being [0.0]
This study aims to investigate the positive and negative psychological consequences of feedback mechanisms on users' behaviour and well-being.
Data-driven feedback loops deliver not only motivational benefits but also psychological challenges.
To mitigate these risks, users should establish boundaries regarding their use of technology to prevent burnout and addiction.
arXiv Detail & Related papers (2024-10-30T17:11:30Z)
- Quantifying AI Psychology: A Psychometrics Benchmark for Large Language Models [57.518784855080334]
Large Language Models (LLMs) have demonstrated exceptional task-solving capabilities, increasingly adopting roles akin to human-like assistants.
This paper presents a framework for investigating the psychological dimensions of LLMs, including psychological identification, assessment dataset curation, and assessment with results validation.
We introduce a comprehensive psychometrics benchmark for LLMs that covers six psychological dimensions: personality, values, emotion, theory of mind, motivation, and intelligence.
arXiv Detail & Related papers (2024-06-25T16:09:08Z)
- "I Like Sunnie More Than I Expected!": Exploring User Expectation and Perception of an Anthropomorphic LLM-based Conversational Agent for Well-Being Support [24.016765989800955]
This study compared users' initial expectations against their post-interaction perceptions of two large language models (LLMs).
Results showed that user engagement was high with both systems, and both systems exceeded users' expectations along the utility dimension.
These findings suggest that anthropomorphic conversational interaction design may be particularly effective in fostering warmth in mental health support contexts.
arXiv Detail & Related papers (2024-05-22T16:30:24Z)
- VCounselor: A Psychological Intervention Chat Agent Based on a Knowledge-Enhanced Large Language Model [1.0055768887247036]
The main objective of this study is to improve the effectiveness and credibility of the large language model in psychological intervention.
We achieved this goal by proposing a new affective interaction structure and knowledge-enhancement structure.
The comparison results indicated that the affective interaction structure and knowledge-enhancement structure of VCounselor significantly improved the effectiveness and credibility of the psychological intervention.
arXiv Detail & Related papers (2024-03-20T12:46:02Z)
- PsychoGAT: A Novel Psychological Measurement Paradigm through Interactive Fiction Games with LLM Agents [68.50571379012621]
Psychological measurement is essential for mental health, self-understanding, and personal development.
PsychoGAT (Psychological Game AgenTs) achieves statistically significant excellence in psychometric metrics such as reliability, convergent validity, and discriminant validity.
arXiv Detail & Related papers (2024-02-19T18:00:30Z)
- Affective Conversational Agents: Understanding Expectations and Personal Influences [17.059654991560105]
We surveyed 745 respondents to understand the expectations and preferences regarding affective skills in various applications.
Our results indicate a preference for scenarios that involve human interaction, emotional support, and creative tasks.
Overall, the desired affective skills in AI agents depend largely on the application's context and nature.
arXiv Detail & Related papers (2023-10-19T04:33:18Z)
- Anthropomorphization of AI: Opportunities and Risks [24.137106159123892]
Anthropomorphization is the tendency to attribute human-like traits to non-human entities.
With the widespread adoption of AI systems, users' tendency to anthropomorphize them increases significantly.
We study the objective legal implications, as analyzed through the lens of the recent blueprint of AI bill of rights.
arXiv Detail & Related papers (2023-05-24T06:39:45Z)
- Affective Idiosyncratic Responses to Music [63.969810774018775]
We develop methods to measure affective responses to music from over 403M listener comments on a Chinese social music platform.
We test for musical, lyrical, contextual, demographic, and mental health effects that drive listener affective responses.
arXiv Detail & Related papers (2022-10-17T19:57:46Z)
- AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z)
- Towards Persona-Based Empathetic Conversational Models [58.65492299237112]
Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains.
In Psychology, persona has been shown to be highly correlated to personality, which in turn influences empathy.
We propose a new task towards persona-based empathetic conversations and present the first empirical study on the impact of persona on empathetic responding.
arXiv Detail & Related papers (2020-04-26T08:51:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.