When Avatars Have Personality: Effects on Engagement and Communication in Immersive Medical Training
- URL: http://arxiv.org/abs/2509.14132v1
- Date: Wed, 17 Sep 2025 16:13:37 GMT
- Title: When Avatars Have Personality: Effects on Engagement and Communication in Immersive Medical Training
- Authors: Julia S. Dollis, Iago A. Brito, Fernanda B. Färber, Pedro S. F. B. Ribeiro, Rafael T. Sousa, Arlindo R. Galvão Filho
- Abstract summary: This paper introduces a framework that integrates large language models (LLMs) into immersive VR to create medically coherent virtual patients with distinct, consistent personalities. Results demonstrate that the approach is not only feasible but is also perceived by physicians as a highly rewarding and effective training enhancement.
- Score: 35.4537858155201
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While virtual reality (VR) excels at simulating physical environments, its effectiveness for training complex interpersonal skills is limited by a lack of psychologically plausible virtual humans. This is a critical gap in high-stakes domains like medical education, where communication is a core competency. This paper introduces a framework that integrates large language models (LLMs) into immersive VR to create medically coherent virtual patients with distinct, consistent personalities, built on a modular architecture that decouples personality from clinical data. We evaluated our system in a mixed-method, within-subjects study with licensed physicians who engaged in simulated consultations. Results demonstrate that the approach is not only feasible but is also perceived by physicians as a highly rewarding and effective training enhancement. Furthermore, our analysis uncovers critical design principles, including a "realism-verbosity paradox" where less communicative agents can seem more artificial, and the need for challenges to be perceived as authentic to be instructive. This work provides a validated framework and key insights for developing the next generation of socially intelligent VR training environments.
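The abstract describes a modular architecture that decouples personality from clinical data. A minimal sketch of what such a decoupled design could look like is shown below; all class names, fields, and the prompt wording are illustrative assumptions, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class PersonalityProfile:
    """Personality layer, independent of any clinical content."""
    name: str
    traits: list[str]      # e.g. trait descriptors like "anxious", "talkative"
    speech_style: str      # verbosity and tone of the virtual patient

@dataclass
class ClinicalProfile:
    """Clinical layer: the case data the patient must stay consistent with."""
    chief_complaint: str
    history: list[str]

def build_system_prompt(personality: PersonalityProfile,
                        clinical: ClinicalProfile) -> str:
    """Compose the two independent modules into one LLM system prompt."""
    trait_text = ", ".join(personality.traits)
    history_text = "; ".join(clinical.history)
    return (
        f"You are {personality.name}, a patient in a medical consultation.\n"
        f"Personality: {trait_text}. Speech style: {personality.speech_style}.\n"
        f"Chief complaint: {clinical.chief_complaint}.\n"
        f"Relevant history: {history_text}.\n"
        "Stay medically consistent with the profile above and never reveal "
        "information the patient would not plausibly know."
    )

# The same clinical case can be paired with different personalities:
anxious = PersonalityProfile("Ms. Rivera", ["anxious", "talkative"],
                             "rambling, seeks reassurance")
case = ClinicalProfile("chest pain on exertion",
                       ["hypertension", "smoker, 20 pack-years"])
prompt = build_system_prompt(anxious, case)
```

Because the two profiles are composed only at prompt-construction time, the same clinical case can be reused with many personalities (and vice versa), which is the practical benefit of the decoupling the paper describes.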
Related papers
- Artificial Intelligence as a Training Tool in Clinical Psychology: A Comparison of Text-Based and Avatar Simulations [0.0]
This study examined postgraduate clinical psychology students' perceptions of two AI-based simulations. Twenty-four students completed two brief cognitive-behavioural role-plays. Both AI tools were evaluated positively across dimensions.
arXiv Detail & Related papers (2025-11-21T10:09:20Z)
- A Voice-Enabled Virtual Patient System for Interactive Training in Standardized Clinical Assessment [0.0]
We introduce a voice-enabled virtual patient simulation system powered by a large language model (LLM). This study describes the system's development and validates its ability to generate virtual patients who adhere to pre-defined clinical profiles. Our findings suggest that LLM-powered virtual patient simulations are a viable and scalable tool for training clinicians.
arXiv Detail & Related papers (2025-11-01T21:18:08Z)
- CLiVR: Conversational Learning System in Virtual Reality with AI-Powered Patients [0.0]
CLiVR is a Conversational Learning system in Virtual Reality that integrates large language models, speech processing, and 3D avatars. Developed in Unity and deployed on the Meta Quest 3 platform, CLiVR enables trainees to engage in natural dialogue with virtual patients.
arXiv Detail & Related papers (2025-10-21T19:19:55Z)
- MetAdv: A Unified and Interactive Adversarial Testing Platform for Autonomous Driving [85.04826012938642]
MetAdv is a novel adversarial testing platform that enables realistic, dynamic, and interactive evaluation. It supports flexible 3D vehicle modeling and seamless transitions between simulated and physical environments. It enables real-time capture of physiological signals and behavioral feedback from drivers.
arXiv Detail & Related papers (2025-08-04T03:07:54Z)
- Neural Brain: A Neuroscience-inspired Framework for Embodied Agents [58.58177409853298]
Current AI systems, such as large language models, remain disembodied, unable to physically engage with the world. At the core of this challenge lies the concept of the Neural Brain, a central intelligence system designed to drive embodied agents with human-like adaptability. This paper introduces a unified framework for the Neural Brain of embodied agents, addressing two fundamental challenges.
arXiv Detail & Related papers (2025-05-12T15:05:34Z)
- Modeling Challenging Patient Interactions: LLMs for Medical Communication Training [39.67477471073807]
This study proposes the use of Large Language Models (LLMs) to simulate authentic patient communication styles. We developed virtual patients (VPs) that embody nuanced emotional and conversational traits. Medical professionals evaluated these VPs, rating their authenticity on a 5-point Likert scale (accuser: 3.8 ± 1.0; rationalizer: 3.7 ± 0.8) and correctly identifying their styles.
arXiv Detail & Related papers (2025-03-28T09:04:10Z)
- MedSimAI: Simulation and Formative Feedback Generation to Enhance Deliberate Practice in Medical Education [0.5068418799871723]
MedSimAI is an AI-powered simulation platform that enables deliberate practice, self-regulated learning, and automated assessment through interactive patient encounters. In a pilot study with 104 first-year medical students, we examined engagement, conversation patterns, and user perceptions. Students found MedSimAI beneficial for repeated, realistic patient-history practice.
arXiv Detail & Related papers (2025-03-01T00:51:55Z)
- Integrating Personality into Digital Humans: A Review of LLM-Driven Approaches for Virtual Reality [37.69303106863453]
The integration of large language models (LLMs) into virtual reality (VR) environments has opened new pathways for creating more immersive and interactive digital humans. This paper provides a comprehensive review of methods for enabling digital humans to adopt nuanced personality traits, exploring approaches such as zero-shot, few-shot, and fine-tuning. It highlights the challenges of integrating LLM-driven personality traits into VR, including computational demands, latency issues, and the lack of standardized evaluation frameworks for multimodal interactions.
arXiv Detail & Related papers (2025-02-22T01:33:05Z)
- Synthetic Patients: Simulating Difficult Conversations with Multimodal Generative AI for Medical Education [0.0]
Effective patient-centered communication is a core competency for physicians.
Both seasoned providers and medical trainees report decreased confidence in leading conversations on sensitive topics.
We present a novel educational tool designed to facilitate interactive, real-time simulations of difficult conversations in a video-based format.
arXiv Detail & Related papers (2024-05-30T11:02:08Z)
- On the Emergence of Symmetrical Reality [51.21203247240322]
We introduce the symmetrical reality framework, which offers a unified representation encompassing various forms of physical-virtual amalgamations.
We propose an instance of an AI-driven active assistance service that illustrates the potential applications of symmetrical reality.
arXiv Detail & Related papers (2024-01-26T16:09:39Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.