CLiVR: Conversational Learning System in Virtual Reality with AI-Powered Patients
- URL: http://arxiv.org/abs/2510.19031v1
- Date: Tue, 21 Oct 2025 19:19:55 GMT
- Title: CLiVR: Conversational Learning System in Virtual Reality with AI-Powered Patients
- Authors: Akilan Amithasagaran, Sagnik Dakshit, Bhavani Suryadevara, Lindsey Stockton
- Abstract summary: CLiVR is a Conversational Learning system in Virtual Reality that integrates large language models, speech processing, and 3D avatars. Developed in Unity and deployed on the Meta Quest 3 platform, CLiVR enables trainees to engage in natural dialogue with virtual patients.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simulations constitute a fundamental component of medical and nursing education and traditionally employ standardized patients (SP) and high-fidelity manikins to develop clinical reasoning and communication skills. However, these methods require substantial resources, limiting accessibility and scalability. In this study, we introduce CLiVR, a Conversational Learning system in Virtual Reality that integrates large language models (LLMs), speech processing, and 3D avatars to simulate realistic doctor-patient interactions. Developed in Unity and deployed on the Meta Quest 3 platform, CLiVR enables trainees to engage in natural dialogue with virtual patients. Each simulation is dynamically generated from a syndrome-symptom database and enhanced with sentiment analysis to provide feedback on communication tone. Through an expert user study involving medical school faculty (n=13), we assessed usability, realism, and perceived educational impact. Results demonstrated strong user acceptance, high confidence in educational potential, and valuable feedback for improvement. CLiVR offers a scalable, immersive supplement to SP-based training.
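The abstract describes a pipeline in which each simulation is generated from a syndrome-symptom database and trainee utterances are scored by sentiment analysis for tone feedback. The following is a minimal Python sketch of that idea, not CLiVR's actual implementation: the `SYNDROME_DB` contents, the prompt wording, and the lexicon-based tone scorer are all hypothetical stand-ins, since the paper's components are not public.

```python
import random

# Hypothetical syndrome-symptom database in the spirit of the paper;
# the real CLiVR database and prompt format are not published.
SYNDROME_DB = {
    "influenza": ["fever", "chills", "dry cough", "muscle aches"],
    "migraine": ["throbbing headache", "nausea", "light sensitivity"],
}

def build_patient_prompt(syndrome: str, rng: random.Random) -> str:
    """Sample symptoms for a syndrome and build a system prompt that
    asks an LLM to role-play the virtual patient."""
    pool = SYNDROME_DB[syndrome]
    symptoms = rng.sample(pool, k=min(3, len(pool)))
    return (
        "You are a patient in a training simulation. "
        f"You are experiencing: {', '.join(symptoms)}. "
        "Answer the trainee's questions in character; do not reveal the diagnosis."
    )

# Toy lexicon standing in for the sentiment-analysis component that
# scores communication tone; a real system would use a trained model.
POSITIVE = {"thank", "please", "understand", "help", "sorry"}
NEGATIVE = {"hurry", "stupid", "whatever", "just"}

def tone_score(utterance: str) -> int:
    """Crude tone score: positive-word count minus negative-word count."""
    words = [w.strip(".,!?") for w in utterance.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```

A trainer loop would feed `build_patient_prompt(...)` to the LLM backing the avatar and report `tone_score(...)` on each trainee utterance as post-session feedback.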
Related papers
- An Agentic AI Framework for Training General Practitioner Student Skills [1.8865968025608468]
We introduce an agentic framework for training general practitioner student skills that unifies evidence-based vignette generation, controlled persona-driven patient dialogue, and standards-based assessment and feedback. Participants reported realistic and vignette-faithful dialogue, appropriate difficulty calibration, a stable personality signal, and highly useful example-rich feedback. These results support agentic separation of scenario control, interaction control, and standards-based assessment as a practical pattern for building dependable and pedagogically valuable training tools.
arXiv Detail & Related papers (2025-12-20T17:26:39Z)
- The Imperfect Learner: Incorporating Developmental Trajectories in Memory-based Student Simulation [55.722188569369656]
This paper introduces a novel framework for memory-based student simulation. It incorporates developmental trajectories through a hierarchical memory mechanism with structured knowledge representation. In practice, we implement a curriculum-aligned simulator grounded on the Next Generation Science Standards.
arXiv Detail & Related papers (2025-11-08T08:05:43Z)
- A Voice-Enabled Virtual Patient System for Interactive Training in Standardized Clinical Assessment [0.0]
We introduce a voice-enabled virtual patient simulation system powered by a large language model (LLM). This study describes the system's development and validates its ability to generate virtual patients who adhere to pre-defined clinical profiles. Our findings suggest that LLM-powered virtual patient simulations are a viable and scalable tool for training clinicians.
arXiv Detail & Related papers (2025-11-01T21:18:08Z)
- When Avatars Have Personality: Effects on Engagement and Communication in Immersive Medical Training [35.4537858155201]
This paper introduces a framework that integrates large language models (LLMs) into immersive VR to create medically coherent virtual patients with distinct, consistent personalities. Results demonstrate that the approach is not only feasible but is also perceived by physicians as a highly rewarding and effective training enhancement.
arXiv Detail & Related papers (2025-09-17T16:13:37Z)
- MetAdv: A Unified and Interactive Adversarial Testing Platform for Autonomous Driving [63.875372281596576]
MetAdv is a novel adversarial testing platform that enables realistic, dynamic, and interactive evaluation. It supports flexible 3D vehicle modeling and seamless transitions between simulated and physical environments. It enables real-time capture of physiological signals and behavioral feedback from drivers.
arXiv Detail & Related papers (2025-08-04T03:07:54Z)
- Seamless Interaction: Dyadic Audiovisual Motion Modeling and Large-Scale Dataset [113.25650486482762]
We introduce the Seamless Interaction dataset, a large-scale collection of over 4,000 hours of face-to-face interaction footage. This dataset enables the development of AI technologies that understand dyadic embodied dynamics. We develop a suite of models that utilize the dataset to generate dyadic motion gestures and facial expressions aligned with human speech.
arXiv Detail & Related papers (2025-06-27T18:09:49Z)
- Towards user-centered interactive medical image segmentation in VR with an assistive AI agent [0.5578116134031106]
We propose SAMIRA, a novel conversational AI agent for medical VR that assists users with localizing, segmenting, and visualizing 3D medical concepts. The system also supports true-to-scale 3D visualization of segmented pathology to enhance patient-specific anatomical understanding. A user study demonstrated a high usability score (SUS = 90.0 ± 9.0), low overall task load, and strong support for the proposed VR system's guidance.
arXiv Detail & Related papers (2025-05-12T03:47:05Z)
- MedSimAI: Simulation and Formative Feedback Generation to Enhance Deliberate Practice in Medical Education [0.5068418799871723]
MedSimAI is an AI-powered simulation platform that enables deliberate practice, self-regulated learning, and automated assessment through interactive patient encounters. In a pilot study with 104 first-year medical students, we examined engagement, conversation patterns, and user perceptions. Students found MedSimAI beneficial for repeated, realistic patient-history practice.
arXiv Detail & Related papers (2025-03-01T00:51:55Z)
- Dr-LLaVA: Visual Instruction Tuning with Symbolic Clinical Grounding [53.629132242389716]
Vision-Language Models (VLM) can support clinicians by analyzing medical images and engaging in natural language interactions.
VLMs often exhibit "hallucinogenic" behavior, generating textual outputs that are not grounded in the contextual multimodal information.
We propose a new alignment algorithm that uses symbolic representations of clinical reasoning to ground VLMs in medical knowledge.
arXiv Detail & Related papers (2024-05-29T23:19:28Z)
- Leveraging Large Language Model as Simulated Patients for Clinical Education [18.67200160979337]
The high cost of training and hiring qualified SPs limits students' access to this type of clinical training.
With the rapid development of Large Language Models (LLMs), their exceptional capabilities in conversational artificial intelligence and role-playing have been demonstrated.
We present an integrated model-agnostic framework called CureFun that harnesses the potential of LLMs in clinical medical education.
arXiv Detail & Related papers (2024-04-13T06:36:32Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- VIRT: Improving Representation-based Models for Text Matching through Virtual Interaction [50.986371459817256]
We propose a novel Virtual InteRacTion mechanism, termed VIRT, to enable full and deep interaction modeling in representation-based models.
VIRT asks representation-based encoders to conduct virtual interactions that mimic the behaviors of interaction-based models.
arXiv Detail & Related papers (2021-12-08T09:49:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.