Designing and Evaluating an AI-driven Immersive Multidisciplinary Simulation (AIMS) for Interprofessional Education
- URL: http://arxiv.org/abs/2510.08891v1
- Date: Fri, 10 Oct 2025 01:09:18 GMT
- Title: Designing and Evaluating an AI-driven Immersive Multidisciplinary Simulation (AIMS) for Interprofessional Education
- Authors: Ruijie Wang, Jie Lu, Bo Pei, Evonne Jones, Jamey Brinson, Timothy Brown,
- Abstract summary: AIMS is a virtual simulation that integrates a large language model (Gemini-2.5-Flash), a Unity-based virtual environment engine, and a character creation pipeline. AIMS was designed to enhance collaborative clinical reasoning and health promotion competencies among students from pharmacy, medicine, nursing, and social work.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interprofessional education has long relied on case studies and the use of standardized patients to support teamwork, communication, and related collaborative competencies among healthcare professionals. However, traditional approaches are often limited by cost, scalability constraints, and an inability to mimic the dynamic complexity of real-world clinical scenarios. To address these challenges, we designed and developed AIMS (AI-Enhanced Immersive Multidisciplinary Simulations), a virtual simulation that integrates a large language model (Gemini-2.5-Flash), a Unity-based virtual environment engine, and a character creation pipeline to support synchronized, multimodal interactions between the user and the virtual patient. AIMS was designed to enhance collaborative clinical reasoning and health promotion competencies among students from pharmacy, medicine, nursing, and social work. A formal usability testing session was conducted in which participants assumed professional roles on a healthcare team and engaged in a mix of scripted and unscripted conversations. Participants explored the patient's symptoms, social context, and care needs. Usability issues were identified (e.g., audio routing, response latency) and used to guide subsequent refinements. Overall, findings suggest that AIMS supports realistic, profession-specific, and contextually appropriate conversations. We discuss both the technical and pedagogical innovations of AIMS and conclude with future directions.
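The core interaction pattern the abstract describes (an LLM-backed virtual patient, queried in turn by users in different professional roles, with persona and history threaded through each call) can be sketched generically. Everything below is illustrative, not the AIMS implementation: the `PatientPersona`/`Encounter` names and the `echo_llm` stand-in are assumptions, and a real deployment would replace `echo_llm` with a call to Gemini-2.5-Flash and feed the reply to the Unity-rendered character.

```python
# Hypothetical sketch of an LLM-driven virtual-patient turn loop.
# Names and structure are illustrative, not the AIMS codebase.
from dataclasses import dataclass, field

@dataclass
class PatientPersona:
    name: str
    symptoms: list
    social_context: str

    def system_prompt(self) -> str:
        # The persona is pinned in a system prompt so replies stay in character.
        return (
            f"You are {self.name}, a virtual patient. "
            f"Symptoms: {', '.join(self.symptoms)}. "
            f"Social context: {self.social_context}. "
            "Stay in character and answer only from this profile."
        )

@dataclass
class Encounter:
    persona: PatientPersona
    history: list = field(default_factory=list)

    def turn(self, role: str, utterance: str, llm) -> str:
        # Tagging each turn with the speaker's profession lets the model give
        # profession-specific answers (pharmacy vs. nursing vs. social work).
        self.history.append({"speaker": role, "text": utterance})
        reply = llm(self.persona.system_prompt(), self.history)
        self.history.append({"speaker": "patient", "text": reply})
        return reply

def echo_llm(system_prompt: str, history: list) -> str:
    # Stand-in for a real model call; echoes the last user turn.
    return f"(patient) responding to: {history[-1]['text']}"
```

The shared `history` is what makes the encounter multidisciplinary: each team member's questions are visible context for the next reply, so the virtual patient can stay consistent across pharmacy, medicine, nursing, and social work turns.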
Related papers
- Integrating Virtual Reality and Large Language Models for Team-Based Non-Technical Skills Training and Evaluation in the Operating Room [0.5943985865141843]
We introduce the Virtual Operating Room Team Experience (VOR), a multi-user virtual reality (VR) platform that integrates immersive team simulation with behavioral analytics. VOR provides a scalable, privacy-compliant framework for objective assessment and automated, data-informed debriefing. Twelve surgical professionals completed pilot sessions at the 2024S conference, rating VOR as intuitive, immersive, and valuable for developing teamwork and communication.
arXiv Detail & Related papers (2026-01-19T21:34:00Z) - An Agentic AI Framework for Training General Practitioner Student Skills [1.8865968025608468]
We introduce an agentic framework for training general practitioner student skills that unifies evidence-based vignette generation, controlled persona-driven patient dialogue, and standards-based assessment and feedback. Participants reported realistic and vignette-faithful dialogue, appropriate difficulty calibration, a stable personality signal, and highly useful example-rich feedback. These results support agentic separation of scenario control, interaction control, and standards-based assessment as a practical pattern for building dependable and pedagogically valuable training tools.
arXiv Detail & Related papers (2025-12-20T17:26:39Z) - A Knowledge-driven Adaptive Collaboration of LLMs for Enhancing Medical Decision-making [49.048767633316764]
KAMAC is a Knowledge-driven Adaptive Multi-Agent Collaboration framework. It enables agents to dynamically form and expand expert teams based on the evolving diagnostic context. Experiments on two real-world medical benchmarks demonstrate that KAMAC significantly outperforms both single-agent and advanced multi-agent methods.
arXiv Detail & Related papers (2025-09-18T14:33:36Z) - When Avatars Have Personality: Effects on Engagement and Communication in Immersive Medical Training [35.4537858155201]
This paper introduces a framework that integrates large language models (LLMs) into immersive VR to create medically coherent virtual patients with distinct, consistent personalities. Results demonstrate that the approach is not only feasible but is also perceived by physicians as a highly rewarding and effective training enhancement.
arXiv Detail & Related papers (2025-09-17T16:13:37Z) - Designing AI Tools for Clinical Care Teams to Support Serious Illness Conversations with Older Adults in the Emergency Department [53.52248484568777]
We conducted interviews with two domain experts and nine ED clinical care team members. We characterized a four-phase serious illness conversation (SIC) workflow (identification, preparation, conduction, documentation) and identified key needs and challenges at each stage. We present design guidelines for AI tools supporting SICs that fit within existing clinical practices. The work contributes an empirical understanding of ED-based serious illness conversations and provides design considerations for AI in high-stakes clinical environments.
arXiv Detail & Related papers (2025-05-30T21:15:57Z) - AI Standardized Patient Improves Human Conversations in Advanced Cancer Care [1.5631689124757961]
SOPHIE is an AI-powered standardized patient simulation and automated feedback system. In a randomized controlled study with healthcare students and professionals, SOPHIE users demonstrated significant improvement across three critical SIC domains: Empathize, Be Explicit, and Empower.
arXiv Detail & Related papers (2025-05-05T14:44:17Z) - MedSimAI: Simulation and Formative Feedback Generation to Enhance Deliberate Practice in Medical Education [0.5068418799871723]
MedSimAI is an AI-powered simulation platform that enables deliberate practice, self-regulated learning, and automated assessment through interactive patient encounters. In a pilot study with 104 first-year medical students, we examined engagement, conversation patterns, and user perceptions. Students found MedSimAI beneficial for repeated, realistic patient-history practice.
arXiv Detail & Related papers (2025-03-01T00:51:55Z) - Towards Next-Generation Medical Agent: How o1 is Reshaping Decision-Making in Medical Scenarios [46.729092855387165]
We study the choice of the backbone LLM for medical AI agents, which is the foundation for the agent's overall reasoning and action generation. Our findings demonstrate o1's ability to enhance diagnostic accuracy and consistency, paving the way for smarter, more responsive AI tools.
arXiv Detail & Related papers (2024-11-16T18:19:53Z) - Demystifying Large Language Models for Medicine: A Primer [50.83806796466396]
Large language models (LLMs) represent a transformative class of AI tools capable of revolutionizing various aspects of healthcare.
This tutorial aims to equip healthcare professionals with the tools necessary to effectively integrate LLMs into clinical practice.
arXiv Detail & Related papers (2024-10-24T15:41:56Z) - Synthetic Patients: Simulating Difficult Conversations with Multimodal Generative AI for Medical Education [0.0]
Effective patient-centered communication is a core competency for physicians.
Both seasoned providers and medical trainees report decreased confidence in leading conversations on sensitive topics.
We present a novel educational tool designed to facilitate interactive, real-time simulations of difficult conversations in a video-based format.
arXiv Detail & Related papers (2024-05-30T11:02:08Z) - Dr-LLaVA: Visual Instruction Tuning with Symbolic Clinical Grounding [53.629132242389716]
Vision-Language Models (VLM) can support clinicians by analyzing medical images and engaging in natural language interactions.
VLMs often exhibit "hallucinogenic" behavior, generating textual outputs not grounded in contextual multimodal information.
We propose a new alignment algorithm that uses symbolic representations of clinical reasoning to ground VLMs in medical knowledge.
arXiv Detail & Related papers (2024-05-29T23:19:28Z) - Chain-of-Interaction: Enhancing Large Language Models for Psychiatric Behavior Understanding by Dyadic Contexts [4.403408362362806]
We introduce the Chain-of-Interaction prompting method to contextualize large language models for psychiatric decision support by the dyadic interactions.
This approach enables large language models to leverage the coding scheme, patient state, and domain knowledge for patient behavioral coding.
arXiv Detail & Related papers (2024-03-20T17:47:49Z) - Towards Ubiquitous Semantic Metaverse: Challenges, Approaches, and Opportunities [68.03971716740823]
In recent years, ubiquitous semantic Metaverse has been studied to revolutionize immersive cyber-virtual experiences for augmented reality (AR) and virtual reality (VR) users.
This survey focuses on the representation and intelligence for the four fundamental system components in ubiquitous Metaverse.
arXiv Detail & Related papers (2023-07-13T11:14:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.