Adaptive-VP: A Framework for LLM-Based Virtual Patients that Adapts to Trainees' Dialogue to Facilitate Nurse Communication Training
- URL: http://arxiv.org/abs/2506.00386v1
- Date: Sat, 31 May 2025 04:34:55 GMT
- Title: Adaptive-VP: A Framework for LLM-Based Virtual Patients that Adapts to Trainees' Dialogue to Facilitate Nurse Communication Training
- Authors: Keyeun Lee, Seolhee Lee, Esther Hehsun Kim, Yena Ko, Jinsu Eun, Dahee Kim, Hyewon Cho, Haiyi Zhu, Robert E. Kraut, Eunyoung Suh, Eun-mee Kim, Hajin Lim,
- Abstract summary: Adaptive-VP is a dialogue generation framework that dynamically adapts VP behavior based on trainee input. It features a pipeline for constructing clinically grounded yet flexible VP scenarios and a modular system for assessing trainee communication and adjusting VP responses in real time. A corpus from practicing nurses showed that our communication skill evaluation mechanism reflected real-world proficiency levels.
- Score: 12.26485086816235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effective communication training is essential to preparing nurses for high-quality patient care. While standardized patient (SP) simulations provide valuable experiential learning, they are often costly and inflexible. Virtual patient (VP) systems offer a scalable alternative, but most fail to adapt to the varying communication skills of trainees. In particular, when trainees respond ineffectively, VPs should escalate in hostility or become uncooperative--yet this level of adaptive interaction remains largely unsupported. To address this gap, we introduce Adaptive-VP, a VP dialogue generation framework that leverages large language models (LLMs) to dynamically adapt VP behavior based on trainee input. The framework features a pipeline for constructing clinically grounded yet flexible VP scenarios and a modular system for assessing trainee communication and adjusting VP responses in real time, while ensuring learner safety. We validated Adaptive-VP by simulating challenging patient conversations. Automated evaluation using a corpus from practicing nurses showed that our communication skill evaluation mechanism reflected real-world proficiency levels. Expert nurses further confirmed that Adaptive-VP produced more natural and realistic interactions than existing approaches, demonstrating its potential as a scalable and effective tool for nursing communication training.
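The assess-then-adapt loop the abstract describes (evaluate the trainee's utterance, then adjust the virtual patient's demeanor before generating the next reply) could be sketched as follows. This is a minimal illustration only: all function names, thresholds, and the keyword-based scoring heuristic are hypothetical stand-ins, not the authors' implementation.

```python
# Hypothetical sketch of the adapt-then-respond loop from the abstract:
# score the trainee's utterance, map the score to a VP demeanor, and
# build an LLM prompt. Names, thresholds, and the cue heuristic are
# illustrative assumptions, not the paper's actual modules.

EMPATHY_CUES = ("i understand", "that sounds", "tell me more", "i'm sorry")

def assess_communication(utterance: str) -> float:
    """Toy proxy for the skill-evaluation module: fraction of
    empathy cues present in the trainee's utterance."""
    text = utterance.lower()
    hits = sum(cue in text for cue in EMPATHY_CUES)
    return hits / len(EMPATHY_CUES)

def vp_demeanor(skill: float) -> str:
    """Map assessed skill to the VP's demeanor, escalating toward
    hostility when the trainee responds ineffectively."""
    if skill >= 0.5:
        return "cooperative"
    if skill >= 0.25:
        return "guarded"
    return "hostile"

def build_vp_prompt(scenario: str, utterance: str) -> str:
    """Compose an LLM prompt conditioning the VP's next reply on the
    scenario and the demeanor chosen from the trainee's input."""
    demeanor = vp_demeanor(assess_communication(utterance))
    return (
        f"You are a virtual patient in this scenario: {scenario}\n"
        f"Respond to the nurse in a {demeanor} manner.\n"
        f"Nurse: {utterance}"
    )

print(vp_demeanor(assess_communication("I understand, tell me more.")))
```

In the real framework this mapping would be driven by an LLM-based evaluator rather than keyword matching, but the control flow is the same: the trainee's communication quality, not a fixed script, determines the patient's next behavior.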
Related papers
- Integrating Virtual Reality and Large Language Models for Team-Based Non-Technical Skills Training and Evaluation in the Operating Room [0.5943985865141843]
We introduce the Virtual Operating Room Team Experience (VOR), a multi-user virtual reality (VR) platform that integrates immersive team simulation with behavioral analytics. VOR provides a scalable, privacy-compliant framework for objective assessment and automated, data-informed debriefing. Twelve surgical professionals completed pilot sessions at the 2024S conference, rating VOR as intuitive, immersive, and valuable for developing teamwork and communication.
arXiv Detail & Related papers (2026-01-19T21:34:00Z) - An Agentic AI Framework for Training General Practitioner Student Skills [1.8865968025608468]
We introduce an agentic framework for training general practitioner student skills that unifies evidence-based vignette generation, controlled persona-driven patient dialogue, and standards-based assessment and feedback. Participants reported realistic and vignette-faithful dialogue, appropriate difficulty calibration, a stable personality signal, and highly useful example-rich feedback. These results support agentic separation of scenario control, interaction control, and standards-based assessment as a practical pattern for building dependable and pedagogically valuable training tools.
arXiv Detail & Related papers (2025-12-20T17:26:39Z) - Emulating Human-like Adaptive Vision for Efficient and Flexible Machine Visual Perception [93.20637973889434]
We introduce AdaptiveNN, a general framework aiming to drive a paradigm shift from 'passive' to 'active' vision models. AdaptiveNN formulates visual perception as a coarse-to-fine sequential decision-making process. We assess AdaptiveNN on 17 benchmarks spanning 9 tasks, including large-scale visual recognition, fine-grained discrimination, visual search, and processing images from real driving and medical scenarios.
arXiv Detail & Related papers (2025-09-18T18:25:43Z) - PAL: Designing Conversational Agents as Scalable, Cooperative Patient Simulators for Palliative-Care Training [7.181732620510345]
We present PAL (Palliative Assisted Learning-bot), a conversational system that simulates emotionally nuanced patient interactions. PAL supports text and voice modalities and is designed to scaffold clinical skill-building through repeated, low-cost practice. We contribute: (1) empirical evidence that large language models can support palliative communication training; (2) design insights for modality-aware, emotionally sensitive simulation tools; and (3) implications for systems that support emotional labor, cooperative learning, and AI-augmented training in high-stakes care settings.
arXiv Detail & Related papers (2025-07-02T20:09:52Z) - GEMeX-ThinkVG: Towards Thinking with Visual Grounding in Medical VQA via Reinforcement Learning [50.94508930739623]
Medical visual question answering aims to support clinical decision-making by enabling models to answer natural language questions based on medical images. Current methods still suffer from limited answer reliability and poor interpretability, impairing the ability of clinicians and patients to understand and trust model-generated answers. This work first proposes a Thinking with Visual Grounding dataset wherein the answer generation is decomposed into intermediate reasoning steps. We introduce a novel verifiable reward mechanism for reinforcement learning to guide post-training, improving the alignment between the model's reasoning process and its final answer.
arXiv Detail & Related papers (2025-06-22T08:09:58Z) - How Managers Perceive AI-Assisted Conversational Training for Workplace Communication [6.329725088805596]
This study investigates how managers envision the role of AI in helping them improve their communication skills. We designed a conversational role-play system, CommCoach, as a functional probe to understand how managers anticipate using AI to practice their communication skills.
arXiv Detail & Related papers (2025-05-20T14:51:27Z) - TRUST: An LLM-Based Dialogue System for Trauma Understanding and Structured Assessments [8.618945530676614]
This study aims to bridge the gap in mental healthcare accessibility by developing an LLM-powered dialogue system that replicates clinician behavior. We introduce TRUST, a framework of cooperative LLM modules capable of conducting formal diagnostic interviews and assessments for PTSD. We develop a patient simulation approach based on real-life interview transcripts to replace time-consuming and costly manual testing by clinicians.
arXiv Detail & Related papers (2025-04-30T17:58:06Z) - Automating Feedback Analysis in Surgical Training: Detection, Categorization, and Assessment [65.70317151363204]
This work introduces the first framework for reconstructing surgical dialogue from unstructured real-world recordings. In surgical training, the formative verbal feedback that trainers provide to trainees during live surgeries is crucial for ensuring safety, correcting behavior immediately, and facilitating long-term skill acquisition. Our framework integrates voice activity detection, speaker diarization, and automated speech recognition, with a novel enhancement that removes hallucinations.
arXiv Detail & Related papers (2024-12-01T10:35:12Z) - Adversarial Prompt Distillation for Vision-Language Models [63.24270920122456]
Adversarial Prompt Tuning (APT) applies adversarial training during the process of prompt tuning. APD is a bimodal knowledge distillation framework that enhances APT by integrating it with multi-modal knowledge transfer. Extensive experiments on multiple benchmark datasets demonstrate the superiority of our APD method over the current state-of-the-art APT methods.
arXiv Detail & Related papers (2024-11-22T03:02:13Z) - Multi-Modal Self-Supervised Learning for Surgical Feedback Effectiveness Assessment [66.6041949490137]
We propose a method that integrates information from transcribed verbal feedback and corresponding surgical video to predict feedback effectiveness.
Our findings show that both transcribed feedback and surgical video are individually predictive of trainee behavior changes.
Our results demonstrate the potential of multi-modal learning to advance the automated assessment of surgical feedback.
arXiv Detail & Related papers (2024-11-17T00:13:00Z) - Dr-LLaVA: Visual Instruction Tuning with Symbolic Clinical Grounding [53.629132242389716]
Vision-Language Models (VLM) can support clinicians by analyzing medical images and engaging in natural language interactions.
VLMs often exhibit "hallucinogenic" behavior, generating textual outputs not grounded in contextual multimodal information.
We propose a new alignment algorithm that uses symbolic representations of clinical reasoning to ground VLMs in medical knowledge.
arXiv Detail & Related papers (2024-05-29T23:19:28Z) - Chain-of-Interaction: Enhancing Large Language Models for Psychiatric Behavior Understanding by Dyadic Contexts [4.403408362362806]
We introduce the Chain-of-Interaction prompting method to contextualize large language models for psychiatric decision support by the dyadic interactions.
This approach enables large language models to leverage the coding scheme, patient state, and domain knowledge for patient behavioral coding.
arXiv Detail & Related papers (2024-03-20T17:47:49Z) - Validating a virtual human and automated feedback system for training doctor-patient communication skills [3.0354760313198796]
We present the development and validation of a scalable, easily accessible, digital tool known as the Standardized Online Patient for Health Interaction Education (SOPHIE)
We found that participants who trained with SOPHIE performed significantly better than the control group in overall communication, aggregate scores, empowering the patient, and showing empathy.
One day, we hope that SOPHIE will help make communication training resources more accessible by providing a scalable option to supplement existing resources.
arXiv Detail & Related papers (2023-06-27T05:23:08Z) - Automated Fidelity Assessment for Strategy Training in Inpatient
Rehabilitation using Natural Language Processing [53.096237570992294]
Strategy training is a rehabilitation approach that teaches skills to reduce disability among those with cognitive impairments following a stroke.
Standardized fidelity assessment is used to measure adherence to treatment principles.
We developed a rule-based NLP algorithm, a long short-term memory (LSTM) model, and a bidirectional encoder representations from transformers (BERT) model for this task.
arXiv Detail & Related papers (2022-09-14T15:33:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.