Synthetic Patients: Simulating Difficult Conversations with Multimodal Generative AI for Medical Education
- URL: http://arxiv.org/abs/2405.19941v1
- Date: Thu, 30 May 2024 11:02:08 GMT
- Title: Synthetic Patients: Simulating Difficult Conversations with Multimodal Generative AI for Medical Education
- Authors: Simon N. Chu, Alex J. Goodell, et al.
- Abstract summary: Effective patient-centered communication is a core competency for physicians.
Both seasoned providers and medical trainees report decreased confidence in leading conversations on sensitive topics.
We present a novel educational tool designed to facilitate interactive, real-time simulations of difficult conversations in a video-based format.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract:
Problem: Effective patient-centered communication is a core competency for physicians. However, both seasoned providers and medical trainees report decreased confidence in leading conversations on sensitive topics such as goals of care or end-of-life discussions. The significant administrative burden and the resources required to provide dedicated training in leading difficult conversations have been a long-standing problem in medical education.
Approach: In this work, we present a novel educational tool designed to facilitate interactive, real-time simulations of difficult conversations in a video-based format through the use of multimodal generative artificial intelligence (AI). Leveraging recent advances in language modeling, computer vision, and generative audio, this tool creates realistic, interactive scenarios with avatars, or "synthetic patients." These synthetic patients interact with users throughout various stages of medical care using a custom-built video chat application, offering learners the chance to practice conversations with patients from diverse belief systems, personalities, and ethnic backgrounds.
Outcomes: While the development of this platform demanded substantial upfront investment in labor, it offers a highly realistic simulation experience with minimal financial investment. For medical trainees, this educational tool can be implemented within programs to simulate patient-provider conversations and can be incorporated into existing palliative care curricula to provide a scalable, high-fidelity simulation environment for mastering difficult conversations.
Next Steps: Future developments will explore enhancing the authenticity of these encounters by working with patients to incorporate their histories and personalities, as well as employing AI-generated evaluations to offer immediate, constructive feedback to learners post-simulation.
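The abstract describes a three-stage pipeline: a language model generates the patient's reply, generative audio voices it, and a vision model animates an avatar in a video chat. A minimal sketch of that architecture is below; every class, function, and name here is illustrative (an assumption for exposition), not the paper's actual implementation, and the generative stages are stubbed out.

```python
# Hypothetical sketch of a "synthetic patient" turn pipeline: LLM text ->
# generated speech -> animated avatar video. All names are illustrative;
# the generative model calls are replaced with placeholder stubs.

from dataclasses import dataclass, field

@dataclass
class PatientPersona:
    name: str
    background: str          # e.g. beliefs, personality, ethnic background
    clinical_context: str    # e.g. "goals-of-care discussion"

@dataclass
class SyntheticPatient:
    persona: PatientPersona
    history: list = field(default_factory=list)

    def reply(self, clinician_utterance: str) -> dict:
        """Return one conversational turn: text, audio, and video placeholders."""
        self.history.append(("clinician", clinician_utterance))
        # Stage 1: language model generates an in-character textual reply (stubbed).
        text = self._generate_text(clinician_utterance)
        # Stage 2: generative audio turns the text into speech (stubbed).
        audio = f"<speech: {len(text)} chars>"
        # Stage 3: a vision model lip-syncs an avatar to the speech (stubbed).
        video = f"<avatar frames for {self.persona.name}>"
        self.history.append(("patient", text))
        return {"text": text, "audio": audio, "video": video}

    def _generate_text(self, utterance: str) -> str:
        # Placeholder for an LLM call conditioned on persona and history.
        return f"As {self.persona.name}, responding to: {utterance!r}"

persona = PatientPersona("Mr. Rivera", "retired teacher, values family input",
                         "end-of-life care planning")
patient = SyntheticPatient(persona)
turn = patient.reply("Can we talk about what matters most to you right now?")
```

In a real system, each stub would be an asynchronous call to a separate model service, with the audio and video stages streamed to keep the video chat interactive.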
Related papers
- Leveraging Large Language Models for Patient Engagement: The Power of Conversational AI in Digital Health [1.8772687384996551]
Large language models (LLMs) have opened up new opportunities for transforming patient engagement in healthcare through conversational AI.
We showcase the power of LLMs in handling unstructured conversational data through four case studies.
arXiv Detail & Related papers (2024-06-19T16:02:04Z)
- Dr-LLaVA: Visual Instruction Tuning with Symbolic Clinical Grounding [53.629132242389716]
Vision-Language Models (VLM) can support clinicians by analyzing medical images and engaging in natural language interactions.
VLMs often exhibit "hallucinatory" behavior, generating textual outputs not grounded in contextual multimodal information.
We propose a new alignment algorithm that uses symbolic representations of clinical reasoning to ground VLMs in medical knowledge.
arXiv Detail & Related papers (2024-05-29T23:19:28Z)
- Leveraging Large Language Model as Simulated Patients for Clinical Education [18.67200160979337]
The high cost of training and hiring qualified standardized patients (SPs) limits students' access to this type of clinical training.
With the rapid development of Large Language Models (LLMs), their exceptional capabilities in conversational artificial intelligence and role-playing have been demonstrated.
We present an integrated model-agnostic framework called CureFun that harnesses the potential of LLMs in clinical medical education.
arXiv Detail & Related papers (2024-04-13T06:36:32Z)
- Socially Pertinent Robots in Gerontological Healthcare [78.35311825198136]
This paper partially answers this question through two waves of experiments with patients and companions in a day-care gerontological facility in Paris, using a full-sized humanoid robot endowed with social and conversational interaction capabilities.
Overall, the users are receptive to this technology, especially when the robot perception and action skills are robust to environmental clutter and flexible to handle a plethora of different interactions.
arXiv Detail & Related papers (2024-04-11T08:43:37Z)
- Chain-of-Interaction: Enhancing Large Language Models for Psychiatric Behavior Understanding by Dyadic Contexts [4.403408362362806]
We introduce the Chain-of-Interaction prompting method to contextualize large language models for psychiatric decision support by the dyadic interactions.
This approach enables large language models to leverage the coding scheme, patient state, and domain knowledge for patient behavioral coding.
arXiv Detail & Related papers (2024-03-20T17:47:49Z)
- Benchmarking Large Language Models on Communicative Medical Coaching: a Novel System and Dataset [26.504409173684653]
We introduce "ChatCoach", a human-AI cooperative framework designed to assist medical learners in practicing their communication skills during patient consultations.
ChatCoach differentiates itself from conventional dialogue systems by offering a simulated environment where medical learners can practice dialogues with a patient agent, while a coach agent provides immediate, structured feedback.
We have developed a dataset specifically for evaluating Large Language Models (LLMs) within the ChatCoach framework on communicative medical coaching tasks.
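The ChatCoach summary describes a dual-agent pattern: a patient agent that role-plays the consultation and a coach agent that critiques each learner turn. A minimal sketch of that pattern is below; the function names and the rule-based feedback are stand-ins for LLM calls, not the actual ChatCoach API.

```python
# Minimal sketch of a dual-agent practice loop: the learner converses with a
# patient agent while a coach agent gives immediate feedback. Both agents are
# rule-based stand-ins for LLM calls; all names are illustrative assumptions.

def patient_agent(learner_utterance: str) -> str:
    # Stand-in for an LLM role-playing the patient.
    return f"Patient response to: {learner_utterance}"

def coach_agent(learner_utterance: str) -> str:
    # Stand-in for an LLM producing structured communication feedback.
    if "?" in learner_utterance:
        return "Feedback: good use of an open question."
    return "Feedback: consider asking an open-ended question."

def practice_turn(learner_utterance: str) -> dict:
    """One training step: a patient reply plus immediate coach feedback."""
    return {
        "patient": patient_agent(learner_utterance),
        "coach": coach_agent(learner_utterance),
    }

turn = practice_turn("How are you feeling about the treatment plan?")
```

The key design point is that the coach runs alongside, not inside, the patient dialogue, so feedback can be shown per-turn without breaking the simulated consultation.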
arXiv Detail & Related papers (2024-02-08T10:32:06Z)
- Building Emotional Support Chatbots in the Era of LLMs [64.06811786616471]
We introduce an innovative methodology that synthesizes human insights with the computational prowess of Large Language Models (LLMs).
By utilizing the in-context learning potential of ChatGPT, we generate an ExTensible Emotional Support dialogue dataset, named ExTES.
Following this, we deploy advanced tuning techniques on the LLaMA model, examining the impact of diverse training strategies, ultimately yielding an LLM meticulously optimized for emotional support interactions.
arXiv Detail & Related papers (2023-08-17T10:49:18Z)
- MedNgage: A Dataset for Understanding Engagement in Patient-Nurse Conversations [4.847266237348932]
Patients who effectively manage their symptoms often demonstrate higher levels of engagement in conversations and interventions with healthcare practitioners.
It is crucial for AI systems to understand the engagement in natural conversations between patients and practitioners to better contribute toward patient care.
We present a novel dataset (MedNgage) which consists of patient-nurse conversations about cancer symptom management.
arXiv Detail & Related papers (2023-05-31T16:06:07Z)
- PLACES: Prompting Language Models for Social Conversation Synthesis [103.94325597273316]
We use a small set of expert-written conversations as in-context examples to synthesize a social conversation dataset using prompting.
We perform several thorough evaluations of our synthetic conversations compared to human-collected conversations.
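The PLACES summary describes few-shot prompting: a handful of expert-written conversations serve as in-context examples, and the model continues the pattern on a new topic. A sketch of the prompt assembly is below; the template format and example dialogues are assumptions for illustration, not the paper's exact prompt, and the LLM call itself is omitted.

```python
# Sketch of few-shot prompt construction for conversation synthesis: a small
# set of expert-written dialogues is prepended as in-context examples, then
# the model is prompted to continue on a target topic. Template is illustrative.

EXPERT_EXAMPLES = [
    ("weekend plans", "A: Any plans this weekend?\nB: Hiking, if the weather holds."),
    ("cooking", "A: I tried a new curry recipe.\nB: How did it turn out?"),
]

def build_prompt(topic: str) -> str:
    """Assemble a few-shot prompt from expert-written conversations."""
    parts = ["The following are short social conversations.\n"]
    for example_topic, dialogue in EXPERT_EXAMPLES:
        parts.append(f"Topic: {example_topic}\n{dialogue}\n")
    # The language model is expected to continue from this final topic line.
    parts.append(f"Topic: {topic}\n")
    return "\n".join(parts)

prompt = build_prompt("favorite books")
```

Because the examples fully determine the output format, the same builder can synthesize an entire dataset by iterating over a list of target topics.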
arXiv Detail & Related papers (2023-02-07T05:48:16Z)
- Enabling AI and Robotic Coaches for Physical Rehabilitation Therapy: Iterative Design and Evaluation with Therapists and Post-Stroke Survivors [66.07833535962762]
Artificial intelligence (AI) and robotic coaches promise the improved engagement of patients on rehabilitation exercises through social interaction.
Previous work explored the potential of automatically monitoring exercises for AI and robotic coaches, but deployment remains a challenge.
We present our efforts on eliciting detailed design specifications on how AI and robotic coaches could interact with and guide patients' exercises.
arXiv Detail & Related papers (2021-06-15T22:06:39Z)
- Retrieval Augmentation Reduces Hallucination in Conversation [49.35235945543833]
We explore the use of neural-retrieval-in-the-loop architectures for knowledge-grounded dialogue.
We show that our best models obtain state-of-the-art performance on two knowledge-grounded conversational tasks.
arXiv Detail & Related papers (2021-04-15T16:24:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.