A General-purpose AI Avatar in Healthcare
- URL: http://arxiv.org/abs/2401.12981v1
- Date: Wed, 10 Jan 2024 03:44:15 GMT
- Title: A General-purpose AI Avatar in Healthcare
- Authors: Nicholas Yan, Gil Alterovitz
- Abstract summary: This paper focuses on the role of chatbots in healthcare and explores the use of avatars to make AI interactions more appealing to patients.
A framework of a general-purpose AI avatar application is demonstrated by using a three-category prompt dictionary and prompt improvement mechanism.
A two-phase approach is suggested to fine-tune a general-purpose AI language model and create different AI avatars to discuss medical issues with users.
- Score: 1.5081825869395544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in machine learning and natural language processing have led to the rapid development of artificial intelligence (AI) as a valuable tool in the healthcare industry. Using large language models (LLMs) as conversational agents or chatbots has the potential to assist doctors in diagnosing patients, detecting early symptoms of diseases, and providing health advice to patients. This paper focuses on the role of chatbots in healthcare and explores the use of avatars to make AI interactions more appealing to patients. A framework of a general-purpose AI avatar application is demonstrated by using a three-category prompt dictionary and prompt improvement mechanism. A two-phase approach is suggested to fine-tune a general-purpose AI language model and create different AI avatars to discuss medical issues with users. Prompt engineering enhances the chatbot's conversational abilities and personality traits, fostering a more human-like interaction with patients. Ultimately, the injection of personality into the chatbot could potentially increase patient engagement. Future directions for research include investigating ways to improve chatbots' understanding of context and ensuring the accuracy of their outputs through fine-tuning with specialized medical data sets.
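The abstract names a three-category prompt dictionary, a prompt improvement mechanism, and a two-phase approach (fine-tune a shared base model, then derive distinct avatars), but does not specify an implementation. The minimal Python sketch below shows one way such a framework could be wired together; the particular category split (persona / medical context / safety), the class and function names, and the feedback-based improvement step are illustrative assumptions, not the authors' code.

```python
from dataclasses import dataclass, field


@dataclass
class PromptDictionary:
    # Three assumed prompt categories; the paper specifies three categories,
    # but this particular split (persona / medical context / safety) is a guess.
    persona: dict = field(default_factory=dict)          # personality traits per avatar
    medical_context: dict = field(default_factory=dict)  # topic-specific guidance
    safety: dict = field(default_factory=dict)           # disclaimers and escalation rules

    def build_system_prompt(self, avatar: str, topic: str) -> str:
        """Assemble a system prompt for one avatar persona and one medical topic."""
        parts = [
            self.persona.get(avatar, ""),
            self.medical_context.get(topic, ""),
            self.safety.get("default", ""),
        ]
        return "\n".join(p for p in parts if p)


def improve_prompt(prompt: str, feedback: str) -> str:
    """Toy stand-in for the paper's prompt improvement mechanism:
    fold user feedback back into the prompt for the next turn."""
    return f"{prompt}\nRevise your responses based on this feedback: {feedback}"


# Example: one shared base model (phase one) serves several avatars that differ
# only in the prompts assembled from the dictionary (phase two).
prompts = PromptDictionary(
    persona={"nurse_ava": "You are Ava, a warm and patient nurse avatar."},
    medical_context={"diabetes": "Explain glucose management in plain language."},
    safety={"default": "You are not a doctor; recommend seeing a clinician for any diagnosis."},
)
system_prompt = prompts.build_system_prompt("nurse_ava", "diabetes")
system_prompt = improve_prompt(system_prompt, "Use shorter sentences.")
print(system_prompt)
```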
Related papers
- Assessing Empathy in Large Language Models with Real-World Physician-Patient Interactions [9.327472312657392]
The integration of Large Language Models (LLMs) into the healthcare domain has the potential to significantly enhance patient care and support.
This study investigates the question: can ChatGPT respond with a greater degree of empathy than physicians typically offer?
We collect a de-identified dataset of patient messages and physician responses from Mayo Clinic and generate alternative replies using ChatGPT.
arXiv Detail & Related papers (2024-05-26T01:58:57Z)
- How Reliable AI Chatbots are for Disease Prediction from Patient Complaints? [0.0]
This study examines the reliability of AI chatbots, specifically GPT 4.0, Claude 3 Opus, and Gemini Ultra 1.0, in predicting diseases from patient complaints in the emergency department.
Results suggest that GPT 4.0 achieves high accuracy with increased few-shot data, while Gemini Ultra 1.0 performs well with fewer examples, and Claude 3 Opus maintains consistent performance.
arXiv Detail & Related papers (2024-05-21T22:00:13Z) - The impact of responding to patient messages with large language model
assistance [4.243020918808522]
Documentation burden is a major contributor to clinician burnout.
Many hospitals are actively integrating such LLM-based assistance into their electronic medical record systems.
We are the first to examine the utility of large language models in assisting clinicians to draft responses to patient questions.
arXiv Detail & Related papers (2023-10-26T18:03:46Z)
- Assistive Chatbots for healthcare: a succinct review [0.0]
The focus is on AI-enabled technology because of its potential to enhance the quality of human-machine interaction.
However, there is a lack of trust in this technology regarding patient safety and data protection.
Patients have also expressed dissatisfaction with the chatbots' natural language processing skills.
arXiv Detail & Related papers (2023-08-08T10:35:25Z)
- Towards Healthy AI: Large Language Models Need Therapists Too [41.86344997530743]
We define Healthy AI to be safe, trustworthy and ethical.
We present the SafeguardGPT framework, which uses psychotherapy to correct for harmful behaviors in AI chatbots.
arXiv Detail & Related papers (2023-04-02T00:39:12Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework in which several empathetic chatbots understand users' implied feelings and reply empathetically over multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and were finetuned by deep reinforcement learning.
To respond empathetically, we develop a simulating agent, a Conceptual Human Model, which aids CheerBots during training by accounting for anticipated changes in the user's emotional state in order to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z)
- Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examined our framework using three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.