ChatGPT-HealthPrompt. Harnessing the Power of XAI in Prompt-Based
Healthcare Decision Support using ChatGPT
- URL: http://arxiv.org/abs/2308.09731v1
- Date: Thu, 17 Aug 2023 20:50:46 GMT
- Title: ChatGPT-HealthPrompt. Harnessing the Power of XAI in Prompt-Based
Healthcare Decision Support using ChatGPT
- Authors: Fatemeh Nazary, Yashar Deldjoo, and Tommaso Di Noia
- Abstract summary: This study presents an innovative approach to applying large language models (LLMs) in clinical decision-making, focusing on OpenAI's ChatGPT.
Our approach introduces contextual prompts, strategically designed to include a task description, feature descriptions, and, crucially, integrated domain knowledge, for high-quality binary classification even in data-scarce scenarios.
- Score: 15.973406739758856
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study presents an innovative approach to applying large
language models (LLMs) in clinical decision-making, focusing on OpenAI's
ChatGPT. Our approach introduces contextual prompts, strategically
designed to include a task description, feature descriptions, and,
crucially, integrated domain knowledge, for high-quality binary
classification tasks even in data-scarce scenarios. The novelty of our work lies in the utilization
of domain knowledge, obtained from high-performing interpretable ML models, and
its seamless incorporation into prompt design. By viewing these ML models as
medical experts, we extract key insights on feature importance to aid in
decision-making processes. This interplay of domain knowledge and AI holds
significant promise in creating a more insightful diagnostic tool.
Additionally, our research explores the dynamics of zero-shot and few-shot
prompt learning based on LLMs. By comparing the performance of OpenAI's ChatGPT
with traditional supervised ML models in different data conditions, we aim to
provide insights into the effectiveness of prompt engineering strategies under
varied data availability. In essence, this paper bridges the gap between AI and
healthcare, proposing a novel methodology for applying LLMs in clinical
decision support systems. It highlights the transformative potential of
effective prompt design, domain knowledge integration, and flexible learning
approaches in enhancing automated decision-making.
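The abstract's central idea, folding feature importances extracted from an interpretable ML model into a classification prompt, can be sketched as below. This is a minimal illustration, not the paper's implementation; the feature names, importance values, and prompt wording are invented for the example.

```python
# Sketch: fold domain knowledge (feature importances from an
# interpretable ML model) into a contextual prompt for binary
# classification. All names and values below are illustrative only.

def build_health_prompt(task, features, importances, patient, examples=()):
    """Compose a contextual prompt: task description, feature
    descriptions ranked by importance, optional few-shot
    demonstrations, and the query record."""
    ranked = sorted(importances, key=importances.get, reverse=True)
    lines = [f"Task: {task}", "Feature descriptions (most important first):"]
    for name in ranked:
        lines.append(f"- {name}: {features[name]} (importance {importances[name]:.2f})")
    for record, label in examples:  # few-shot demonstrations, if any
        lines.append(f"Example: {record} -> {label}")
    lines.append(f"Patient: {patient}")
    lines.append("Answer with 'yes' or 'no' only.")
    return "\n".join(lines)

prompt = build_health_prompt(
    task="Predict whether the patient has heart disease.",
    features={"chol": "serum cholesterol (mg/dl)", "age": "age in years"},
    importances={"chol": 0.62, "age": 0.38},
    patient={"chol": 286, "age": 67},
)
print(prompt)
```

Passing demonstrations via `examples` turns the same template into a few-shot prompt, mirroring the zero-shot versus few-shot comparison the abstract describes.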
Related papers
- Knowledge Tagging System on Math Questions via LLMs with Flexible Demonstration Retriever [48.5585921817745]
Large Language Models (LLMs) are used to automate the knowledge tagging task.
We show strong zero- and few-shot performance on math-question knowledge tagging tasks.
With a reinforcement-learning-based demonstration retriever, we further exploit the potential of different-sized LLMs.
arXiv Detail & Related papers (2024-06-19T23:30:01Z)
- XCoOp: Explainable Prompt Learning for Computer-Aided Diagnosis via Concept-guided Context Optimization [4.634780391920529]
We propose a novel explainable prompt learning framework that leverages medical knowledge by aligning the semantics of images, learnable prompts, and clinical concept-driven prompts.
Our framework addresses the lack of valuable concept annotations by eliciting knowledge from large language models.
Our method simultaneously achieves superior diagnostic performance, flexibility, and interpretability, shedding light on the effectiveness of foundation models in facilitating XAI.
arXiv Detail & Related papers (2024-03-14T14:02:01Z)
- Enhancing Court View Generation with Knowledge Injection and Guidance [43.32071790286732]
Court View Generation (CVG) aims to generate court views based on the plaintiff claims and the fact descriptions.
PLMs have showcased their prowess in natural language generation, but their application to the complex, knowledge-intensive domain of CVG often reveals inherent limitations.
We present a novel approach, named Knowledge Injection and Guidance (KIG), designed to bolster CVG using PLMs.
To efficiently incorporate domain knowledge during the training stage, we introduce a knowledge-injected prompt encoder for prompt tuning, thereby reducing computational overhead.
arXiv Detail & Related papers (2024-03-07T09:51:11Z)
- LLM Inference Unveiled: Survey and Roofline Model Insights [62.92811060490876]
Large Language Model (LLM) inference is rapidly evolving, presenting a unique blend of opportunities and challenges.
Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also by introducing a framework based on roofline model.
This framework identifies the bottlenecks when deploying LLMs on hardware devices and provides a clear understanding of practical problems.
arXiv Detail & Related papers (2024-02-26T07:33:05Z)
- Explainable artificial intelligence for Healthcare applications using
Random Forest Classifier with LIME and SHAP [0.0]
There is a pressing need to understand the computational details hidden in black box AI techniques.
The term explainable AI (xAI) was coined in response to these challenges.
This book provides an in-depth analysis of several xAI frameworks and methods.
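The workflow in the entry above, training a Random Forest and then reading feature attributions off it, can be sketched with scikit-learn's built-in impurity importances standing in for LIME/SHAP attributions. The synthetic "clinical" data below is invented for illustration, and scikit-learn/NumPy are assumed to be available.

```python
# Sketch: Random Forest classifier plus global feature attribution.
# feature_importances_ is used as a simple stand-in for LIME/SHAP;
# the synthetic data below is illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                    # 3 toy clinical features
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)    # feature 0 dominates the label

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
importances = clf.feature_importances_           # one attribution per feature
ranking = np.argsort(importances)[::-1]          # most important first
```

Per-prediction (local) explanations, which is where LIME and SHAP go beyond these global importances, would be computed per patient from the fitted `clf`.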
arXiv Detail & Related papers (2023-11-09T11:43:10Z)
- Self-Verification Improves Few-Shot Clinical Information Extraction [73.6905567014859]
Large language models (LLMs) have shown the potential to accelerate clinical curation via few-shot in-context learning.
They still struggle with accuracy and interpretability, especially in mission-critical domains such as health.
Here, we explore a general mitigation framework using self-verification, which leverages the LLM to provide provenance for its own extraction and check its own outputs.
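The self-verification loop described above, extract first, then ask the same model to check its own output against the source text, can be sketched as follows. `toy_llm` is a deterministic stand-in for a real LLM call, and the prompt formats are invented for the example.

```python
# Sketch: self-verification for clinical extraction. The same "model"
# that extracts a fact is asked a second time to verify that the
# claimed span actually appears in the note (provenance check).
# toy_llm is a hypothetical stand-in for a real LLM API call.

def toy_llm(prompt):
    if prompt.startswith("Extract dosage:"):
        note = prompt.split(":", 1)[1]
        # "Extract" the first dosage-like token, e.g. "81mg".
        return next((w for w in note.split() if w.endswith("mg")), "none")
    if prompt.startswith("Verify:"):
        claim, note = prompt[len("Verify:"):].split("|")
        # "Verify" by checking the claim is grounded in the note.
        return "yes" if claim.strip() in note else "no"
    return "none"

def extract_with_self_verification(note):
    answer = toy_llm(f"Extract dosage:{note}")
    verdict = toy_llm(f"Verify:{answer}|{note}")  # second-pass self-check
    return answer if verdict == "yes" else None

print(extract_with_self_verification("Give aspirin 81mg daily"))  # 81mg
```

Rejecting answers that fail the second pass is what trades recall for the accuracy and provenance the entry highlights.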
arXiv Detail & Related papers (2023-05-30T22:05:11Z)
- Are Large Language Models Ready for Healthcare? A Comparative Study on
Clinical Language Understanding [12.128991867050487]
Large language models (LLMs) have made significant progress in various domains, including healthcare.
In this study, we evaluate state-of-the-art LLMs within the realm of clinical language understanding tasks.
arXiv Detail & Related papers (2023-04-09T16:31:47Z)
- An Interactive Interpretability System for Breast Cancer Screening with
Deep Learning [11.28741778902131]
We propose an interactive system to take advantage of state-of-the-art interpretability techniques to assist radiologists with breast cancer screening.
Our system integrates a deep learning model into the radiologists' workflow and provides novel interactions to promote understanding of the model's decision-making process.
arXiv Detail & Related papers (2022-09-30T02:19:49Z)
- VBridge: Connecting the Dots Between Features, Explanations, and Data
for Healthcare Models [85.4333256782337]
VBridge is a visual analytics tool that seamlessly incorporates machine learning explanations into clinicians' decision-making workflow.
We identified three key challenges, including clinicians' unfamiliarity with ML features, lack of contextual information, and the need for cohort-level evidence.
We demonstrated the effectiveness of VBridge through two case studies and expert interviews with four clinicians.
arXiv Detail & Related papers (2021-08-04T17:34:13Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
- UmlsBERT: Clinical Domain Knowledge Augmentation of Contextual
Embeddings Using the Unified Medical Language System Metathesaurus [73.86656026386038]
We introduce UmlsBERT, a contextual embedding model that integrates domain knowledge from the UMLS Metathesaurus during pre-training.
This knowledge augmentation lets UmlsBERT encode clinical domain knowledge into word embeddings and outperform existing domain-specific models.
arXiv Detail & Related papers (2020-10-20T15:56:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.