Directive Explanations for Monitoring the Risk of Diabetes Onset:
Introducing Directive Data-Centric Explanations and Combinations to Support
What-If Explorations
- URL: http://arxiv.org/abs/2302.10671v1
- Date: Tue, 21 Feb 2023 13:40:16 GMT
- Authors: Aditya Bhattacharya, Jeroen Ooge, Gregor Stiglic, Katrien Verbert
- Abstract summary: This paper presents an explanation dashboard that predicts the risk of diabetes onset.
It explains those predictions with data-centric, feature-importance, and example-based explanations.
We conducted a qualitative study with 11 healthcare experts and a mixed-methods study with 45 healthcare experts and 51 diabetic patients.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable artificial intelligence is increasingly used in machine learning
(ML) based decision-making systems in healthcare. However, little research has
compared the utility of different explanation methods in guiding healthcare
experts for patient care. Moreover, it is unclear how useful, understandable,
actionable and trustworthy these methods are for healthcare experts, as they
often require technical ML knowledge. This paper presents an explanation
dashboard that predicts the risk of diabetes onset and explains those
predictions with data-centric, feature-importance, and example-based
explanations. We designed an interactive dashboard to assist healthcare
experts, such as nurses and physicians, in monitoring the risk of diabetes
onset and recommending measures to minimize risk. We conducted a qualitative
study with 11 healthcare experts and a mixed-methods study with 45 healthcare
experts and 51 diabetic patients to compare the different explanation methods
in our dashboard in terms of understandability, usefulness, actionability, and
trust. Results indicate that participants preferred our representation of
data-centric explanations, which pairs local explanations with a global
overview, over the other methods. Therefore, this paper highlights the
importance of visually directive data-centric explanation methods for helping
healthcare experts gain actionable insights from patient health records. Furthermore,
we share our design implications for tailoring the visual representation of
different explanation methods for healthcare experts.
Related papers
- Advice for Diabetes Self-Management by ChatGPT Models: Challenges and Recommendations [4.321186293298159]
We evaluate the responses of ChatGPT versions 3.5 and 4 to diabetes patient queries.
Our findings reveal discrepancies in accuracy and embedded biases.
We propose a commonsense evaluation layer for prompt evaluation and incorporating disease-specific external memory.
arXiv Detail & Related papers (2025-01-14T08:32:16Z)
- Natural Language-Assisted Multi-modal Medication Recommendation [97.07805345563348]
We introduce the Natural Language-Assisted Multi-modal Medication Recommendation (NLA-MMR), a multi-modal alignment framework designed to jointly learn knowledge from the patient view and the medication view.
In this vein, we employ pretrained language models (PLMs) to extract in-domain knowledge regarding patients and medications.
arXiv Detail & Related papers (2025-01-13T09:51:50Z)
- Patient-Centric Knowledge Graphs: A Survey of Current Methods, Challenges, and Applications [2.913761513290171]
Patient-Centric Knowledge Graphs (PCKGs) represent an important shift in healthcare that focuses on individualized patient care.
PCKGs integrate various types of health data to provide healthcare professionals with a comprehensive understanding of a patient's health.
This literature review explores the methodologies, challenges, and opportunities associated with PCKGs.
arXiv Detail & Related papers (2024-02-20T00:07:55Z)
- Designing Interpretable ML System to Enhance Trust in Healthcare: A Systematic Review to Proposed Responsible Clinician-AI-Collaboration Framework [13.215318138576713]
The paper reviews interpretable AI processes, methods, applications, and the challenges of implementation in healthcare.
It aims to foster a comprehensive understanding of the crucial role of a robust interpretability approach in healthcare.
arXiv Detail & Related papers (2023-11-18T12:29:18Z)
- SPeC: A Soft Prompt-Based Calibration on Performance Variability of Large Language Model in Clinical Notes Summarization [50.01382938451978]
We introduce a model-agnostic pipeline that employs soft prompts to diminish variance while preserving the advantages of prompt-based summarization.
Experimental findings indicate that our method not only bolsters performance but also effectively curbs variance for various language models.
arXiv Detail & Related papers (2023-03-23T04:47:46Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Predicting Patient Readmission Risk from Medical Text via Knowledge Graph Enhanced Multiview Graph Convolution [67.72545656557858]
We propose a new method that uses medical text of Electronic Health Records for prediction.
We represent discharge summaries of patients with multiview graphs enhanced by an external knowledge graph.
Experimental results demonstrate the effectiveness of our method, yielding state-of-the-art performance.
arXiv Detail & Related papers (2021-12-19T01:45:57Z)
- Explainable Deep Learning in Healthcare: A Methodological Survey from an Attribution View [36.025217954247125]
We introduce interpretability methods in depth and comprehensively as a methodological reference for future researchers and clinical practitioners.
We discuss how these methods have been adapted and applied to healthcare problems and how they can help physicians better understand these data-driven technologies.
arXiv Detail & Related papers (2021-12-05T17:12:53Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
- MET: Multimodal Perception of Engagement for Telehealth [52.54282887530756]
We present MET, a learning-based algorithm for perceiving a human's level of engagement from videos.
We release a new dataset, MEDICA, for mental health patient engagement detection.
arXiv Detail & Related papers (2020-11-17T15:18:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.