Evaluation of Popular XAI Applied to Clinical Prediction Models: Can They be Trusted?
- URL: http://arxiv.org/abs/2306.11985v1
- Date: Wed, 21 Jun 2023 02:29:30 GMT
- Title: Evaluation of Popular XAI Applied to Clinical Prediction Models: Can They be Trusted?
- Authors: Aida Brankovic, David Cook, Jessica Rahman, Wenjie Huang, Sankalp Khanna
- Abstract summary: The absence of transparency and explainability hinders the clinical adoption of machine learning (ML) algorithms.
This study evaluates two popular XAI methods used for explaining predictive models in the healthcare context.
- Score: 2.0089256058364358
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The absence of transparency and explainability hinders the clinical adoption of machine learning (ML) algorithms. Although various methods of explainable artificial intelligence (XAI) have been suggested, there is a lack of literature that delves into their practicality and assesses them against criteria that could foster trust in clinical environments. To address this gap, this study evaluates two popular XAI methods used for explaining predictive models in the healthcare context in terms of whether they (i) generate domain-appropriate representations, i.e. representations coherent with respect to the application task, (ii) impact the clinical workflow, and (iii) are consistent. To that end, explanations generated at the cohort and patient levels were analysed. The paper reports the first benchmarking of XAI methods applied to risk prediction models, obtained by evaluating the concordance between generated explanations and the trigger of a future clinical deterioration episode recorded by the data collection system. The analysis used two Electronic Medical Record (EMR) datasets sourced from major Australian hospitals. The findings underscore both the limitations of state-of-the-art XAI methods in the clinical context and their potential benefits. We discuss these limitations and contribute to the theoretical development of trustworthy XAI solutions in which clinical decision support guides the choice of intervention by suggesting the patterns or drivers of future clinical deterioration.
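The abstract does not name the two XAI methods under evaluation; SHAP and LIME are the methods most commonly applied to tabular EMR risk models. The minimal sketch below is a hypothetical illustration of the criteria above: a generic permutation-style attribution stands in for either method, cohort- and patient-level explanations are drawn from one risk model, and a top-k feature-overlap score serves as a simple consistency probe. All data, model, and helper names are illustrative, not from the paper.

```python
# Hypothetical sketch: cohort- vs patient-level attributions for a risk
# model, plus a top-k overlap score as a consistency probe. A generic
# permutation-style attribution stands in for the paper's XAI methods.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def local_attribution(model, x, background, n_repeats=20):
    """Patient-level attribution: drop in predicted risk when each
    feature is replaced by values drawn from a background cohort."""
    base = model.predict_proba(x[None, :])[0, 1]
    attr = np.zeros(len(x))
    for j in range(len(x)):
        perturbed = np.tile(x, (n_repeats, 1))
        perturbed[:, j] = rng.choice(background[:, j], size=n_repeats)
        attr[j] = base - model.predict_proba(perturbed)[:, 1].mean()
    return attr

# Patient-level explanation for one record; cohort-level explanation as
# the mean absolute attribution over a sample of patients.
patient_attr = local_attribution(model, X[0], X)
cohort_attr = np.mean([np.abs(local_attribution(model, x, X)) for x in X[:50]], axis=0)

def topk_overlap(a, b, k=3):
    """Consistency probe: fraction of shared top-k features between two runs."""
    top_a = set(np.argsort(-np.abs(a))[:k])
    top_b = set(np.argsort(-np.abs(b))[:k])
    return len(top_a & top_b) / k

# Re-running the explainer on the same patient should yield similar top features.
print(topk_overlap(local_attribution(model, X[0], X), patient_attr))
```

In this framing, a cohort-level explanation is simply an aggregate of patient-level attributions; the paper's concordance analysis additionally compares the top-ranked features against the recorded trigger of the deterioration episode.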
Related papers
- Reasoning-Enhanced Healthcare Predictions with Knowledge Graph Community Retrieval [61.70489848327436]
KARE is a novel framework that integrates knowledge graph (KG) community-level retrieval with large language model (LLM) reasoning.
Extensive experiments demonstrate that KARE outperforms leading models by up to 10.8-15.0% on MIMIC-III and 12.6-12.7% on MIMIC-IV for mortality and readmission predictions.
arXiv Detail & Related papers (2024-10-06T18:46:28Z)
- XAI for In-hospital Mortality Prediction via Multimodal ICU Data [57.73357047856416]
We propose an efficient, explainable AI solution for predicting in-hospital mortality via multimodal ICU data.
We employ multimodal learning in our framework, which can receive heterogeneous inputs from clinical data and make decisions.
Our framework can be easily transferred to other clinical tasks, which facilitates the discovery of crucial factors in healthcare research.
arXiv Detail & Related papers (2023-12-29T14:28:04Z)
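As a rough illustration of how a framework can "receive heterogeneous inputs" as in the entry above, the hypothetical sketch below encodes each ICU modality as a feature block and concatenates them before a single classifier; the modalities, dimensions, and late-fusion strategy are assumptions for illustration, not the paper's architecture.

```python
# Hypothetical late-fusion sketch for heterogeneous ICU inputs: encode each
# modality as a feature block, concatenate, and classify. Modalities,
# dimensions, and the fusion strategy are assumptions, not the paper's design.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 300
vitals = rng.normal(size=(n, 12))   # e.g. summarized vital-sign time series
labs = rng.normal(size=(n, 20))     # e.g. lab results
notes = rng.normal(size=(n, 50))    # e.g. text embeddings of clinical notes
y = (vitals[:, 0] + labs[:, 0] + notes[:, 0] > 0).astype(int)  # toy label

# Simplest possible fusion: concatenate per-modality features, fit one head.
fused = np.concatenate([vitals, labs, notes], axis=1)
clf = LogisticRegression(max_iter=1000).fit(fused, y)
print("train accuracy:", round(clf.score(fused, y), 3))
```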
- Elucidating Discrepancy in Explanations of Predictive Models Developed using EMR [2.1561701531034414]
Lack of transparency and explainability hinders the clinical adoption of machine learning (ML) algorithms.
This study applies current state-of-the-art explainability methods to clinical decision support algorithms developed for Electronic Medical Records (EMR) data.
arXiv Detail & Related papers (2023-11-28T10:13:31Z)
- Deciphering knee osteoarthritis diagnostic features with explainable artificial intelligence: A systematic review [4.918419052486409]
Existing artificial intelligence models for diagnosing knee osteoarthritis (OA) have faced criticism for their lack of transparency and interpretability.
Recently, explainable artificial intelligence (XAI) has emerged as a specialized technique that can provide confidence in the model's prediction.
This paper presents the first survey of XAI techniques used for knee OA diagnosis.
arXiv Detail & Related papers (2023-08-18T08:23:47Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Context-dependent Explainability and Contestability for Trustworthy Medical Artificial Intelligence: Misclassification Identification of Morbidity Recognition Models in Preterm Infants [0.0]
Explainable AI (XAI) aims to address this requirement by clarifying AI reasoning to support the end users.
We built our methodology on three main pillars: decomposing the feature set by leveraging clinical context latent space, assessing the clinical association of global explanations, and Latent Space Similarity (LSS) based local explanations.
arXiv Detail & Related papers (2022-12-17T07:59:09Z)
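A minimal sketch of the Latent Space Similarity (LSS) idea from the entry above, assuming a generic embedding (PCA here) and cosine similarity, neither of which is taken from the paper: a case is explained locally by retrieving its nearest neighbours in latent space and checking whether their labels agree with the prediction.

```python
# Hypothetical sketch of Latent Space Similarity (LSS)-style local
# explanation: embed patients, then explain a prediction via its nearest
# latent neighbours. PCA and cosine similarity are stand-ins; the paper
# derives its latent space from clinical context.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))            # toy patient features
y = (X[:, 0] > 0).astype(int)             # toy morbidity labels

Z = PCA(n_components=5).fit_transform(X)  # latent representation

def lss_neighbours(Z, idx, k=5):
    """Indices of the k most similar patients in latent space."""
    sim = cosine_similarity(Z[idx][None, :], Z)[0]
    sim[idx] = -np.inf                    # exclude the query itself
    return np.argsort(-sim)[:k]

# If a case is predicted positive while its latent neighbours are mostly
# negative, the disagreement flags a candidate misclassification.
nbrs = lss_neighbours(Z, idx=0)
print("query label:", y[0], "neighbour labels:", y[nbrs])
```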
- This Patient Looks Like That Patient: Prototypical Networks for Interpretable Diagnosis Prediction from Clinical Text [56.32427751440426]
In clinical practice such models must not only be accurate, but also provide doctors with interpretable and helpful results.
We introduce ProtoPatient, a novel method based on prototypical networks and label-wise attention.
We evaluate the model on two publicly available clinical datasets and show that it outperforms existing baselines.
arXiv Detail & Related papers (2022-10-16T10:12:07Z)
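A toy numpy sketch of the two ingredients named in the ProtoPatient entry above, prototypical networks and label-wise attention; shapes, pooling, and scoring are illustrative assumptions rather than the published architecture.

```python
# Toy sketch of prototypical classification with label-wise attention:
# each diagnosis label pools the note with its own attention weights and
# is scored by distance to its prototype. Shapes and pooling are
# illustrative, not the published ProtoPatient architecture.
import numpy as np

rng = np.random.default_rng(2)
tokens = rng.normal(size=(120, 64))     # token embeddings for one note
prototypes = rng.normal(size=(5, 64))   # one prototype per diagnosis label
attn_query = rng.normal(size=(5, 64))   # label-wise attention queries

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Label-wise attention: different labels can attend to different tokens.
attn = softmax(attn_query @ tokens.T, axis=1)  # (labels, tokens)
doc_per_label = attn @ tokens                  # (labels, dim)

# Prototypical scoring: closer to a label's prototype means a higher score,
# and the nearest training prototypes double as "this patient looks like
# that patient" explanations.
dists = np.linalg.norm(doc_per_label - prototypes, axis=1)
print("diagnosis probabilities:", softmax(-dists).round(3))
```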
- Leveraging Clinical Context for User-Centered Explainability: A Diabetes Use Case [4.520155732176645]
We implement a proof-of-concept (POC) in a type-2 diabetes (T2DM) use case where we assess the risk of chronic kidney disease (CKD).
Within the POC, we include risk prediction models for CKD, post-hoc explainers of the predictions, and other natural-language modules.
Our POC approach covers multiple knowledge sources and clinical scenarios, blends knowledge to explain data and predictions to primary care providers (PCPs), and received an enthusiastic response from our medical expert.
arXiv Detail & Related papers (2021-07-06T02:44:40Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
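A hedged sketch of the variational ingredient suggested by the VKD entry above: a KL regularizer between an EHR-informed posterior and an image-only prior over a shared latent code. The diagonal-Gaussian parameterization is an assumed, common modeling choice, not a detail taken from the paper.

```python
# Hedged sketch of the variational piece of knowledge distillation: a KL
# term pulling an image-only prior toward an EHR-informed posterior over
# a shared latent code. Diagonal Gaussians are an assumed parameterization.
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# Toy latent parameters: q conditions on (X-ray, EHR); p sees the X-ray alone.
mu_q, logvar_q = np.array([0.5, -0.2]), np.array([0.0, 0.1])
mu_p, logvar_p = np.array([0.0, 0.0]), np.array([0.0, 0.0])
print("KL regularizer:", round(kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p), 4))
```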
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)