Context-dependent Explainability and Contestability for Trustworthy
Medical Artificial Intelligence: Misclassification Identification of
Morbidity Recognition Models in Preterm Infants
- URL: http://arxiv.org/abs/2212.08821v1
- Date: Sat, 17 Dec 2022 07:59:09 GMT
- Title: Context-dependent Explainability and Contestability for Trustworthy
Medical Artificial Intelligence: Misclassification Identification of
Morbidity Recognition Models in Preterm Infants
- Authors: Isil Guzey, Ozlem Ucar, Nukhet Aladag Ciftdemir, Betul Acunas
- Abstract summary: Explainable AI (XAI) aims to address this requirement by clarifying AI reasoning to support the end users.
We built our methodology on three main pillars: decomposing the feature set by leveraging clinical context latent space, assessing the clinical association of global explanations, and Latent Space Similarity (LSS) based local explanations.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Although machine learning (ML) models achieve high performance in
medicine, they are not free of errors. Empowering clinicians to identify
incorrect model recommendations is crucial for engendering trust in medical AI.
Explainable AI (XAI) aims to address this requirement by clarifying AI
reasoning to support the end users. Several recent studies on biomedical imaging
have achieved promising results. Nevertheless, solutions for models trained on
tabular data do not yet meet clinicians' requirements.
This paper proposes a methodology to support clinicians in identifying failures
of ML models trained with tabular data. We built our methodology on three main
pillars: decomposing the feature set by leveraging clinical context latent
space, assessing the clinical association of global explanations, and Latent
Space Similarity (LSS) based local explanations. We demonstrated our
methodology on ML-based recognition of preterm infant morbidities caused by
infection. The risk of mortality, lifelong disability, and antibiotic
resistance due to model failures is an open research question in this domain.
With our approach, we identified misclassification cases of two models.
By contextualizing local explanations, our solution provides clinicians with
actionable insights that support their autonomy in making informed final
decisions.
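The abstract describes the methodology only at a high level. As a rough illustration of two of its pillars, the sketch below groups tabular features into hypothetical clinical contexts, fits a small latent space per context, and flags cases whose model prediction disagrees with the labels of their most similar training cases, in the spirit of Latent Space Similarity (LSS) based local explanations. The feature groups, function names, and the choice of PCA and cosine similarity are assumptions for illustration, not the authors' implementation.
```python
# Minimal sketch (not the authors' code): group tabular features by clinical
# context, embed each context group, and flag a test case whose prediction
# disagrees with the labels of its nearest training cases in latent space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical clinical-context grouping of tabular feature columns.
CONTEXT_GROUPS = {
    "hematology": [0, 1, 2],   # e.g., WBC, CRP, platelet count
    "vital_signs": [3, 4, 5],  # e.g., heart rate, temperature, SpO2
    "perinatal": [6, 7],       # e.g., gestational age, birth weight
}

def context_latent_spaces(X_train, n_components=2):
    """Fit one low-dimensional latent space per clinical context group."""
    return {
        name: PCA(n_components=min(n_components, len(cols))).fit(X_train[:, cols])
        for name, cols in CONTEXT_GROUPS.items()
    }

def lss_flag(x, X_train, y_train, y_pred, spaces, k=5):
    """LSS-style check: for each context, compare the model's prediction for x
    with the labels of its k most similar training cases; a low agreement rate
    in a context suggests a possible misclassification."""
    agreement = {}
    for name, cols in CONTEXT_GROUPS.items():
        z_train = spaces[name].transform(X_train[:, cols])
        z_x = spaces[name].transform(x[cols].reshape(1, -1))
        sim = cosine_similarity(z_x, z_train).ravel()
        neighbors = np.argsort(sim)[-k:]
        agreement[name] = float(np.mean(y_train[neighbors] == y_pred))
    return agreement  # low values per context are surfaced to the clinician

# Usage on synthetic data; a real pipeline would use the trained morbidity model.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 8)), rng.integers(0, 2, 200)
spaces = context_latent_spaces(X_train)
x_new, model_prediction = rng.normal(size=8), 1
print(lss_flag(x_new, X_train, y_train, model_prediction, spaces))
```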
Related papers
- Methodological Explainability Evaluation of an Interpretable Deep Learning Model for Post-Hepatectomy Liver Failure Prediction Incorporating Counterfactual Explanations and Layerwise Relevance Propagation: A Prospective In Silico Trial [13.171582596404313]
We developed a variational autoencoder-multilayer perceptron (VAE-MLP) model for preoperative PHLF prediction.
This model integrated counterfactuals and layerwise relevance propagation (LRP) to provide insights into its decision-making mechanism.
Results from the three-track in silico clinical trial showed that clinicians' prediction accuracy and confidence increased when AI explanations were provided.
arXiv Detail & Related papers (2024-08-07T13:47:32Z)
- Decoding Decision Reasoning: A Counterfactual-Powered Model for Knowledge Discovery [6.1521675665532545]
In medical imaging, discerning the rationale behind an AI model's predictions is crucial for evaluating its reliability.
We propose an explainable model that is equipped with both decision reasoning and feature identification capabilities.
By implementing our method, we can efficiently identify and visualise class-specific features leveraged by the data-driven model.
arXiv Detail & Related papers (2024-05-23T19:00:38Z)
- Explainable AI in Diagnosing and Anticipating Leukemia Using Transfer Learning Method [0.0]
This research paper focuses on Acute Lymphoblastic Leukemia (ALL), a form of blood cancer prevalent in children and teenagers.
It proposes an automated detection approach using computer-aided diagnostic (CAD) models, leveraging deep learning techniques.
The proposed method achieved 98.38% accuracy, outperforming the other tested models.
arXiv Detail & Related papers (2023-12-01T10:37:02Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
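(An illustrative sketch of this concept-bottleneck pipeline appears after this list.)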
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Evaluation of Popular XAI Applied to Clinical Prediction Models: Can They be Trusted? [2.0089256058364358]
The absence of transparency and explainability hinders the clinical adoption of machine learning (ML) algorithms.
This study evaluates two popular XAI methods used for explaining predictive models in the healthcare context.
arXiv Detail & Related papers (2023-06-21T02:29:30Z)
- Assisting clinical practice with fuzzy probabilistic decision trees [2.0999441362198907]
We propose FPT, a novel method that combines probabilistic trees and fuzzy logic to assist clinical practice.
We show that FPT and its predictions can assist clinical practice in an intuitive manner, with the use of a user-friendly interface specifically designed for this purpose.
arXiv Detail & Related papers (2023-04-16T14:05:16Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
arXiv Detail & Related papers (2022-07-21T09:35:38Z)
- VBridge: Connecting the Dots Between Features, Explanations, and Data for Healthcare Models [85.4333256782337]
VBridge is a visual analytics tool that seamlessly incorporates machine learning explanations into clinicians' decision-making workflow.
We identified three key challenges, including clinicians' unfamiliarity with ML features, lack of contextual information, and the need for cohort-level evidence.
We demonstrated the effectiveness of VBridge through two case studies and expert interviews with four clinicians.
arXiv Detail & Related papers (2021-08-04T17:34:13Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation which generates high-quality visualizations of classifier decisions even in smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
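For the "Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models" entry above, the following is a minimal sketch of the general concept-bottleneck idea: score an image against natural-language clinical concepts with a CLIP-style vision-language model, then classify from those concept scores. The concept list, checkpoint name, and linear head are illustrative assumptions, not the paper's implementation (which queries its concepts from GPT-4).
```python
# Illustrative concept-bottleneck sketch (not the paper's code): score an image
# against natural-language clinical concepts with a CLIP-style vision-language
# model, then classify from the concept scores with a simple linear head.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical concept list; the paper queries such concepts from GPT-4.
CONCEPTS = ["irregular lesion border", "diffuse opacity", "normal tissue texture"]

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

def concept_scores(image: Image.Image) -> torch.Tensor:
    """Image-to-concept similarity scores: the interpretable 'bottleneck'."""
    inputs = processor(text=CONCEPTS, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = clip(**inputs)
    return out.logits_per_image.squeeze(0)  # one score per concept

# A linear classifier on concept scores keeps the decision attributable to
# named clinical concepts rather than opaque latent features.
head = torch.nn.Linear(len(CONCEPTS), 2)

def predict(image: Image.Image) -> torch.Tensor:
    return head(concept_scores(image)).softmax(dim=-1)
```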