Explainable AI for clinical risk prediction: a survey of concepts,
methods, and modalities
- URL: http://arxiv.org/abs/2308.08407v1
- Date: Wed, 16 Aug 2023 14:51:51 GMT
- Title: Explainable AI for clinical risk prediction: a survey of concepts,
methods, and modalities
- Authors: Munib Mesinovic, Peter Watkinson, Tingting Zhu
- Abstract summary: Reviews progress in developing explainable models for
clinical risk prediction. Emphasizes the need for external validation and the
combination of diverse interpretability methods, and argues that an end-to-end
approach to explainability in clinical risk prediction is essential for success.
- Score: 2.9404725327650767
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in AI applications to healthcare have shown incredible
promise in surpassing human performance in diagnosis and disease prognosis.
With the increasing complexity of AI models, however, concerns have arisen
regarding their opacity, potential biases, and the need for
interpretability. To ensure trust
and reliability in AI systems, especially in clinical risk prediction models,
explainability becomes crucial. Explainability is usually defined as an AI
system's ability to provide a robust interpretation of its decision-making
logic, or of the decisions themselves, to human stakeholders. In clinical risk
prediction, other aspects of explainability like fairness, bias, trust, and
transparency also represent important concepts beyond just interpretability. In
this review, we address the relationship between these concepts as they are
often used together or interchangeably. This review also discusses recent
progress in developing explainable models for clinical risk prediction,
highlighting the importance of quantitative and clinical evaluation and
validation across multiple common modalities in clinical practice. It
emphasizes the need for external validation and the combination of diverse
interpretability methods to enhance trust and fairness. Adopting rigorous
testing, such as using synthetic datasets with known generative factors, can
further improve the reliability of explainability methods. Open access and
code-sharing resources are essential for transparency and reproducibility,
enabling the growth and trustworthiness of explainability research. While
challenges exist, an end-to-end approach to explainability in clinical risk
prediction, incorporating stakeholders from clinicians to developers, is
essential for success.
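The abstract's suggestion of testing explainability methods on synthetic datasets with known generative factors can be made concrete. The sketch below is an illustration of that general idea, not code from the paper; the data generator, the random-forest model, and the two attribution methods (impurity-based and permutation importance from scikit-learn) are all assumed choices. It plants a known set of informative features and checks whether each method recovers them and whether the methods agree with each other.

```python
# A minimal sketch: test whether attribution methods recover known
# generative factors in synthetic data. All modelling choices here
# (data generator, model, attribution methods) are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "clinical" data: 5 informative features out of 20.
# With shuffle=False, make_classification places them first.
X, y = make_classification(
    n_samples=2000, n_features=20, n_informative=5,
    n_redundant=0, shuffle=False, random_state=0,
)
true_factors = set(range(5))  # ground-truth generative factors

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Method 1: impurity-based importances (model-internal).
imp_a = model.feature_importances_

# Method 2: permutation importance on held-out data (model-agnostic).
imp_b = permutation_importance(model, X_te, y_te, n_repeats=10,
                               random_state=0).importances_mean

# Reliability check: do the top-5 features of each method match the
# known generative factors, and do the two methods agree?
top_a = set(np.argsort(imp_a)[-5:])
top_b = set(np.argsort(imp_b)[-5:])
print("method A recovers", len(top_a & true_factors), "of 5 true factors")
print("method B recovers", len(top_b & true_factors), "of 5 true factors")
print("methods agree on", len(top_a & top_b), "of their top-5 features")
```

If a method fails to recover the planted factors even in this controlled setting, its explanations on real clinical data deserve extra scrutiny; agreement between diverse methods is the kind of cross-check the review advocates.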
Related papers
- On the Fairness, Diversity and Reliability of Text-to-Image Generative Models [49.60774626839712]
Multimodal generative models have sparked critical discussions on their fairness, reliability, and potential for misuse.
We propose an evaluation framework designed to assess model reliability through their responses to perturbations in the embedding space.
Our method lays the groundwork for detecting unreliable, bias-injected models and retrieving bias provenance.
arXiv Detail & Related papers (2024-11-21T09:46:55Z)
- Which Client is Reliable?: A Reliable and Personalized Prompt-based Federated Learning for Medical Image Question Answering [51.26412822853409]
We present a novel personalized federated learning (pFL) method for medical visual question answering (VQA) models.
Our method introduces learnable prompts into a Transformer architecture to efficiently train it on diverse medical datasets without massive computational costs (a minimal sketch of this mechanism appears after this list).
arXiv Detail & Related papers (2024-10-23T00:31:17Z)
- Bayesian Kolmogorov Arnold Networks (Bayesian_KANs): A Probabilistic Approach to Enhance Accuracy and Interpretability [1.90365714903665]
This study presents a novel framework called Bayesian Kolmogorov Arnold Networks (BKANs).
BKANs combine the expressive capacity of Kolmogorov Arnold Networks with Bayesian inference.
Our method provides useful insights into prediction confidence and decision boundaries and outperforms traditional deep learning models in terms of prediction accuracy.
arXiv Detail & Related papers (2024-08-05T10:38:34Z)
- Explainable AI for Fair Sepsis Mortality Predictive Model [3.556697333718976]
We propose a method that learns a performance-optimized predictive model and employs the transfer learning process to produce a model with better fairness.
Our method not only aids in identifying and mitigating biases within the predictive model but also fosters trust among healthcare stakeholders.
arXiv Detail & Related papers (2024-04-19T18:56:46Z)
- Explainable AI for Malnutrition Risk Prediction from m-Health and Clinical Data [3.093890460224435]
This paper presents a novel AI framework for early and explainable malnutrition risk detection based on heterogeneous m-health data.
We performed an extensive model evaluation including both subject-independent and personalised predictions.
We also investigated several benchmark XAI methods to extract global model explanations.
arXiv Detail & Related papers (2023-05-31T08:07:35Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples (a simplified gradient-based sketch appears after this list).
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems (a minimal abstention sketch appears after this list).
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies [1.2762298148425795]
Lack of transparency is identified as one of the main barriers to implementation of AI systems in health care.
We review the recent literature to provide guidance to researchers and practitioners on the design of explainable AI systems.
We conclude that explainable modelling can contribute to trustworthy AI, but the benefits of explainability still need to be proven in practice.
arXiv Detail & Related papers (2020-07-31T09:08:27Z)
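For the prompt-based federated learning entry above, the core mechanism of learnable prompts can be sketched in PyTorch. This is a generic illustration, not the paper's pFL method: the PromptedEncoder name, the dimensions, and the frozen-backbone setup are all assumptions. The point is that only the small prompt tensor needs training (e.g. per client), which keeps computational cost low.

```python
# Sketch of learnable prompt tokens prepended to a frozen Transformer
# encoder. Names and sizes are illustrative, not from the paper.
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    def __init__(self, d_model=256, n_prompts=8, n_layers=4, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        # Client-specific learnable prompts; the backbone stays frozen.
        self.prompts = nn.Parameter(torch.randn(1, n_prompts, d_model) * 0.02)
        for p in self.backbone.parameters():
            p.requires_grad = False  # only prompts are trained per client

    def forward(self, x):  # x: (batch, seq_len, d_model)
        prompts = self.prompts.expand(x.size(0), -1, -1)
        return self.backbone(torch.cat([prompts, x], dim=1))

enc = PromptedEncoder()
out = enc(torch.randn(2, 10, 256))  # -> (2, 8 + 10, 256)
trainable = sum(p.numel() for p in enc.parameters() if p.requires_grad)
print(out.shape, "trainable params:", trainable)  # only the prompt tokens
```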
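The counterfactual-explanation entry frames generation as a trade-off between plausibility, change intensity, and adversarial power. A much-simplified single-objective stand-in (not the paper's multi-objective framework; the logistic model, its weights, and the trade-off weight lam are illustrative assumptions) is to search by gradient descent for the smallest input change that flips a classifier's prediction:

```python
# Sketch: gradient-based counterfactual search on a logistic model.
# The loss trades off flipping the prediction against staying close to
# x0 (a crude proxy for plausibility); lam is an assumed weight.
import torch

w = torch.tensor([1.5, -2.0, 0.5])  # illustrative model weights
b = torch.tensor(0.1)
predict = lambda x: torch.sigmoid(x @ w + b)

x0 = torch.tensor([0.2, 1.0, -0.3])  # instance predicted as class 0
target, lam = 1.0, 0.5
x_cf = x0.clone().requires_grad_(True)
opt = torch.optim.Adam([x_cf], lr=0.05)

for _ in range(300):
    opt.zero_grad()
    loss = (predict(x_cf) - target) ** 2 + lam * (x_cf - x0).pow(2).sum()
    loss.backward()
    opt.step()

print("p(class 1):", float(predict(x0)), "->", float(predict(x_cf)))
print("change:", (x_cf - x0).detach())
```

A multi-objective version, as the paper proposes, would keep these terms as separate objectives and return a Pareto front of counterfactuals rather than a single weighted solution.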
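Finally, for the uncertainty-as-transparency entry, one common way to estimate and act on predictive uncertainty is ensemble disagreement with an abstention rule. The sketch below is a minimal illustration under assumed choices (synthetic data, a bootstrap ensemble of logistic regressions, and an arbitrary abstention threshold), not the review's specific method:

```python
# Sketch: ensemble-based uncertainty with abstention on unsure cases.
# All data, models, and thresholds are synthetic/illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bootstrap ensemble: disagreement across members signals uncertainty.
probs = np.stack([
    LogisticRegression(max_iter=1000)
    .fit(*resample(X_tr, y_tr, random_state=s))
    .predict_proba(X_te)[:, 1]
    for s in range(10)
])
mean_p, std_p = probs.mean(axis=0), probs.std(axis=0)

# Abstain (defer to a clinician) when members disagree too much.
threshold = 0.05  # assumed abstention threshold
confident = std_p < threshold
acc = ((mean_p[confident] > 0.5) == y_te[confident]).mean()
print(f"answered {confident.mean():.0%} of cases, accuracy {acc:.3f}")
print(f"deferred {(~confident).mean():.0%} of cases to human review")
```

Communicating the deferred fraction alongside accuracy is one concrete way uncertainty can serve as a form of transparency in clinical decision support.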
This list is automatically generated from the titles and abstracts of the papers on this site.