Transparent AI: Developing an Explainable Interface for Predicting Postoperative Complications
- URL: http://arxiv.org/abs/2404.16064v1
- Date: Thu, 18 Apr 2024 21:01:27 GMT
- Title: Transparent AI: Developing an Explainable Interface for Predicting Postoperative Complications
- Authors: Yuanfang Ren, Chirayu Tripathi, Ziyuan Guan, Ruilin Zhu, Victoria Hougha, Yingbo Ma, Zhenhong Hu, Jeremy Balch, Tyler J. Loftus, Parisa Rashidi, Benjamin Shickel, Tezcan Ozrazgat-Baslanti, Azra Bihorac
- Abstract summary: We propose an Explainable AI (XAI) framework designed to answer five critical questions: why, why not, how, what if, and what else.
We incorporated techniques such as Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), counterfactual explanations, model cards, an interactive feature manipulation interface, and the identification of similar patients.
We showcased an XAI interface prototype that adheres to this framework for predicting major postoperative complications.
- Score: 1.6609516435725236
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Given the sheer volume of surgical procedures and the significant rate of postoperative fatalities, assessing and managing surgical complications has become a critical public health concern. Existing artificial intelligence (AI) tools for risk surveillance and diagnosis often lack adequate interpretability, fairness, and reproducibility. To address this, we proposed an Explainable AI (XAI) framework designed to answer five critical questions: why, why not, how, what if, and what else, with the goal of enhancing the explainability and transparency of AI models. We incorporated various techniques such as Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), counterfactual explanations, model cards, an interactive feature manipulation interface, and the identification of similar patients to address these questions. We showcased an XAI interface prototype that adheres to this framework for predicting major postoperative complications. This initial implementation has provided valuable insights into the vast explanatory potential of our XAI framework and represents an initial step towards its clinical adoption.
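As a concrete illustration of the LIME component of this framework, the sketch below perturbs a single patient's features, queries a black-box risk model, and fits a proximity-weighted linear surrogate whose coefficients serve as a local explanation. This is a minimal rendition of the general LIME idea, not the paper's implementation; the feature names, synthetic data, and kernel settings are all invented.

```python
# Minimal LIME-style local explanation for a black-box "postoperative risk"
# model. All feature names and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
features = ["age", "bmi", "creatinine", "surgery_duration_h"]  # assumed names
X = rng.normal(size=(1000, 4))
# Synthetic label: complication risk rises with age and surgery duration.
y = (0.8 * X[:, 0] + 0.6 * X[:, 3] + rng.normal(scale=0.5, size=1000)) > 0

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def lime_style_explanation(x, n_samples=5000, kernel_width=0.75):
    """Explain model(x) with a linear surrogate fit on perturbed neighbors."""
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # perturb around x
    probs = model.predict_proba(Z)[:, 1]                     # black-box outputs
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)      # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
    return dict(zip(features, surrogate.coef_))

patient = np.array([1.5, 0.2, -0.3, 2.0])  # a hypothetical high-risk patient
for name, w in lime_style_explanation(patient).items():
    print(f"{name:>20s}: {w:+.3f}")
```

Local attributions of this kind are one natural way to address the framework's "why" question, while counterfactuals and similar-patient retrieval speak to "what if" and "what else".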
Related papers
- Self-eXplainable AI for Medical Image Analysis: A Survey and New Outlooks [9.93411316886105]
Self-eXplainable AI (S-XAI) incorporates explainability directly into the training process of deep learning models.
This survey presents a comprehensive review across various image modalities and clinical applications.
arXiv Detail & Related papers (2024-10-03T09:29:28Z)
- The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI [0.0]
Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
Because AI models operate as "black boxes," with their reasoning obscured and inaccessible, there is an increased risk of misdiagnosis.
This shift towards transparency is not just beneficial -- it's a critical step towards responsible AI integration in healthcare.
arXiv Detail & Related papers (2024-03-23T02:15:23Z)
- Hypergraph-Transformer (HGT) for Interactive Event Prediction in Laparoscopic and Robotic Surgery [50.3022015601057]
We propose a predictive neural network that is capable of understanding and predicting critical interactive aspects of surgical workflow from intra-abdominal video.
We verify our approach on established surgical datasets and applications, including the detection and prediction of action triplets.
Our results demonstrate the superiority of our approach compared to unstructured alternatives.
arXiv Detail & Related papers (2024-02-03T00:58:05Z)
- Interpretable Medical Imagery Diagnosis with Self-Attentive Transformers: A Review of Explainable AI for Health Care [2.7195102129095003]
Vision Transformers (ViT) have emerged as state-of-the-art computer vision models, benefiting from self-attention modules.
Deep-learning models are complex and are often treated as a "black box" that can cause uncertainty regarding how they operate.
This review summarises recent ViT advancements and interpretative approaches to understanding the decision-making process of ViT; a minimal attention-rollout sketch follows this entry.
arXiv Detail & Related papers (2023-09-01T05:01:52Z)
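One common interpretative approach for ViT self-attention is attention rollout (Abnar & Zuidema, 2020), which composes attention matrices across layers to estimate how much the classification token ultimately attends to each image patch. Below is a minimal numpy sketch, under the assumption that head-averaged attention matrices are already available; the shapes and random inputs are illustrative only.

```python
# Attention rollout over per-layer, head-averaged attention matrices.
import numpy as np

def attention_rollout(attentions):
    """attentions: list of (tokens, tokens) row-stochastic matrices, one per layer."""
    tokens = attentions[0].shape[0]
    rollout = np.eye(tokens)
    for A in attentions:
        A = 0.5 * A + 0.5 * np.eye(tokens)      # account for residual connections
        A = A / A.sum(axis=-1, keepdims=True)   # re-normalize rows
        rollout = A @ rollout                   # compose attention across layers
    return rollout

rng = np.random.default_rng(0)
layers, tokens = 12, 197  # e.g. ViT-B/16: 196 patches + [CLS] token (assumed)
attentions = [rng.dirichlet(np.ones(tokens), size=tokens) for _ in range(layers)]

rollout = attention_rollout(attentions)
cls_to_patches = rollout[0, 1:]  # how much [CLS] ultimately attends to each patch
print("top-5 most attended patches:", np.argsort(cls_to_patches)[-5:])
```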
- Deciphering knee osteoarthritis diagnostic features with explainable artificial intelligence: A systematic review [4.918419052486409]
Existing artificial intelligence models for diagnosing knee osteoarthritis (OA) have faced criticism for their lack of transparency and interpretability.
Recently, explainable artificial intelligence (XAI) has emerged as a specialized technique that can provide confidence in the model's prediction.
This paper presents the first survey of XAI techniques used for knee OA diagnosis.
arXiv Detail & Related papers (2023-08-18T08:23:47Z)
- Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges [58.32937972322058]
"Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image (MedAI 2021)" competitions.
We present a comprehensive summary and analyze each contribution, highlight the strength of the best-performing methods, and discuss the possibility of clinical translations of such methods into the clinic.
arXiv Detail & Related papers (2023-07-30T16:08:45Z)
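For context on how such segmentation challenges score submissions, the following is a minimal sketch of the Dice similarity coefficient, a standard overlap metric for polyp segmentation; the random masks are placeholders for real prediction and ground-truth masks.

```python
# Dice similarity coefficient between a predicted and a ground-truth mask.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """pred, target: boolean masks of identical shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

rng = np.random.default_rng(0)
gt = rng.random((256, 256)) > 0.7    # hypothetical ground-truth mask
pred = rng.random((256, 256)) > 0.7  # hypothetical predicted mask
print(f"Dice: {dice_coefficient(pred, gt):.3f}")
```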
- A Brief Review of Explainable Artificial Intelligence in Healthcare [7.844015105790313]
XAI refers to techniques and methods for building AI applications whose outputs and predictions end users can interpret.
Model explainability and interpretability are vital for the successful deployment of AI models in healthcare practice.
arXiv Detail & Related papers (2023-04-04T05:41:57Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case; a sketch of assembling such contextual prompts follows this entry.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
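A hypothetical sketch of what "contextualizing" a risk model's inference might look like in code: assemble the prediction, its main drivers, and the patient's clinical state into a single prompt for an LLM. The field names, template, and example values are invented, and the actual LLM call is deliberately left abstract; this is not the paper's prompting scheme.

```python
# Assemble a model inference plus clinical context into one LLM prompt.
def build_context_prompt(risk: float, top_features: dict, clinical_state: str) -> str:
    drivers = "\n".join(f"- {name}: contribution {w:+.2f}"
                        for name, w in top_features.items())
    return (
        f"A model predicts a comorbidity risk of {risk:.0%} for this patient.\n"
        f"Main drivers of the prediction:\n{drivers}\n"
        f"Current clinical state: {clinical_state}\n"
        "Explain, for a clinician, how this clinical state supports or "
        "weakens the model's reasoning."
    )

prompt = build_context_prompt(
    risk=0.34,
    top_features={"hba1c": +0.21, "egfr": -0.08},  # hypothetical attributions
    clinical_state="type-2 diabetes, stable on metformin, eGFR 55",
)
print(prompt)  # this string would then be sent to the chosen LLM
```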
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology for generating counterfactual explanations; a simplified latent-space search sketch follows this entry.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
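CEILS builds counterfactuals by intervening in a latent representation that encodes causal structure. The sketch below replaces that machinery with PCA as a stand-in latent space and a logistic-regression classifier, purely to illustrate the encode, intervene, decode loop; it is an assumption-laden simplification, not the paper's method.

```python
# Toy latent-space counterfactual: walk in PCA space until the prediction flips.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = LogisticRegression().fit(X, y)
pca = PCA(n_components=3).fit(X)  # stand-in "latent space"

def latent_counterfactual(x, target=1, step=0.1, max_iter=200):
    # direction in latent space that most increases the classifier's logit
    direction = pca.components_ @ clf.coef_.ravel()
    direction /= np.linalg.norm(direction)
    if target == 0:
        direction = -direction
    z = pca.transform(x.reshape(1, -1))
    for _ in range(max_iter):
        x_dec = pca.inverse_transform(z)  # decode back to feature space
        if clf.predict(x_dec)[0] == target:
            return x_dec.ravel()          # prediction flipped: done
        z = z + step * direction          # intervene in latent space
    return None

x0 = X[y == 0][0]  # an instance predicted as the undesired class
cf = latent_counterfactual(x0)
print("original      :", np.round(x0, 2))
print("counterfactual:", None if cf is None else np.round(cf, 2))
```

Searching in latent rather than raw feature space tends to keep the decoded counterfactual on the data manifold, which is the intuition behind feasibility-aware methods like CEILS.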
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources; a minimal text-classification baseline for this task is sketched after this entry.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
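As a rough baseline for outcome prediction from admission notes, the sketch below uses TF-IDF features and logistic regression in place of the paper's pre-trained language model. The toy notes and mortality labels are invented; real work would use de-identified clinical text.

```python
# Bag-of-words baseline for in-hospital mortality prediction from notes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "elderly patient admitted with sepsis, hypotension, rising lactate",
    "young patient admitted for elective knee arthroscopy, stable vitals",
    "admitted with acute heart failure, dyspnea, elevated troponin",
    "routine admission for laparoscopic cholecystectomy, no comorbidities",
]
mortality = [1, 0, 1, 0]  # in-hospital mortality labels (invented)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, mortality)

new_note = "admitted with pneumonia and septic shock, requiring vasopressors"
print("P(in-hospital mortality) =", model.predict_proba([new_note])[0, 1])
```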
This list is automatically generated from the titles and abstracts of the papers on this site.