Self-explaining Neural Network with Plausible Explanations
- URL: http://arxiv.org/abs/2110.04598v1
- Date: Sat, 9 Oct 2021 15:32:17 GMT
- Title: Self-explaining Neural Network with Plausible Explanations
- Authors: Sayantan Kumar, Sean C. Yu, Andrew Michelson, Philip R.O. Payne
- Abstract summary: We propose a novel, self-explaining neural network for longitudinal in-hospital mortality prediction.
We use domain-knowledge driven Sequential Organ Failure Assessment (SOFA) organ-specific scores as the atomic units of explanation.
Our results provide interesting insights into how each of the SOFA organ scores contributes to mortality at different timesteps within the longitudinal patient trajectory.
- Score: 2.724141845301679
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explaining the predictions of complex deep learning models, often referred to
as black boxes, is critical in high-stakes domains like healthcare. However,
post-hoc model explanations are often not understandable to clinicians and are
difficult to integrate into clinical workflows. Further, while most explainable
models use individual clinical variables as units of explanation, human
understanding often relies on higher-level concepts or feature representations.
In this paper, we propose a novel, self-explaining neural network for
longitudinal in-hospital mortality prediction using domain-knowledge driven
Sequential Organ Failure Assessment (SOFA) organ-specific scores as the atomic
units of explanation. We also design a novel procedure to quantitatively
validate the model explanations against gold standard discharge diagnosis
information of patients. Our results provide interesting insights into how each
of the SOFA organ scores contributes to mortality at different timesteps within
the longitudinal patient trajectory.
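The paper itself does not include code, but the core idea of using SOFA organ scores as atomic units of explanation can be illustrated with a small, hedged sketch: a recurrent encoder summarizes the longitudinal organ-score trajectory and emits per-organ relevance weights, and the mortality logit is the weighted sum of those scores, so each organ's contribution at each timestep is directly readable. All names, dimensions, and design choices below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption, not the authors' code): a self-explaining model in
# which organ-specific SOFA scores are the units of explanation. A GRU summarizes
# the patient trajectory and emits per-organ relevance weights; the mortality
# logit is the weighted sum of organ scores, so contributions are readable.
import torch
import torch.nn as nn

SOFA_ORGANS = ["respiration", "coagulation", "liver",
               "cardiovascular", "cns", "renal"]  # six SOFA organ systems

class SofaSelfExplainer(nn.Module):
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        n = len(SOFA_ORGANS)
        self.encoder = nn.GRU(input_size=n, hidden_size=hidden_size, batch_first=True)
        self.relevance = nn.Linear(hidden_size, n)    # one relevance weight per organ
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, sofa_scores: torch.Tensor):
        # sofa_scores: (batch, time, 6) organ-specific SOFA scores
        h, _ = self.encoder(sofa_scores)              # (batch, time, hidden)
        theta = self.relevance(h)                     # (batch, time, 6) relevance weights
        contributions = theta * sofa_scores           # per-organ, per-timestep contributions
        logit = contributions.sum(dim=-1) + self.bias # (batch, time) mortality logit
        return logit, contributions                   # contributions are the explanation

# Usage: inspect how each organ score contributes at each timestep.
model = SofaSelfExplainer()
x = torch.rand(2, 48, len(SOFA_ORGANS)) * 4           # 48 hourly steps, scores in [0, 4]
logit, contributions = model(x)
print(logit.shape, contributions.shape)               # (2, 48) and (2, 48, 6)
```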
Related papers
- MERA: Multimodal and Multiscale Self-Explanatory Model with Considerably Reduced Annotation for Lung Nodule Diagnosis [6.323883478440015]
Lung cancer is a leading cause of cancer-related deaths globally, underscoring the importance of early detection for better patient outcomes.
Despite advances in Explainable Artificial Intelligence (XAI), many existing systems struggle to provide clear, comprehensive explanations.
This study introduces MERA, a Multimodal and Multiscale self-Explanatory model designed for lung nodule diagnosis.
arXiv Detail & Related papers (2025-04-27T20:48:34Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
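A hedged sketch of the concept-bottleneck pattern described in this entry: image features are scored against concept text embeddings, and a linear head classifies over the resulting concept scores. The embeddings and concept bank below are placeholders standing in for the paper's actual vision-language components.

```python
# Rough sketch (assumption, not the paper's code) of a concept bottleneck:
# image features are scored against clinical-concept text embeddings, and a
# linear layer over the interpretable concept scores makes the prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptBottleneckHead(nn.Module):
    def __init__(self, concept_embeddings: torch.Tensor, num_classes: int):
        super().__init__()
        # concept_embeddings: (num_concepts, dim) text embeddings of concepts,
        # e.g. elicited from a language model (random placeholders here).
        self.register_buffer("concepts", F.normalize(concept_embeddings, dim=-1))
        self.classifier = nn.Linear(concept_embeddings.shape[0], num_classes)

    def forward(self, image_features: torch.Tensor):
        image_features = F.normalize(image_features, dim=-1)
        concept_scores = image_features @ self.concepts.T   # (batch, num_concepts)
        return self.classifier(concept_scores), concept_scores

# Usage with random embeddings standing in for a vision-language model.
concept_bank = torch.randn(20, 512)                  # 20 hypothetical clinical concepts
head = ConceptBottleneckHead(concept_bank, num_classes=2)
logits, scores = head(torch.randn(4, 512))           # scores are the interpretable layer
```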
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Evaluation of Human-Understandability of Global Model Explanations using Decision Tree [8.263545324859969]
We generate model explanations that are narrative, patient-specific and global.
We find a strong individual preference for a specific type of explanation.
This guides the design of health informatics systems that are both trustworthy and actionable.
arXiv Detail & Related papers (2023-09-18T16:30:14Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images [52.527733226555206]
We investigate the use of four attribution methods to explain multiple instance learning models.
We study two datasets of acute myeloid leukemia with over 100,000 single cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard.
arXiv Detail & Related papers (2023-03-15T14:00:11Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- Review of Disentanglement Approaches for Medical Applications -- Towards Solving the Gordian Knot of Generative Models in Healthcare [3.5586630313792513]
We give a comprehensive overview of popular generative models, like Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs) and Flow-based Models.
After introducing the theoretical frameworks, we give an overview of recent medical applications and discuss the impact and importance of disentanglement approaches for medical applications.
arXiv Detail & Related papers (2022-03-21T17:06:22Z)
- Using Causal Analysis for Conceptual Deep Learning Explanation [11.552000005640203]
An ideal explanation resembles the decision-making process of a domain expert.
We take advantage of radiology reports accompanying chest X-ray images to define concepts.
We construct a low-depth decision tree to translate all the discovered concepts into a straightforward decision rule.
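This entry ends by distilling discovered concepts into a low-depth decision rule; the following is a minimal sketch of that distillation step using scikit-learn, with made-up concept names, activations, and labels standing in for the model's actual decisions.

```python
# Minimal sketch (assumption, not the paper's pipeline): distill behaviour over
# discovered concepts into a low-depth, human-readable decision rule.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
concepts = ["cardiomegaly", "effusion", "edema"]      # hypothetical concept names
X = rng.integers(0, 2, size=(200, len(concepts)))     # concept present/absent per image
y = (X[:, 0] & X[:, 1]) | X[:, 2]                     # stand-in for the model's decisions

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)  # low depth keeps the rule readable
print(export_text(tree, feature_names=concepts))      # straightforward if/else rule
```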
arXiv Detail & Related papers (2021-07-10T00:01:45Z)
- Integrating Expert ODEs into Neural ODEs: Pharmacology and Disease Progression [71.7560927415706]
The latent hybridisation model (LHM) integrates a system of expert-designed ODEs with machine-learned Neural ODEs to fully describe the dynamics of the system.
We evaluate LHM on synthetic data as well as real-world intensive care data of COVID-19 patients.
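The LHM entry above combines mechanistic, expert-designed dynamics with learned latent dynamics; below is a toy sketch of that hybrid pattern under strong simplifying assumptions (a made-up linear expert ODE, a small neural correction, and plain Euler integration), not the published model.

```python
# Toy sketch (assumption, not the published LHM): an expert-designed ODE for the
# observable state is coupled to learned neural dynamics on a latent state and
# integrated with a simple Euler scheme to keep the example self-contained.
import torch
import torch.nn as nn

class HybridODE(nn.Module):
    def __init__(self, latent_dim: int = 4, decay: float = 0.1):
        super().__init__()
        self.decay = decay                                     # expert parameter (placeholder)
        self.neural_dynamics = nn.Sequential(
            nn.Linear(latent_dim + 1, 32), nn.Tanh(), nn.Linear(32, latent_dim))
        self.coupling = nn.Linear(latent_dim, 1)               # latent state drives expert state

    def forward(self, x0, z0, steps: int = 50, dt: float = 0.1):
        x, z, trajectory = x0, z0, []
        for _ in range(steps):
            dx = -self.decay * x + self.coupling(z)            # expert ODE + learned coupling
            dz = self.neural_dynamics(torch.cat([z, x], dim=-1))  # learned latent dynamics
            x, z = x + dt * dx, z + dt * dz                    # Euler step
            trajectory.append(x)
        return torch.stack(trajectory, dim=1)                  # (batch, steps, 1)

model = HybridODE()
print(model(torch.ones(3, 1), torch.zeros(3, 4)).shape)        # torch.Size([3, 50, 1])
```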
arXiv Detail & Related papers (2021-06-05T11:42:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.