Can Attention Be Used to Explain EHR-Based Mortality Prediction Tasks: A
Case Study on Hemorrhagic Stroke
- URL: http://arxiv.org/abs/2308.05110v1
- Date: Fri, 4 Aug 2023 04:28:07 GMT
- Authors: Qizhang Feng, Jiayi Yuan, Forhan Bin Emdad, Karim Hanna, Xia Hu, Zhe
He
- Abstract summary: Stroke is a significant cause of mortality and morbidity, necessitating early predictive strategies to minimize risks.
Traditional methods for evaluating patients have limited accuracy and interpretability.
This paper proposes an interpretable, attention-based transformer model for early stroke mortality prediction.
- Score: 33.08002675910282
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stroke is a significant cause of mortality and morbidity, necessitating early
predictive strategies to minimize risks. Traditional methods for evaluating
patients, such as Acute Physiology and Chronic Health Evaluation (APACHE II,
IV) and Simplified Acute Physiology Score III (SAPS III), have limited accuracy
and interpretability. This paper proposes a novel approach: an interpretable,
attention-based transformer model for early stroke mortality prediction. This
model seeks to address the limitations of previous predictive models, providing
both interpretability (providing clear, understandable explanations of the
model) and fidelity (giving a truthful explanation of the model's dynamics from
input to output). Furthermore, the study explores and compares fidelity and
interpretability scores using Shapley values and attention-based scores to
improve model explainability. The research objectives include designing an
interpretable attention-based transformer model, evaluating its performance
compared to existing models, and providing feature importance derived from the
model.
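The abstract's comparison of Shapley values and attention-based scores can be illustrated with a toy sketch. This is not the paper's code: the features, weights, and attention logits below are hypothetical, the "model" is a simple additive risk score rather than a transformer, and exact Shapley values are computed by brute-force subset enumeration (feasible only for a handful of features).

```python
import itertools
import math

# Hypothetical additive risk "model" over three illustrative features.
# (Stand-in for the paper's transformer; names and weights are invented.)
WEIGHTS = {"age": 0.6, "gcs": -0.9, "glucose": 0.3}
FEATURES = list(WEIGHTS)

def model(present):
    """Risk score using only the features in `present`; absent features contribute 0."""
    return sum(WEIGHTS[f] for f in present)

def shapley_values():
    """Exact Shapley values by enumerating every subset of the other features."""
    n = len(FEATURES)
    phi = {}
    for f in FEATURES:
        others = [g for g in FEATURES if g != f]
        total = 0.0
        for r in range(len(others) + 1):
            for subset in itertools.combinations(others, r):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                total += w * (model(set(subset) | {f}) - model(set(subset)))
        phi[f] = total
    return phi

def attention_scores():
    """Softmax over hypothetical attention logits, one per feature."""
    logits = {"age": 1.2, "gcs": 2.0, "glucose": 0.1}
    z = sum(math.exp(v) for v in logits.values())
    return {f: math.exp(v) / z for f, v in logits.items()}

phi = shapley_values()
attn = attention_scores()
```

Note the structural difference the abstract's fidelity/interpretability comparison hinges on: Shapley values are signed and sum to `model(all) - model(none)` (an efficiency guarantee), while attention scores are non-negative and sum to 1, so they rank features but do not decompose the prediction.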
Related papers
- PRECISe : Prototype-Reservation for Explainable Classification under Imbalanced and Scarce-Data Settings [0.0]
PRECISe is an explainable-by-design model constructed to address the three challenges of explainability, class imbalance, and data scarcity.
PRECISe outperforms current state-of-the-art methods on data-efficient generalization to minority classes.
A case study is presented to highlight the model's ability to produce easily interpretable predictions.
arXiv Detail & Related papers (2024-08-11T12:05:32Z)
- Deep State-Space Generative Model For Correlated Time-to-Event Predictions [54.3637600983898]
We propose a deep latent state-space generative model to capture the interactions among different types of correlated clinical events.
Our method also uncovers meaningful insights about the latent correlations among mortality and different types of organ failures.
arXiv Detail & Related papers (2024-07-28T02:42:36Z)
- A Comparative Analysis of Machine Learning Models for Early Detection of Hospital-Acquired Infections [0.0]
Infection Risk Index (IRI) and the Ventilator-Associated Pneumonia (VAP) prediction model were compared.
The IRI model was built to predict all HAIs, whereas the VAP model identifies patients at risk of developing ventilator-associated pneumonia.
arXiv Detail & Related papers (2023-11-15T19:36:12Z)
- Interpretable Survival Analysis for Heart Failure Risk Prediction [50.64739292687567]
We propose a novel survival analysis pipeline that is both interpretable and competitive with state-of-the-art survival models.
Our pipeline achieves state-of-the-art performance and provides interesting and novel insights about risk factors for heart failure.
arXiv Detail & Related papers (2023-10-24T02:56:05Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- MedDiffusion: Boosting Health Risk Prediction via Diffusion-based Data Augmentation [58.93221876843639]
This paper introduces a novel, end-to-end diffusion-based risk prediction model, named MedDiffusion.
It enhances risk prediction performance by creating synthetic patient data during training to enlarge sample space.
It discerns hidden relationships between patient visits using a step-wise attention mechanism, enabling the model to automatically retain the most vital information for generating high-quality data.
arXiv Detail & Related papers (2023-10-04T01:36:30Z)
- Explainability of Traditional and Deep Learning Models on Longitudinal Healthcare Records [0.0]
Rigorous evaluation of explainability is often missing, as comparisons between models and various explainability methods have not been well-studied.
Our work is one of the first to evaluate explainability performance between and within traditional (XGBoost) and deep learning (LSTM with Attention) models on both a global and individual per-prediction level.
arXiv Detail & Related papers (2022-11-22T04:39:17Z)
- Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
arXiv Detail & Related papers (2022-06-16T17:59:05Z)
- MIMIC-IF: Interpretability and Fairness Evaluation of Deep Learning Models on MIMIC-IV Dataset [15.436560770086205]
We focus on MIMIC-IV (Medical Information Mart for Intensive Care, version IV), the largest publicly available healthcare dataset.
We conduct comprehensive analyses of dataset representation bias as well as interpretability and prediction fairness of deep learning models for in-hospital mortality prediction.
arXiv Detail & Related papers (2021-02-12T20:28:06Z)
- Building Deep Learning Models to Predict Mortality in ICU Patients [0.0]
We propose several deep learning models using the same features as the SAPS II score.
Several experiments have been conducted on the well-known clinical dataset Medical Information Mart for Intensive Care III (MIMIC-III).
arXiv Detail & Related papers (2020-12-11T16:27:04Z)
- A General Framework for Survival Analysis and Multi-State Modelling [70.31153478610229]
We use neural ordinary differential equations as a flexible and general method for estimating multi-state survival models.
We show that our model exhibits state-of-the-art performance on popular survival data sets and demonstrate its efficacy in a multi-state setting.
arXiv Detail & Related papers (2020-06-08T19:24:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.