XAI for In-hospital Mortality Prediction via Multimodal ICU Data
- URL: http://arxiv.org/abs/2312.17624v1
- Date: Fri, 29 Dec 2023 14:28:04 GMT
- Title: XAI for In-hospital Mortality Prediction via Multimodal ICU Data
- Authors: Xingqiao Li, Jindong Gu, Zhiyong Wang, Yancheng Yuan, Bo Du, and
Fengxiang He
- Abstract summary: We propose an efficient, explainable AI solution for predicting in-hospital mortality via multimodal ICU data.
We employ multimodal learning in our framework, which can receive heterogeneous inputs from clinical data and make decisions.
Our framework can be easily transferred to other clinical tasks, which facilitates the discovery of crucial factors in healthcare research.
- Score: 57.73357047856416
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predicting in-hospital mortality for intensive care unit (ICU) patients is
key to improving clinical outcomes. AI models have shown strong predictive accuracy but suffer
from a lack of explainability. To address this issue, this paper proposes an
eXplainable Multimodal Mortality Predictor (X-MMP), an efficient,
explainable AI solution for predicting in-hospital mortality via multimodal ICU
data. We employ multimodal learning in our framework, which can receive
heterogeneous inputs from clinical data and make decisions. Furthermore, we
introduce an explainable method, namely Layer-Wise Propagation to Transformer, which
extends the layer-wise relevance propagation (LRP) method to Transformers, producing explanations
over multimodal inputs and revealing the salient features that contribute to each
prediction. Moreover, the contribution of each modality to clinical outcomes
can be visualized, assisting clinicians in understanding the reasoning behind
decision-making. We construct a multimodal dataset based on MIMIC-III and the
MIMIC-III Waveform Database Matched Subset. Comprehensive experiments on
benchmark datasets demonstrate that our proposed framework can achieve
reasonable interpretation with competitive prediction accuracy. In particular,
our framework can be easily transferred to other clinical tasks, which
facilitates the discovery of crucial factors in healthcare research.
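As a rough illustration of the relevance propagation the abstract refers to (LRP-style rules applied layer by layer to attribute a prediction back to multimodal inputs), the sketch below applies a minimal epsilon-stabilized LRP rule to two linear layers in NumPy. The function name `lrp_linear`, the toy split into tabular and waveform feature groups, and the epsilon rule itself are illustrative assumptions, not the authors' X-MMP implementation or their Layer-Wise Propagation to Transformer method.

```python
# Minimal epsilon-LRP sketch (illustrative only; not the X-MMP code).
# Relevance starts at the output score and is redistributed backwards
# through each linear layer onto its inputs.
import numpy as np

def lrp_linear(a, W, b, R_out, eps=1e-6):
    """Redistribute relevance R_out of layer z = a @ W + b onto inputs a
    using the epsilon-stabilized LRP rule."""
    z = a @ W + b                                   # forward pre-activations
    s = R_out / (z + np.where(z >= 0, eps, -eps))   # stabilized ratio per output unit
    return a * (s @ W.T)                            # relevance assigned to each input

rng = np.random.default_rng(0)

# Toy "multimodal" input: 4 tabular features and 6 waveform features,
# concatenated and passed through a two-layer network.
x = rng.normal(size=10)
W1, b1 = rng.normal(size=(10, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

h = np.maximum(x @ W1 + b1, 0.0)   # hidden layer (ReLU)
logit = h @ W2 + b2                # mortality logit (scalar)

# Backward relevance pass: output score -> hidden layer -> inputs.
R_h = lrp_linear(h, W2, b2, logit)
R_x = lrp_linear(x, W1, b1, R_h)

# Per-modality contribution = sum of input relevances in that modality's slice;
# this is the kind of quantity one would visualize per modality.
print("tabular contribution :", R_x[:4].sum())
print("waveform contribution:", R_x[4:].sum())
print("conservation check   :", R_x.sum(), "vs logit", logit.item())
```

In the paper's setting the same idea is extended to Transformer blocks (attention and residual connections) rather than plain linear layers; the sketch only shows the basic redistribution step and the per-modality aggregation.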
Related papers
- Reasoning-Enhanced Healthcare Predictions with Knowledge Graph Community Retrieval [61.70489848327436]
KARE is a novel framework that integrates knowledge graph (KG) community-level retrieval with large language models (LLMs) reasoning.
Extensive experiments demonstrate that KARE outperforms leading models by up to 10.8-15.0% on MIMIC-III and 12.6-12.7% on MIMIC-IV for mortality and readmission predictions.
arXiv Detail & Related papers (2024-10-06T18:46:28Z)
- DrFuse: Learning Disentangled Representation for Clinical Multi-Modal Fusion with Missing Modality and Modal Inconsistency [18.291267748113142]
We propose DrFuse to achieve effective clinical multi-modal fusion.
We address the missing modality issue by disentangling the features shared across modalities and those unique within each modality.
We validate the proposed method using real-world large-scale datasets, MIMIC-IV and MIMIC-CXR.
arXiv Detail & Related papers (2024-03-10T12:41:34Z)
- Multimodal Clinical Trial Outcome Prediction with Large Language Models [30.201189349890267]
We propose a multimodal mixture-of-experts (LIFTED) approach for clinical trial outcome prediction.
LIFTED unifies different modality data by transforming them into natural language descriptions.
Then, LIFTED constructs unified noise-resilient encoders to extract information from modal-specific language descriptions.
arXiv Detail & Related papers (2024-02-09T16:18:38Z)
- Cross-modality Attention-based Multimodal Fusion for Non-small Cell Lung Cancer (NSCLC) Patient Survival Prediction [0.6476298550949928]
We propose a cross-modality attention-based multimodal fusion pipeline designed to integrate modality-specific knowledge for patient survival prediction in non-small cell lung cancer (NSCLC); a generic cross-attention fusion sketch appears after this list.
Compared with single-modality models, which achieved c-indices of 0.5772 and 0.5885 using solely tissue image data or RNA-seq data, respectively, the proposed fusion approach achieved a c-index of 0.6587 in our experiment.
arXiv Detail & Related papers (2023-08-18T21:42:52Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- SPeC: A Soft Prompt-Based Calibration on Performance Variability of Large Language Model in Clinical Notes Summarization [50.01382938451978]
We introduce a model-agnostic pipeline that employs soft prompts to diminish variance while preserving the advantages of prompt-based summarization.
Experimental findings indicate that our method not only bolsters performance but also effectively curbs variance for various language models.
arXiv Detail & Related papers (2023-03-23T04:47:46Z)
- Multimodal Explainability via Latent Shift applied to COVID-19 stratification [0.7831774233149619]
We present a deep architecture, which jointly learns modality reconstructions and sample classifications.
We validate our approach in the context of COVID-19 pandemic using the AIforCOVID dataset.
arXiv Detail & Related papers (2022-12-28T20:07:43Z)
- MRI-based Alzheimer's disease prediction via distilling the knowledge in multi-modal data [0.0]
We propose a multi-modal multi-instance distillation scheme, which aims to distill the knowledge learned from multi-modal data to an MRI-based network for MCI conversion prediction.
To our best knowledge, this is the first study that attempts to improve an MRI-based prediction model by leveraging extra supervision distilled from multi-modal information.
arXiv Detail & Related papers (2021-04-08T09:06:39Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
- Inheritance-guided Hierarchical Assignment for Clinical Automatic Diagnosis [50.15205065710629]
Clinical diagnosis, which aims to assign diagnosis codes for a patient based on the clinical note, plays an essential role in clinical decision-making.
We propose a novel framework to combine the inheritance-guided hierarchical assignment and co-occurrence graph propagation for clinical automatic diagnosis.
arXiv Detail & Related papers (2021-01-27T13:16:51Z)
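For the cross-modality attention fusion idea mentioned in the NSCLC entry above, the following sketch shows one generic way two modalities can attend to each other before a pooled, concatenated representation feeds a linear risk head. All names, shapes, and the single-head attention are illustrative assumptions, not the cited paper's architecture.

```python
# Generic cross-modality attention fusion sketch (illustrative only).
import numpy as np

def cross_attention(query_feats, context_feats, d_k=16, seed=0):
    """Let one modality (query tokens) attend over the other (context tokens)."""
    rng = np.random.default_rng(seed)
    Wq = rng.normal(size=(query_feats.shape[-1], d_k))
    Wk = rng.normal(size=(context_feats.shape[-1], d_k))
    Wv = rng.normal(size=(context_feats.shape[-1], d_k))
    Q, K, V = query_feats @ Wq, context_feats @ Wk, context_feats @ Wv
    scores = Q @ K.T / np.sqrt(d_k)                  # attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over context tokens
    return weights @ V                               # context summarized per query token

rng = np.random.default_rng(1)
image_tokens = rng.normal(size=(8, 32))  # e.g. 8 tissue-patch embeddings
rna_tokens = rng.normal(size=(4, 64))    # e.g. 4 RNA-seq feature groups

# Each modality attends to the other; the attended features are pooled,
# concatenated, and scored by a linear risk head.
img_ctx = cross_attention(image_tokens, rna_tokens)
rna_ctx = cross_attention(rna_tokens, image_tokens)
fused = np.concatenate([img_ctx.mean(axis=0), rna_ctx.mean(axis=0)])

w_risk = rng.normal(size=fused.shape[0])
risk_score = float(fused @ w_risk)       # higher score = higher predicted risk
print("fused dim:", fused.shape[0], "risk score:", round(risk_score, 3))
```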