Using Explainable AI to Cross-Validate Socio-economic Disparities Among
Covid-19 Patient Mortality
- URL: http://arxiv.org/abs/2302.08605v1
- Date: Thu, 16 Feb 2023 22:09:05 GMT
- Title: Using Explainable AI to Cross-Validate Socio-economic Disparities Among
Covid-19 Patient Mortality
- Authors: Li Shi, Redoan Rahman, Esther Melamed, Jacek Gwizdka, Justin F.
Rousseau, Ying Ding
- Abstract summary: This paper applies XAI methods to investigate the socioeconomic disparities in COVID patient mortality.
XAI models reveal that Medicare financial class, older age, and gender have a high impact on the mortality prediction.
- Score: 7.897897974226182
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper applies eXplainable Artificial Intelligence (XAI) methods to
investigate socioeconomic disparities in COVID-19 patient mortality. An
Extreme Gradient Boosting (XGBoost) prediction model is built on a
de-identified Austin-area hospital dataset to predict the mortality of COVID-19
patients. We apply two XAI methods, SHapley Additive exPlanations (SHAP) and
Local Interpretable Model-agnostic Explanations (LIME), to compare global
and local interpretations of feature importance. This paper demonstrates the
advantages of XAI, which exposes both feature importance and the model's
decisive capability. Furthermore, we use the two XAI methods to cross-validate
their interpretations for individual patients. The XAI models reveal that Medicare
financial class, older age, and gender have a high impact on the mortality
prediction. We find that LIME's local interpretations do not differ significantly
in feature importance from SHAP's, which suggests pattern confirmation. This
paper demonstrates the importance of XAI methods for cross-validating feature
attributions.
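The workflow the abstract describes (train an XGBoost classifier, then compare SHAP and LIME attributions per patient) maps directly onto the standard xgboost, shap, and lime Python libraries. The sketch below is a minimal, hypothetical illustration rather than the authors' code: the synthetic features, the label construction, and the rank-agreement check are all assumptions standing in for the de-identified Austin-area hospital data and the paper's actual cross-validation protocol.
```python
# Minimal sketch: XGBoost mortality model + SHAP/LIME attribution comparison.
# All feature names and data below are synthetic stand-ins, not the paper's data.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from lime.lime_tabular import LimeTabularExplainer
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the de-identified hospital dataset.
feature_names = ["age", "gender", "financial_class_medicare", "bmi", "length_of_stay"]
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(1000, len(feature_names))), columns=feature_names)
y = (X["age"] + 0.5 * X["financial_class_medicare"]
     + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Mortality prediction model (XGBoost, as in the paper).
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

# Global and local attributions with SHAP; TreeExplainer is exact for tree ensembles.
shap_values = shap.TreeExplainer(model).shap_values(X_test)

# Local attribution with LIME for a single patient.
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=feature_names,
    class_names=["survived", "died"], discretize_continuous=True)
i = 0  # index of the patient to explain
lime_exp = lime_explainer.explain_instance(
    X_test.values[i],
    lambda a: model.predict_proba(pd.DataFrame(a, columns=feature_names)),
    num_features=len(feature_names))

# Cross-validate the two local explanations by comparing feature rankings.
shap_order = [feature_names[j] for j in np.argsort(-np.abs(shap_values[i]))]
lime_order = [desc for desc, _ in lime_exp.as_list()]  # sorted by |weight|
print("SHAP ranking:", shap_order)
print("LIME ranking:", lime_order)
```
Comparing the magnitude-ordered SHAP vector against LIME's weight-sorted feature list for the same patient is one simple way to check whether the two attribution methods tell a consistent story, which is the kind of pattern confirmation the abstract reports.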
Related papers
- XGBoost-Based Prediction of ICU Mortality in Sepsis-Associated Acute Kidney Injury Patients Using MIMIC-IV Database with Validation from eICU Database [0.0]
Sepsis-Associated Acute Kidney Injury (SA-AKI) leads to high mortality in intensive care.
This study develops machine learning models to predict Intensive Care Unit (ICU) mortality in SA-AKI patients.
arXiv Detail & Related papers (2025-02-25T08:49:22Z)
- IBO: Inpainting-Based Occlusion to Enhance Explainable Artificial Intelligence Evaluation in Histopathology [1.9440228513607511]
Inpainting-Based Occlusion (IBO) is a novel strategy that utilizes a Denoising Diffusion Probabilistic Model to inpaint occluded regions.
We evaluate IBO in two phases: first by assessing perceptual similarity using the Learned Perceptual Image Patch Similarity (LPIPS) metric, and second by quantifying the impact on model predictions through Area Under the Curve (AUC) analysis.
arXiv Detail & Related papers (2024-08-29T09:57:55Z)
- MedDiffusion: Boosting Health Risk Prediction via Diffusion-based Data Augmentation [58.93221876843639]
This paper introduces a novel, end-to-end, diffusion-based risk prediction model named MedDiffusion.
It enhances risk prediction performance by creating synthetic patient data during training to enlarge the sample space.
It discerns hidden relationships between patient visits using a step-wise attention mechanism, enabling the model to automatically retain the most vital information for generating high-quality data.
arXiv Detail & Related papers (2023-10-04T01:36:30Z)
- Evaluation of Popular XAI Applied to Clinical Prediction Models: Can They be Trusted? [2.0089256058364358]
The absence of transparency and explainability hinders the clinical adoption of machine learning (ML) algorithms.
This study evaluates two popular XAI methods used to explain predictive models in the healthcare context.
arXiv Detail & Related papers (2023-06-21T02:29:30Z)
- An Experimental Investigation into the Evaluation of Explainability Methods [60.54170260771932]
This work compares 14 different metrics applied to nine state-of-the-art XAI methods and three dummy methods (e.g., random saliency maps) used as references.
Experimental results show which of these metrics produce highly correlated results, indicating potential redundancy.
arXiv Detail & Related papers (2023-05-25T08:07:07Z)
- Analysis and Evaluation of Explainable Artificial Intelligence on Suicide Risk Assessment [32.04382293817763]
This study investigates the effectiveness of Explainable Artificial Intelligence (XAI) techniques in predicting suicide risk.
Data augmentation techniques and ML models are utilized to predict the associated risk.
Patients with good incomes, respected occupations, and university education have the least risk.
arXiv Detail & Related papers (2023-03-09T05:11:46Z)
- Towards Trust of Explainable AI in Thyroid Nodule Diagnosis [0.0]
We apply state-of-the-art eXplainable Artificial Intelligence (XAI) methods to explain the predictions of black-box AI models in the thyroid nodule diagnosis application.
We propose new statistics-based XAI methods, namely Kernel Density Estimation and Density map, to explain the case of no nodule detected.
arXiv Detail & Related papers (2023-03-08T17:18:13Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
- Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features [47.45835732009979]
Neural networks have demonstrated remarkable performance in classification and regression tasks on chest X-rays.
Feature attribution methods identify the importance of input features for the output prediction.
We evaluate our methods using both human-centric (ground-truth-based) interpretability metrics and human-independent feature importance metrics on the NIH Chest X-ray8 and BrixIA datasets.
arXiv Detail & Related papers (2021-04-01T11:42:39Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Gradient Boosting on Decision Trees for Mortality Prediction in Transcatheter Aortic Valve Implantation [5.050648346154715]
Current prognostic risk scores in cardiac surgery are based on statistics and do not yet benefit from machine learning.
This research aims to create a machine learning model to predict one-year mortality of a patient after TAVI.
arXiv Detail & Related papers (2020-01-08T10:04:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.