MIMIC-IF: Interpretability and Fairness Evaluation of Deep Learning
Models on MIMIC-IV Dataset
- URL: http://arxiv.org/abs/2102.06761v1
- Date: Fri, 12 Feb 2021 20:28:06 GMT
- Title: MIMIC-IF: Interpretability and Fairness Evaluation of Deep Learning
Models on MIMIC-IV Dataset
- Authors: Chuizheng Meng, Loc Trinh, Nan Xu, Yan Liu
- Abstract summary: We focus on MIMIC-IV (Medical Information Mart for Intensive Care, version IV), the largest publicly available healthcare dataset.
We conduct comprehensive analyses of dataset representation bias as well as interpretability and prediction fairness of deep learning models for in-hospital mortality prediction.
- Score: 15.436560770086205
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recent release of large-scale healthcare datasets has greatly propelled
the research of data-driven deep learning models for healthcare applications.
However, due to the nature of such black-box deep models, concerns about
interpretability, fairness, and biases in healthcare scenarios where human
lives are at stake call for careful and thorough examination of both
datasets and models. In this work, we focus on MIMIC-IV (Medical Information
Mart for Intensive Care, version IV), the largest publicly available healthcare
dataset, and conduct comprehensive analyses of dataset representation bias as
well as interpretability and prediction fairness of deep learning models for
in-hospital mortality prediction. In terms of interpretability, we observe that
(1) the best performing interpretability method successfully identifies
critical features for mortality prediction on various prediction models; (2)
demographic features are important for prediction. In terms of fairness, we
observe that (1) there exists disparate treatment in prescribing mechanical
ventilation among patient groups across ethnicity, gender and age; (2) all of
the studied mortality predictors are generally fair while the IMV-LSTM
(Interpretable Multi-Variable Long Short-Term Memory) model provides the most
accurate and unbiased predictions across all protected groups. We further draw
concrete connections between interpretability methods and fairness metrics by
showing how feature importance from interpretability methods can be beneficial
in quantifying potential disparities in mortality predictors.
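As a hedged illustration of that last point, the sketch below trains a generic classifier on synthetic data, measures how much an encoded demographic feature matters via permutation importance, and then compares true-positive rates across the groups defined by that feature (an equal-opportunity-style gap). The column names, the gradient-boosting model, and the random data are assumptions for illustration; this is not the paper's pipeline or the IMV-LSTM model.

```python
# Minimal, assumption-laden sketch (not the paper's code): link a mortality
# model's reliance on a protected attribute to a per-group fairness gap.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "heart_rate": rng.normal(85, 15, n),   # hypothetical vital sign
    "lactate": rng.gamma(2.0, 1.0, n),     # hypothetical lab value
    "ethnicity": rng.integers(0, 3, n),    # encoded protected attribute
    "mortality": rng.integers(0, 2, n),    # binary in-hospital mortality label
})

X, y = df.drop(columns="mortality"), df["mortality"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Interpretability side: how much does each feature (including the protected
# attribute) contribute to held-out performance?
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
print(dict(zip(X.columns, imp.importances_mean.round(4))))

# Fairness side: true-positive rate per protected group; a large spread is an
# equal-opportunity-style disparity.
res = pd.DataFrame({"ethnicity": X_te["ethnicity"].values,
                    "y_true": y_te.values,
                    "y_pred": model.predict(X_te)})
tpr_by_group = res[res["y_true"] == 1].groupby("ethnicity")["y_pred"].mean()
print(tpr_by_group)
```

On random data these numbers are meaningless; the point is only the bookkeeping that connects a feature-importance score for a protected attribute with a per-group error-rate gap.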
Related papers
- Deep State-Space Generative Model For Correlated Time-to-Event Predictions [54.3637600983898]
We propose a deep latent state-space generative model to capture the interactions among different types of correlated clinical events.
Our method also uncovers meaningful insights about the latent correlations between mortality and different types of organ failure.
arXiv Detail & Related papers (2024-07-28T02:42:36Z)
- Interpretable Prediction and Feature Selection for Survival Analysis [18.987678432106563]
We present DyS (pronounced "dice"), a new survival analysis model that achieves both strong discrimination and interpretability.
DyS is a feature-sparse Generalized Additive Model, combining feature selection and interpretable prediction into one model.
arXiv Detail & Related papers (2024-04-23T02:36:54Z)
- Explainable AI for Fair Sepsis Mortality Predictive Model [3.556697333718976]
We propose a method that learns a performance-optimized predictive model and employs the transfer learning process to produce a model with better fairness.
Our method not only aids in identifying and mitigating biases within the predictive model but also fosters trust among healthcare stakeholders.
arXiv Detail & Related papers (2024-04-19T18:56:46Z)
- Evaluating the Fairness of the MIMIC-IV Dataset and a Baseline Algorithm: Application to the ICU Length of Stay Prediction [65.268245109828]
This paper uses the MIMIC-IV dataset to examine the fairness and bias in an XGBoost binary classification model predicting the ICU length of stay.
The research reveals class imbalances in the dataset across demographic attributes and employs data preprocessing and feature extraction.
The paper concludes with recommendations for fairness-aware machine learning techniques to mitigate biases and calls for collaborative efforts between healthcare professionals and data scientists.
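As a hedged sketch of the kind of check described above (representation and label imbalance across demographic attributes before fitting a length-of-stay classifier), one could start with a group-by summary like the following; the column names and toy rows are assumptions, not the paper's actual MIMIC-IV preprocessing.

```python
# Minimal sketch with an assumed schema (not the paper's code): quantify
# representation bias and per-group label imbalance for a binary
# "long ICU stay" target before training an XGBoost-style classifier.
import pandas as pd

# Toy stand-in for a cohort table derived from MIMIC-IV.
df = pd.DataFrame({
    "ethnicity": ["WHITE", "WHITE", "BLACK", "ASIAN", "BLACK", "WHITE"],
    "gender":    ["F", "M", "F", "M", "M", "F"],
    "long_stay": [0, 1, 0, 0, 1, 1],   # 1 = ICU stay above the cohort median
})

for attr in ["ethnicity", "gender"]:
    summary = df.groupby(attr)["long_stay"].agg(["size", "mean"])
    summary.columns = ["n_patients", "positive_rate"]  # size -> representation, mean -> imbalance
    print(summary, end="\n\n")
```

Per-group model metrics (for example, AUC computed separately on each demographic slice) would then complete the fairness audit.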
arXiv Detail & Related papers (2023-12-31T16:01:48Z)
- A Knowledge Distillation Approach for Sepsis Outcome Prediction from Multivariate Clinical Time Series [2.621671379723151]
We use knowledge distillation via constrained variational inference to distill the knowledge of a powerful "teacher" neural network model.
We train a "student" latent variable model to learn interpretable hidden state representations to achieve high predictive performance for sepsis outcome prediction.
arXiv Detail & Related papers (2023-11-16T05:06:51Z)
- MedDiffusion: Boosting Health Risk Prediction via Diffusion-based Data Augmentation [58.93221876843639]
This paper introduces a novel, end-to-end diffusion-based risk prediction model, named MedDiffusion.
It enhances risk prediction performance by creating synthetic patient data during training to enlarge the sample space.
It discerns hidden relationships between patient visits using a step-wise attention mechanism, enabling the model to automatically retain the most vital information for generating high-quality data.
arXiv Detail & Related papers (2023-10-04T01:36:30Z)
- MIMO: Mutual Integration of Patient Journey and Medical Ontology for Healthcare Representation Learning [49.57261599776167]
We propose an end-to-end robust Transformer-based solution, Mutual Integration of patient journey and Medical Ontology (MIMO) for healthcare representation learning and predictive analytics.
arXiv Detail & Related papers (2021-07-20T07:04:52Z)
- Combining Graph Neural Networks and Spatio-temporal Disease Models to Predict COVID-19 Cases in Germany [0.0]
Several experts have stressed the need to account for human mobility in explaining the spread of COVID-19.
Most statistical or epidemiological models cannot directly incorporate unstructured data sources, including data that may encode human mobility.
We propose a trade-off between both research directions and present a novel learning approach that combines the advantages of statistical regression and machine learning models.
arXiv Detail & Related papers (2021-01-03T16:39:00Z)
- UNITE: Uncertainty-based Health Risk Prediction Leveraging Multi-sourced Data [81.00385374948125]
We present the UNcertaInTy-based hEalth risk prediction (UNITE) model.
UNITE provides accurate disease risk prediction and uncertainty estimation leveraging multi-sourced health data.
We evaluate UNITE on real-world disease risk prediction tasks: nonalcoholic fatty liver disease (NASH) and Alzheimer's disease (AD).
UNITE achieves up to 0.841 in F1 score for AD detection and up to 0.609 in PR-AUC for NASH detection, outperforming the best state-of-the-art baseline by up to 19%.
arXiv Detail & Related papers (2020-10-22T02:28:11Z)
- A General Framework for Survival Analysis and Multi-State Modelling [70.31153478610229]
We use neural ordinary differential equations as a flexible and general method for estimating multi-state survival models.
We show that our model exhibits state-of-the-art performance on popular survival data sets and demonstrate its efficacy in a multi-state setting.
arXiv Detail & Related papers (2020-06-08T19:24:54Z)
- ISeeU2: Visually Interpretable ICU mortality prediction using deep learning and free-text medical notes [0.0]
We present a deep learning model trained on MIMIC-III that predicts mortality from raw nursing notes, together with visual explanations of word importance.
Our model reaches an ROC AUC of 0.8629, outperforming the traditional SAPS-II score and providing enhanced interpretability compared with similar deep learning approaches.
arXiv Detail & Related papers (2020-05-19T08:30:34Z)