Explainable AI for Malnutrition Risk Prediction from m-Health and
Clinical Data
- URL: http://arxiv.org/abs/2305.19636v1
- Date: Wed, 31 May 2023 08:07:35 GMT
- Title: Explainable AI for Malnutrition Risk Prediction from m-Health and
Clinical Data
- Authors: Flavio Di Martino, Franca Delmastro, Cristina Dolciotti
- Abstract summary: This paper presents a novel AI framework for early and explainable malnutrition risk detection based on heterogeneous m-health data.
We performed an extensive model evaluation including both subject-independent and personalised predictions.
We also investigated several benchmark XAI methods to extract global model explanations.
- Score: 3.093890460224435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Malnutrition is a serious and prevalent health problem in the older
population, and especially in hospitalised or institutionalised subjects.
Accurate and early risk detection is essential for malnutrition management and
prevention. M-health services empowered with Artificial Intelligence (AI) may
lead to important improvements in terms of a more automatic, objective, and
continuous monitoring and assessment. Moreover, the latest Explainable AI (XAI)
methodologies may make AI decisions interpretable and trustworthy for end
users. This paper presents a novel AI framework for early and explainable
malnutrition risk detection based on heterogeneous m-health data. We performed
an extensive model evaluation including both subject-independent and
personalised predictions, and the obtained results indicate Random Forest (RF)
and Gradient Boosting as the best performing classifiers, especially when
incorporating body composition assessment data. We also investigated several
benchmark XAI methods to extract global model explanations. Model-specific
explanation consistency assessment indicates that each selected model
privileges similar subsets of the most relevant predictors, with the highest
agreement shown between SHapley Additive ExPlanations (SHAP) and feature
permutation method. Furthermore, we performed a preliminary clinical validation
to verify that the learned feature-output trends are compliant with the current
evidence-based assessment.
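The abstract's explanation-consistency check (comparing feature rankings produced by different XAI methods on the same model) can be sketched as follows. This is a minimal, hypothetical illustration on synthetic data, not the paper's actual pipeline: it uses scikit-learn's permutation importance together with the Random Forest's impurity-based importances as a stand-in for SHAP (true SHAP values would require the `shap` package), and measures agreement between the two rankings with a Spearman rank correlation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for heterogeneous m-health features.
X, y = make_classification(n_samples=300, n_features=8, n_informative=4,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global explanation #1: feature permutation importance.
perm = permutation_importance(rf, X, y, n_repeats=10, random_state=0)

def spearman(a, b):
    """Spearman rank correlation via Pearson correlation of ranks."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Global explanation #2: the model's own impurity-based importances
# (a proxy here; the paper compares against SHAP values).
agreement = spearman(rf.feature_importances_, perm.importances_mean)
print(f"Rank agreement between the two importance rankings: {agreement:.2f}")
```

A high rank correlation indicates that both explanation methods privilege similar subsets of predictors, which is the kind of consistency the paper reports between SHAP and feature permutation.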
Related papers
- Methodological Explainability Evaluation of an Interpretable Deep Learning Model for Post-Hepatectomy Liver Failure Prediction Incorporating Counterfactual Explanations and Layerwise Relevance Propagation: A Prospective In Silico Trial [13.171582596404313]
We developed a variational autoencoder-multilayer perceptron (VAE-MLP) model for preoperative PHLF prediction.
This model integrated counterfactuals and layerwise relevance propagation (LRP) to provide insights into its decision-making mechanism.
Results from the three-track in silico clinical trial showed that clinicians' prediction accuracy and confidence increased when AI explanations were provided.
arXiv Detail & Related papers (2024-08-07T13:47:32Z)
- Unmasking Dementia Detection by Masking Input Gradients: A JSM Approach to Model Interpretability and Precision [1.5501208213584152]
We introduce an interpretable, multimodal model for Alzheimer's disease (AD) classification over its multi-stage progression, incorporating Jacobian Saliency Map (JSM) as a modality-agnostic tool.
Our evaluation, including an ablation study, demonstrates the efficacy of JSM for model debugging and interpretation, while also significantly enhancing model accuracy.
arXiv Detail & Related papers (2024-02-25T06:53:35Z)
- Deployment of a Robust and Explainable Mortality Prediction Model: The COVID-19 Pandemic and Beyond [0.59374762912328]
This study investigated the performance, explainability, and robustness of deployed artificial intelligence (AI) models in predicting mortality during the COVID-19 pandemic and beyond.
arXiv Detail & Related papers (2023-11-28T18:15:53Z)
- New Epochs in AI Supervision: Design and Implementation of an Autonomous Radiology AI Monitoring System [5.50085484902146]
We introduce novel methods for monitoring the performance of radiology AI classification models in practice.
We propose two metrics - predictive divergence and temporal stability - to be used for preemptive alerts of AI performance changes.
arXiv Detail & Related papers (2023-11-24T06:29:04Z)
- MedDiffusion: Boosting Health Risk Prediction via Diffusion-based Data Augmentation [58.93221876843639]
This paper introduces a novel, end-to-end diffusion-based risk prediction model, named MedDiffusion.
It enhances risk prediction performance by creating synthetic patient data during training to enlarge the sample space.
It discerns hidden relationships between patient visits using a step-wise attention mechanism, enabling the model to automatically retain the most vital information for generating high-quality data.
arXiv Detail & Related papers (2023-10-04T01:36:30Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Improvement of a Prediction Model for Heart Failure Survival through Explainable Artificial Intelligence [0.0]
This work presents an explainability analysis and evaluation of a prediction model for heart failure survival.
The model employs a data workflow pipeline able to select the best ensemble tree algorithm as well as the best feature selection technique.
The paper's main contribution is an explainability-driven approach to select the best prediction model for HF survival based on an accuracy-explainability balance.
arXiv Detail & Related papers (2021-08-20T09:03:26Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
- UNITE: Uncertainty-based Health Risk Prediction Leveraging Multi-sourced Data [81.00385374948125]
We present the UNcertaInTy-based hEalth risk prediction (UNITE) model.
UNITE provides accurate disease risk prediction and uncertainty estimation leveraging multi-sourced health data.
We evaluate UNITE on two real-world disease risk prediction tasks: nonalcoholic steatohepatitis (NASH) and Alzheimer's disease (AD).
UNITE achieves up to 0.841 in F1 score for AD detection and up to 0.609 in PR-AUC for NASH detection, outperforming the best state-of-the-art baseline by up to 19%.
arXiv Detail & Related papers (2020-10-22T02:28:11Z)
- Hemogram Data as a Tool for Decision-making in COVID-19 Management: Applications to Resource Scarcity Scenarios [62.997667081978825]
The COVID-19 pandemic has challenged emergency response systems worldwide, with widespread reports of essential-service breakdowns and the collapse of health care structures.
This work describes a machine learning model derived from hemogram exam data collected from symptomatic patients.
The proposed models can predict COVID-19 qRT-PCR results in symptomatic individuals with high accuracy, sensitivity, and specificity.
arXiv Detail & Related papers (2020-05-10T01:45:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.