Deployment of a Robust and Explainable Mortality Prediction Model: The
COVID-19 Pandemic and Beyond
- URL: http://arxiv.org/abs/2311.17133v1
- Date: Tue, 28 Nov 2023 18:15:53 GMT
- Authors: Jacob R. Epifano, Stephen Glass, Ravi P. Ramachandran, Sharad Patel,
Aaron J. Masino, Ghulam Rasool
- Abstract summary: This study investigated the performance, explainability, and robustness of deployed artificial intelligence (AI) models in predicting mortality during the COVID-19 pandemic and beyond.
- Score: 0.59374762912328
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study investigated the performance, explainability, and robustness of
deployed artificial intelligence (AI) models in predicting mortality during the
COVID-19 pandemic and beyond. In the first study of its kind, we found that
Bayesian Neural Networks (BNNs) and intelligent training techniques allowed our
models to maintain performance amid significant data shifts. Our results
emphasize the importance of developing robust AI models capable of matching or
surpassing clinician predictions, even under challenging conditions. Our
exploration of model explainability revealed that stochastic models generate
more diverse and personalized explanations, thereby highlighting the need for AI
models that provide detailed and individualized insights in real-world clinical
settings. Furthermore, we underscored the importance of quantifying uncertainty
in AI models, which enables clinicians to make better-informed decisions based
on reliable predictions. Our study advocates for prioritizing implementation
science in AI research for healthcare and ensuring that AI solutions are
practical, beneficial, and sustainable in real-world clinical environments. By
addressing unique challenges and complexities in healthcare settings,
researchers can develop AI models that effectively improve clinical practice
and patient outcomes.
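The abstract's central technical claim is that a BNN's weight uncertainty yields per-patient predictive uncertainty. As a minimal sketch of that idea (not the paper's actual architecture or inference method, which are unspecified here), one can sample weights from an approximate Gaussian posterior of a logistic risk model and report the mean and spread of the resulting risk estimates; all feature names and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical approximate posterior over the weights of a logistic
# mortality-risk model: an independent Gaussian (mean, std) per weight.
w_mean = np.array([0.8, -0.5, 1.2])   # e.g. age, blood pressure, lactate
w_std = np.array([0.1, 0.2, 0.15])
b_mean, b_std = -1.0, 0.1

def predict_with_uncertainty(x, n_samples=1000):
    """Monte Carlo estimate of predictive mean and std for one patient."""
    w = rng.normal(w_mean, w_std, size=(n_samples, len(w_mean)))
    b = rng.normal(b_mean, b_std, size=n_samples)
    probs = sigmoid(w @ x + b)  # one risk estimate per posterior sample
    return probs.mean(), probs.std()

x_patient = np.array([1.5, 0.3, -0.2])  # standardized features (illustrative)
mean_risk, risk_std = predict_with_uncertainty(x_patient)
```

A clinician-facing system would report both numbers: a high `risk_std` flags a prediction the model is unsure about, which is the decision-support behavior the abstract argues for.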
Related papers
- Explainable Diagnosis Prediction through Neuro-Symbolic Integration [11.842565087408449]
We use neuro-symbolic methods, specifically Logical Neural Networks (LNNs), to develop explainable models for diagnosis prediction.
Our models, particularly $M_{\text{multi-pathway}}$ and $M_{\text{comprehensive}}$, demonstrate superior performance over traditional models.
These findings highlight the potential of neuro-symbolic approaches in bridging the gap between accuracy and explainability in healthcare AI applications.
arXiv Detail & Related papers (2024-10-01T22:47:24Z) - Bayesian Kolmogorov Arnold Networks (Bayesian_KANs): A Probabilistic Approach to Enhance Accuracy and Interpretability [1.90365714903665]
This study presents a novel framework called Bayesian Kolmogorov Arnold Networks (BKANs).
BKANs combine the expressive capacity of Kolmogorov Arnold Networks with Bayesian inference.
Our method provides useful insights into prediction confidence and decision boundaries and outperforms traditional deep learning models in terms of prediction accuracy.
arXiv Detail & Related papers (2024-08-05T10:38:34Z) - Generative AI for Health Technology Assessment: Opportunities, Challenges, and Policy Considerations [12.73011921253]
This review introduces the transformative potential of generative Artificial Intelligence (AI) and foundation models, including large language models (LLMs), for health technology assessment (HTA).
We explore their applications in four critical areas: evidence synthesis, evidence generation, clinical trials, and economic modeling.
While rapidly improving, these technologies are still nascent, and continued careful evaluation of their applications to HTA is required.
arXiv Detail & Related papers (2024-07-09T09:25:27Z) - TrialBench: Multi-Modal Artificial Intelligence-Ready Clinical Trial Datasets [57.067409211231244]
This paper presents meticulously curated AI-ready datasets covering multi-modal data (e.g., drug molecule, disease code, text, categorical/numerical features) and 8 crucial prediction challenges in clinical trial design.
We provide basic validation methods for each task to ensure the datasets' usability and reliability.
We anticipate that the availability of such open-access datasets will catalyze the development of advanced AI approaches for clinical trial design.
arXiv Detail & Related papers (2024-06-30T09:13:10Z) - MedDiffusion: Boosting Health Risk Prediction via Diffusion-based Data
Augmentation [58.93221876843639]
This paper introduces a novel, end-to-end diffusion-based risk prediction model, named MedDiffusion.
It enhances risk prediction performance by creating synthetic patient data during training to enlarge the sample space.
It discerns hidden relationships between patient visits using a step-wise attention mechanism, enabling the model to automatically retain the most vital information for generating high-quality data.
arXiv Detail & Related papers (2023-10-04T01:36:30Z) - TREEMENT: Interpretable Patient-Trial Matching via Personalized Dynamic
Tree-Based Memory Network [54.332862955411656]
Clinical trials are critical for drug development but often suffer from expensive and inefficient patient recruitment.
In recent years, machine learning models have been proposed for speeding up patient recruitment via automatically matching patients with clinical trials.
We introduce a dynamic tree-based memory network model named TREEMENT to provide accurate and interpretable patient-trial matching.
arXiv Detail & Related papers (2023-07-19T12:35:09Z) - GENIE-NF-AI: Identifying Neurofibromatosis Tumors using Liquid Neural
Network (LTC) trained on AACR GENIE Datasets [0.0]
We propose an interpretable AI approach to diagnose patients with neurofibromatosis.
Our proposed approach outperformed existing models with 99.86% accuracy.
arXiv Detail & Related papers (2023-04-26T10:28:59Z) - COVID-Net Biochem: An Explainability-driven Framework to Building
Machine Learning Models for Predicting Survival and Kidney Injury of COVID-19
Patients from Clinical and Biochemistry Data [66.43957431843324]
We introduce COVID-Net Biochem, a versatile and explainable framework for constructing machine learning models.
We apply this framework to predict COVID-19 patient survival and the likelihood of developing Acute Kidney Injury during hospitalization.
arXiv Detail & Related papers (2022-04-24T07:38:37Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Improvement of a Prediction Model for Heart Failure Survival through
Explainable Artificial Intelligence [0.0]
This work presents an explainability analysis and evaluation of a prediction model for heart failure survival.
The model employs a data workflow pipeline able to select the best ensemble tree algorithm as well as the best feature selection technique.
The paper's main contribution is an explainability-driven approach to select the best prediction model for HF survival based on an accuracy-explainability balance.
arXiv Detail & Related papers (2021-08-20T09:03:26Z) - Neuro-symbolic Neurodegenerative Disease Modeling as Probabilistic
Programmed Deep Kernels [93.58854458951431]
We present a probabilistic programmed deep kernel learning approach to personalized, predictive modeling of neurodegenerative diseases.
Our analysis considers a spectrum of neural and symbolic machine learning approaches.
We run evaluations on the problem of Alzheimer's disease prediction, yielding results that surpass deep learning.
arXiv Detail & Related papers (2020-09-16T15:16:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.