Using Interpretable Machine Learning to Predict Maternal and Fetal
Outcomes
- URL: http://arxiv.org/abs/2207.05322v1
- Date: Tue, 12 Jul 2022 05:32:26 GMT
- Title: Using Interpretable Machine Learning to Predict Maternal and Fetal
Outcomes
- Authors: Tomas M. Bosschieter, Zifei Xu, Hui Lan, Benjamin J. Lengerich, Harsha
Nori, Kristin Sitcov, Vivienne Souter, Rich Caruana
- Abstract summary: We identify and study the most important risk factors using the Explainable Boosting Machine (EBM), a glass-box model, in order to gain intelligibility.
While the interpretability of EBMs reveals surprising insights into the features contributing to risk, our experiments show that EBMs match the accuracy of black-box ML methods.
- Score: 9.705885698264005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most pregnancies and births result in a good outcome, but complications are
not uncommon and when they do occur, they can be associated with serious
implications for mothers and babies. Predictive modeling has the potential to
improve outcomes through better understanding of risk factors, heightened
surveillance, and more timely and appropriate interventions, thereby helping
obstetricians deliver better care. For three types of complications, we identify
and study the most important risk factors using the Explainable Boosting Machine
(EBM), a glass-box model, in order to gain intelligibility: (i) Severe Maternal
Morbidity (SMM), (ii) shoulder dystocia, and (iii) preterm preeclampsia. While
the interpretability of EBMs reveals surprising insights into the features
contributing to risk, our experiments show that EBMs match the accuracy of
black-box ML methods such as deep neural nets and random forests.
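The open-source InterpretML library provides a widely used EBM implementation. Below is a minimal sketch of how such a model could be fit and its per-term importances listed; the feature names, synthetic data, and labels are hypothetical placeholders rather than the study's cohort, and the attribute names (term_names_, term_importances) follow recent InterpretML releases.

```python
# Minimal sketch: fit an EBM on placeholder data and list per-term importances.
# Feature names and labels are illustrative assumptions, not the paper's dataset.
import numpy as np
import pandas as pd
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "maternal_age": rng.integers(18, 45, n),
    "pre_pregnancy_bmi": rng.normal(27, 5, n),
    "gestational_age_weeks": rng.normal(39, 2, n),
    "prior_cesarean": rng.integers(0, 2, n),
})
y = rng.integers(0, 2, n)  # stand-in binary outcome, e.g. shoulder dystocia

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X, y)

# Each term's learned contribution can be inspected directly; this is how a
# glass-box model surfaces the most important risk factors.
for name, importance in sorted(
        zip(ebm.term_names_, ebm.term_importances()),
        key=lambda t: -t[1]):
    print(f"{name}: {importance:.3f}")
```

The per-term shape functions (available through ebm.explain_global()) can likewise be plotted to see how predicted risk varies across the range of each factor.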
Related papers
- Use of What-if Scenarios to Help Explain Artificial Intelligence Models for Neonatal Health [6.102406188211489]
Early detection of intrapartum risk enables interventions to potentially prevent or mitigate adverse labor outcomes such as cerebral palsy.
We propose "Artificial Intelligence (AI) for Modeling and Explaining Neonatal Health" (AIMEN)
A deep learning framework that not only predicts adverse labor outcomes from maternal, fetal, obstetrical, and intrapartum risk factors but also provides the model's reasoning behind the predictions made.
arXiv Detail & Related papers (2024-10-12T20:21:00Z) - Reasoning-Enhanced Healthcare Predictions with Knowledge Graph Community Retrieval [61.70489848327436]
KARE is a novel framework that integrates knowledge graph (KG) community-level retrieval with large language model (LLM) reasoning.
Extensive experiments demonstrate that KARE outperforms leading models by up to 10.8-15.0% on MIMIC-III and 12.6-12.7% on MIMIC-IV for mortality and readmission predictions.
arXiv Detail & Related papers (2024-10-06T18:46:28Z) - XAI for In-hospital Mortality Prediction via Multimodal ICU Data [57.73357047856416]
We propose an efficient, explainable AI solution for predicting in-hospital mortality via multimodal ICU data.
We employ multimodal learning in our framework, which can receive heterogeneous inputs from clinical data and make decisions.
Our framework can be easily transferred to other clinical tasks, which facilitates the discovery of crucial factors in healthcare research.
arXiv Detail & Related papers (2023-12-29T14:28:04Z) - Interpretable Survival Analysis for Heart Failure Risk Prediction [50.64739292687567]
We propose a novel survival analysis pipeline that is both interpretable and competitive with state-of-the-art survival models.
Our pipeline achieves state-of-the-art performance and provides interesting and novel insights about risk factors for heart failure.
arXiv Detail & Related papers (2023-10-24T02:56:05Z) - Interpretable Predictive Models to Understand Risk Factors for Maternal
and Fetal Outcomes [17.457683367235536]
We identify and study the most important risk factors for four types of pregnancy complications: severe maternal morbidity, shoulder dystocia, preterm preeclampsia, and antepartum stillbirth.
We use an Explainable Boosting Machine (EBM), a high-accuracy glass-box learning method, for prediction and identification of important risk factors.
arXiv Detail & Related papers (2023-10-16T09:17:10Z) - Closing the Gap in High-Risk Pregnancy Care Using Machine Learning and Human-AI Collaboration [8.36613277875556]
High-risk pregnancy is a pregnancy complicated by factors that can adversely affect the outcomes of the mother or the infant.
This work presents the implementation of a real-world ML-based system to assist care managers in identifying pregnant patients at risk of complications.
arXiv Detail & Related papers (2023-05-26T21:08:49Z) - Predicting Adverse Neonatal Outcomes for Preterm Neonates with
Multi-Task Learning [51.487856868285995]
We first analyze the correlations between three adverse neonatal outcomes and then formulate the diagnosis of multiple neonatal outcomes as a multi-task learning (MTL) problem.
In particular, the MTL framework contains shared hidden layers and multiple task-specific branches (see the generic sketch after this list).
arXiv Detail & Related papers (2023-03-28T00:44:06Z) - NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical
Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z) - Three Applications of Conformal Prediction for Rating Breast Density in
Mammography [5.634287524779709]
Assessing mammographic breast density is clinically important because denser breasts carry higher cancer risk and are more likely to occlude tumors.
There has been increased interest in the development of deep learning methods for mammographic breast density assessment.
Despite deep learning having demonstrated impressive performance in several prediction tasks for applications in mammography, clinical deployment of deep learning systems is still relatively rare.
arXiv Detail & Related papers (2022-06-23T23:03:24Z) - Simultaneous Estimation of X-ray Back-Scatter and Forward-Scatter using
Multi-Task Learning [59.17383024536595]
Back-scatter significantly contributes to patient (skin) dose during complicated interventions.
Forward-scattered radiation reduces contrast in projection images and introduces artifacts in 3-D reconstructions.
We propose a novel approach combining conventional techniques with learning-based methods to simultaneously estimate the back-scatter and the forward-scatter reaching the detector.
arXiv Detail & Related papers (2020-07-08T10:47:37Z)