A Kalman Filter Based Framework for Monitoring the Performance of
In-Hospital Mortality Prediction Models Over Time
- URL: http://arxiv.org/abs/2402.06812v1
- Date: Fri, 9 Feb 2024 22:27:29 GMT
- Authors: Jiacheng Liu, Lisa Kirkland, Jaideep Srivastava
- Abstract summary: We propose a Kalman filter based framework with extrapolated variance adjusted for the total number of samples and the number of positive samples during different time periods.
Our prediction model is not significantly affected by the evolution of the disease, improved treatments and changes in hospital operational plans.
- Score: 3.5508427067904864
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Unlike in a clinical trial, where researchers can determine the
minimum number of positive and negative samples required, or in a machine
learning study, where the size and class distribution of the validation set
are static and known, a real-world scenario offers little control over the
size and distribution of incoming patients. As a result, when measured during
different time periods, evaluation metrics like the Area Under the Receiver
Operating Characteristic Curve (AUCROC) and the Area Under the Precision-Recall
Curve (AUCPR) may not be directly comparable. Therefore, in this study, for
binary classifiers running over a long time period, we propose adjusting these
performance metrics for sample size and class distribution, so that a fair
comparison can be made between two time periods. Note that the number of
samples and the class distribution, namely the ratio of positive samples, are
two factors that affect the variance of AUCROC. To better estimate the mean of
the performance metrics and understand how performance changes over time, we
propose a Kalman filter based framework with extrapolated variance adjusted
for the total number of samples and the number of positive samples during
different time periods. The efficacy of this method is demonstrated first on a
synthetic dataset and then by retrospective application to a 2-day-ahead
in-hospital mortality prediction model for COVID-19 patients during 2021 and
2022. Further, we conclude that our prediction model is not significantly
affected by the evolution of the disease, improved treatments, and changes in
hospital operational plans.
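As a rough illustration of the idea (a sketch, not the authors' exact formulation), the snippet below tracks AUCROC with a one-dimensional Kalman filter whose observation variance is adjusted each period via the Hanley-McNeil approximation, which depends on the counts of positive and negative samples. The process noise `q`, initial state, and the monthly numbers are all hypothetical.

```python
def auc_variance(auc, n_pos, n_neg):
    """Hanley-McNeil (1982) approximation of Var(AUCROC):
    shrinks as the counts of positive and negative samples grow."""
    q1 = auc / (2.0 - auc)
    q2 = 2.0 * auc ** 2 / (1.0 + auc)
    return (auc * (1.0 - auc)
            + (n_pos - 1) * (q1 - auc ** 2)
            + (n_neg - 1) * (q2 - auc ** 2)) / (n_pos * n_neg)

def kalman_step(x, p, z, r, q=1e-4):
    """One predict/update cycle of a 1-D Kalman filter.
    x, p: current mean/variance of the latent 'true' AUCROC
    z, r: observed AUCROC this period and its sample-size-adjusted variance
    q:    process noise allowing the true performance to drift over time."""
    p_pred = p + q                 # predict: performance assumed locally constant
    k = p_pred / (p_pred + r)      # Kalman gain: trust low-variance periods more
    x_new = x + k * (z - x)        # pull the mean toward the observation
    p_new = (1.0 - k) * p_pred     # shrink the posterior variance
    return x_new, p_new

# monthly AUCROC observations with very different cohort sizes (made-up numbers)
x, p = 0.80, 0.01
for z, n_pos, n_neg in [(0.82, 40, 400), (0.70, 8, 90), (0.81, 55, 600)]:
    r = auc_variance(z, n_pos, n_neg)  # sparse months get a large r, hence low weight
    x, p = kalman_step(x, p, z, r)
```

A sharp one-month drop measured on a handful of positive cases then moves the filtered estimate only slightly, which is the kind of fair cross-period comparison the abstract describes.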
Related papers
- Difference-in-Differences with Time-varying Continuous Treatments using Double/Debiased Machine Learning [0.0]
We propose a difference-in-differences (DiD) method for continuous treatment and multiple time periods.
Our framework assesses the average treatment effect on the treated (ATET) when comparing two non-zero treatment doses.
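The paper generalizes DiD to continuous doses and multiple periods; the canonical two-group, two-period contrast it builds on can be sketched as follows (the numbers are made up for illustration):

```python
def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    # ATET under the parallel-trends assumption:
    # the treated group's change minus the control group's change
    return (y_treat_post - y_treat_pre) - (y_ctrl_post - y_ctrl_pre)

effect = did_estimate(10.0, 14.0, 9.0, 11.0)  # (14-10) - (11-9) = 2.0
```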
arXiv Detail & Related papers (2024-10-28T15:10:43Z)
- Explain Variance of Prediction in Variational Time Series Models for Clinical Deterioration Prediction [4.714591319660812]
We propose a novel view of clinical variable measurement frequency from a predictive modeling perspective.
The prediction variance is estimated by sampling the conditional hidden space in variational models and can be approximated deterministically by the delta method.
We tested our ideas on a public ICU dataset with a deterioration prediction task and studied the relation between variance SHAP and measurement time intervals.
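The delta method mentioned in this summary is the standard first-order device Var[f(X)] ≈ f'(μ)² · Var[X]; a minimal sketch (the choice of a logistic sigmoid for f is illustrative, not from the paper):

```python
import math

def delta_method_var(fprime_at_mu, var_x):
    # first-order delta method: Var[f(X)] ~= f'(mu)^2 * Var[X]
    return fprime_at_mu ** 2 * var_x

# illustrative f: the logistic sigmoid, whose derivative is s(mu) * (1 - s(mu))
mu, var_x = 0.0, 0.25
s = 1.0 / (1.0 + math.exp(-mu))                      # sigmoid(0) = 0.5
approx_var = delta_method_var(s * (1.0 - s), var_x)  # 0.25^2 * 0.25 = 0.015625
```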
arXiv Detail & Related papers (2024-02-09T22:14:40Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
- CDSM -- Casual Inference using Deep Bayesian Dynamic Survival Models [3.9169188005935927]
We have developed a causal dynamic survival model (CDSM) that uses the potential outcomes framework with Bayesian recurrent sub-networks to estimate the difference in survival curves.
Using simulated survival datasets, CDSM has shown good causal effect estimation performance across scenarios of sample dimension, event rate, confounding, and overlap.
arXiv Detail & Related papers (2021-01-26T09:15:49Z)
- Bayesian prognostic covariate adjustment [59.75318183140857]
Historical data about disease outcomes can be integrated into the analysis of clinical trials in many ways.
We build on existing literature that uses prognostic scores from a predictive model to increase the efficiency of treatment effect estimates.
arXiv Detail & Related papers (2020-12-24T05:19:03Z)
- Increasing the efficiency of randomized trial estimates via linear adjustment for a prognostic score [59.75318183140857]
Estimating causal effects from randomized experiments is central to clinical research.
Most methods for historical borrowing achieve reductions in variance by sacrificing strict type-I error rate control.
arXiv Detail & Related papers (2020-12-17T21:10:10Z)
- STELAR: Spatio-temporal Tensor Factorization with Latent Epidemiological Regularization [76.57716281104938]
We develop a tensor method to predict the evolution of epidemic trends for many regions simultaneously.
STELAR enables long-term prediction by incorporating latent temporal regularization through a system of discrete-time difference equations.
We conduct experiments using both county- and state-level COVID-19 data and show that our model can identify interesting latent patterns of the epidemic.
arXiv Detail & Related papers (2020-12-08T21:21:47Z)
- Tolerance and Prediction Intervals for Non-normal Models [0.0]
A prediction interval covers a future observation from a random process in repeated sampling.
A tolerance interval covers a population percentile in repeated sampling and is often based on a pivotal quantity.
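For the normal case, the textbook prediction interval for a single future observation widens the usual interval for the mean by an extra sqrt(1 + 1/n) factor; a minimal sketch (using a z quantile instead of the more correct t quantile, for brevity):

```python
import math
from statistics import NormalDist, mean, stdev

def prediction_interval(sample, conf=0.95):
    # xbar +/- z * s * sqrt(1 + 1/n): covers ONE future draw, not the mean
    n = len(sample)
    xbar, s = mean(sample), stdev(sample)
    z = NormalDist().inv_cdf(0.5 + conf / 2.0)
    half_width = z * s * math.sqrt(1.0 + 1.0 / n)
    return xbar - half_width, xbar + half_width

lo, hi = prediction_interval([10.0, 12.0, 11.0, 13.0, 9.0, 11.0])
```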
arXiv Detail & Related papers (2020-11-23T17:48:09Z)
- Tracking disease outbreaks from sparse data with Bayesian inference [55.82986443159948]
The COVID-19 pandemic provides new motivation for estimating the empirical rate of transmission during an outbreak.
Standard methods struggle to accommodate the partial observability and sparse data common at finer scales.
We propose a Bayesian framework which accommodates partial observability in a principled manner.
arXiv Detail & Related papers (2020-09-12T20:37:33Z)
- Joint Prediction and Time Estimation of COVID-19 Developing Severe Symptoms using Chest CT Scan [49.209225484926634]
We propose a joint classification and regression method to determine whether a patient will develop severe symptoms at a later time.
To do this, the proposed method takes into account 1) a weight for each sample to reduce the outliers' influence and explore the problem of imbalanced classification.
Our proposed method yields 76.97% accuracy in predicting severe cases, a correlation coefficient of 0.524, and a 0.55-day difference for the converted time.
arXiv Detail & Related papers (2020-05-07T12:16:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.